venue (stringclasses, 2 values) | paper_content (stringlengths, 7.54k-83.7k) | prompt (stringlengths, 161-2.5k) | format (stringclasses, 5 values) | review (stringlengths, 293-9.84k) |
---|---|---|---|---|
ICLR | Title
MetaPix: Few-Shot Video Retargeting
Abstract
We address the task of retargeting human actions from one video to another. We consider the challenging setting where only a few frames of the target are available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task.
1 INTRODUCTION
One of the hallmarks of human intelligence is the ability to imagine. For example, given an image of a never-before-seen person, one can easily imagine them performing different actions. To do so, we make use of years of experience watching humans act and interact with the world. We implicitly encode the rules of physical transformations of humans, objects, clothing and so on. Crucially, we effortlessly adapt or retarget those universal rules to a specific human and environment - a child on a playground will likely move differently than an adult walking into work. Our goal in this work is to develop models that similarly learn to generate human motions by specializing universal knowledge to a particular target human and target environment, given only a few samples of the target.
It is attractive to tackle such video generation tasks using the framework of generative adversarial networks (GANs). Past work has cast the core computational problem as one of conditional image generation, where input source poses (automatically extracted with an off-the-shelf pose estimator) are transcoded into image frames (Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017). However, it is notoriously challenging to build generative models that are capable of synthesizing diverse, in-the-wild imagery. Notable exceptions make use of massively-large networks trained on large-scale compute infrastructure (Brock et al., 2019). However, modestly-sized generative networks perform quite well at synthesis of targeted domains (such as faces (Bansal et al., 2018) or facades (Isola et al., 2017)). A particularly successful approach to pose-to-image generation is the training of specialized – or personalized – models for particular scenes. These often require large-scale target datasets, such as 20 minutes of footage in a target lab setting (Chan et al., 2018).
The above approaches make use of personalization as an implicit but crucial ingredient, by on-the-fly training of a generative model tuned to the particular target domain of interest. Often, personalization is operationalized by fine-tuning a generic model on the specific target frames of interest. Our key insight is recasting personalization as an explicit component of a video-retargeting engine, allowing us to make use of meta-learning to learn how best to fine-tune (or personalize) a generic model to a particular target domain. We demonstrate that (meta)learning-to-fine-tune is particularly effective in the few-shot regime, where few target frames are available. From a technical perspective, one of our contributions is extending meta-learning to GANs, which is nontrivial because both a generator and discriminator need to be adversarially fine-tuned.
To that end, we propose MetaPix, a novel approach to personalization for video retargeting. Our formulation treats personalization as a few-shot learning problem, where the task is to adapt a generic generative model of human actions to a specific person given a few samples of their appearance. Our formulation is agnostic to the actual generative model used, and is compatible with both pose-conditioned transfer (Balakrishnan et al., 2018) and generative (Chan et al., 2018) approaches. Taking inspiration from the recent successes of meta-learning approaches for few-shot tasks (Nichol et al., 2018; Finn et al., 2017), we propose a novel formulation by adapting the popular first-order meta-learning algorithm Reptile (Nichol et al., 2018) to jointly learn initial weights for both the generator and discriminator. Hence, our model is optimized for efficient adaptation (personalization) given only a few samples and a limited computational budget, and obtains stronger performance compared to a model not optimized in this way. Interestingly, we find this personalized model naturally enforces strong temporal coherence in the generated frames, even though it is not explicitly optimized for that task.
2 RELATED WORK
Deep generative modeling. There has been a growing interest in using deep networks for generative modeling of visual data, particularly images. Popular techniques include Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). In particular, GAN-based techniques have shown strong performance for various tasks such as conditional image generation (Brock et al., 2019), image-to-image translation (Isola et al., 2017; Wang et al., 2018b; Zhu et al., 2017; Balakrishnan et al., 2018), unsupervised translation (Zhu et al., 2017) and domain adaptation (Hoffman et al., 2018). More recently, these techniques have been extended to video tasks, such as generation (Vondrick et al., 2016), future prediction (Finn et al., 2016) and translation (Bansal et al., 2018; Wang et al., 2018a). Our work explores generative modeling from a few samples, with our main focus being the task of video translation. There has been some prior work in this direction (Zakharov et al., 2019), though it is largely limited to faces and portrait images.
Motion transfer and video retargeting. This refers to the task of driving a video of a person or a cartoon character given another video (Gleicher, 1998). While there exist some unsupervised techniques (Bansal et al., 2018) to do so, most successful approaches for articulated bodies use pose as an intermediate supervision. Recently, two broad categories of approaches have been employed for this task: 1) Learning to transform an image into another, given pose as input, either in 2D (Zhou et al., 2019; Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017) or 3D (Liu et al., 2018; Neverova et al., 2018; Walker et al., 2017); and 2) Learning a model to directly generate images given a pose as input (Pose2Im) (Chan et al., 2018). The former approaches tend to be more sophisticated, separately generating foreground and background pixels, and tend to perform slightly better than the latter. However, they typically learn a generic model across datasets that can transfer from a single frame, whereas the latter can learn a more holistic reconstruction by learning a specific model for a video. Our approach is complementary to such transfer approaches, and can be applied on top of either, as we discuss in Section 3.
Few-shot learning. Low-shot learning paradigms attempt to learn a model using a very small amount of training data (Thrun, 1996), typically for visual recognition tasks. Classical approaches build generative models that share priors across the various categories (Fei-Fei et al., 2006; Salakhutdinov et al., 2012). Another category of approaches attempts to learn feature representations invariant to intra-class variations by using hallucinated data (Hariharan & Girshick, 2017; Wang et al., 2018c) or specialized training procedures/loss functions (Wang & Hebert, 2016a; Bart & Ullman, 2005). More recently, it has been framed as a ‘learning-to-learn’ or meta-learning problem. The key idea is to directly optimize the model for the eventual few-shot adaptation task, where the model is finetuned using a few examples (Finn et al., 2017). Alternatively, it has also been explored in the form of directly predicting classifier weights (Bertinetto et al., 2016; Wang & Hebert, 2016b; Wang et al., 2017; Misra et al., 2017).
Meta Learning. The goal of meta-learning is to learn models that are good at learning, similar to how humans are able to quickly and efficiently learn to do a new task. Many different approaches have been explored to that end. One direction involves learning weights through recurrent networks like LSTMs (Hochreiter et al., 2001; Santoro et al., 2016; Duan et al., 2016). More commonly, meta-learning has been used as a way to learn an initialization for a network that is finetuned at test time on a new task. A popular approach in this direction is MAML (Finn et al., 2017), where the parameters are directly optimized for the test-time performance of the task it needs to adapt to. This is performed by backpropagating through the finetuning process, which requires computing second-order gradients. They and others (Andrychowicz et al., 2016) have also proposed first-order methods like FOMAML that forego the need to compute second-order gradients, making them more efficient with an empirically small drop in performance. However, most of these works still require SGD to be used as the task optimizer. A recently proposed meta-learning algorithm, Reptile (Nichol et al., 2018), forgoes that constraint by proposing a much simpler first-order meta-learning algorithm that is compatible with any black-box optimizer.
3 OUR APPROACH
We now describe MetaPix in detail. To reiterate, our goal is to learn a generic model of human motion, parameterized by θ, that can quickly and efficiently be personalized for a specific person. We define speed and efficiency requirements in terms of two parameters: computation/iterations (T) and the number of samples required for personalization (K), respectively. We now describe the base architecture, the MetaPix training setup, and the implementation details.
Base retargeting architecture. We build upon popular video retargeting architectures. Notably, there are two common approaches in the literature: 1) Learning a transformation from one image to another, conditioned on the pose (Zhou et al., 2019; Balakrishnan et al., 2018), and 2) Learning a mapping from pose to RGB (Pose2Im), as in (Chan et al., 2018). Both obtain strong performance and are amenable to the speed and efficiency constraints we are interested in. For example, in the K-shot setting (i.e., learning a model using K frames), one can train the Pose2Im mapping directly on the K pose-image pairs, or use the C(K, 2) pairs drawn from the K frames to learn a transformation function from one of the K images to another.
Algorithm 1 Meta-learning for video re-targeting for the Pose2Im setup
  Initialize θD, θG from pretrained weights
  for iteration = 1, 2, ... do
    Sample K pose-image pairs from the same shot randomly
    Compute θ̃D, θ̃G = Pix2PixHD_T^K(θD, θG), i.e. fine-tune on the K images for T iterations
    Update θD ← θD + ε(θ̃D − θD)
    Update θG ← θG + ε(θ̃G − θG)
  end for
Both are also compatible with our MetaPix optimization, discussed next.
Pose2Im (Chan et al., 2018) approaches essentially build upon image-to-image translation methods (Isola et al., 2017; Wang et al., 2018b), where the input is a rendering of the body joints and the output is an RGB image. The model consists of an encoder-decoder style generator G. It is trained using a combination of perceptual reconstruction losses (Johnson et al., 2016), implemented as an L1 penalty over VGG (Simonyan & Zisserman, 2015) features, and an adversarial loss from a separate discriminator network D trained to differentiate generated images from real images. The reconstruction loss forces the output to be close to the ground truth, potentially leading to blurry outputs. Adding the discriminator helps fix that, as it forces the output onto the manifold of real images. Given its strong performance, we use Pix2PixHD (Wang et al., 2018b) as our base architecture for Pose2Im. For brevity, we skip a complete description of the model architecture and refer the reader to (Wang et al., 2018b) for more details.
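As a concrete illustration of this loss combination, the sketch below (not the authors' released code) assembles a Pose2Im-style generator objective from an adversarial term and an L1 perceptual loss over VGG-19 features. The tiny stand-in generator and discriminator, the 15-channel pose-heatmap input, and the equal weighting are illustrative assumptions, and the feature-matching term used by Pix2PixHD is omitted for brevity.

```python
# Minimal sketch of a Pose2Im-style generator loss: adversarial term plus an
# L1 perceptual loss over VGG-19 features. The stand-in networks below are
# placeholders, not the actual Pix2PixHD generator/discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

# Downloads ImageNet weights on first use; input normalization is omitted for brevity.
vgg_features = vgg19(pretrained=True).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    # L1 penalty between VGG feature maps of generated and ground-truth frames.
    return F.l1_loss(vgg_features(fake), vgg_features(real))

def generator_loss(disc, fake, real, lambda_vgg=1.0):
    # Non-saturating GAN loss: push D(fake) toward the "real" label.
    pred_fake = disc(fake)
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_vgg * perceptual_loss(fake, real)

# Toy usage with placeholder networks and random data (64x64 instead of 512x512).
G = nn.Sequential(nn.Conv2d(15, 3, 3, padding=1))                  # pose heatmap -> RGB
D = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2), nn.Flatten(), nn.Linear(31 * 31, 1))
pose = torch.randn(2, 15, 64, 64)
real = torch.randn(2, 3, 64, 64)
loss = generator_loss(D, G(pose), real)
loss.backward()
```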
Pose Transfer (Balakrishnan et al., 2018; Zhou et al., 2019), on the other hand, takes a source image of a person and a target pose, and generates an image of the source person in that target pose. These approaches typically segment the limbs, transform their positions according to the target pose, and generate the target image by combining the transformed limbs and the segmented background using a generative network such as a U-Net (Ronneberger et al., 2015). These approaches can leverage learning to move pixels instead of having to generate appearance and background from a learned representation. We utilize the Posewarp method (Balakrishnan et al., 2018) as our base Pose Transfer architecture due to its publicly available implementation.
MetaPix. MetaPix builds upon the base retargeting architecture by optimizing it for few-shot and fast adaptation for personalization. We achieve this by taking inspiration from the literature on few-shot learning, where meta-learning has shown promising results. We use a recently introduced first-order meta-learning technique, Reptile (Nichol et al., 2018). Compared to the more popular technique, MAML (Finn et al., 2017), it is more efficient as it does not compute second-order gradients, and it works with arbitrary optimizers as it does not need to backpropagate through the optimization process. Given that GAN architectures are hard to optimize, Reptile suits our purposes because of its ability to use Adam (Kingma & Ba, 2015), the default optimizer for Pix2PixHD, as the task optimizer. Figure 2 illustrates the high-level idea of our approach, which we describe in detail next.
We start with either a Pose2Im or a Pose Transfer trained base model. We then finetune this model as described in Algorithm 1. Note that Pix2PixHD is based on a GAN, so it has two sets of network weights to be optimized, the generator (θG) and the discriminator (θD). In each meta-iteration, we sample a task: in our case, a set of K frames from a new video to personalize to. We then finetune the current model parameters to that video over T iterations, and update the model parameters in the direction of the personalized parameters using a meta learning rate ε. We optimize both θD and θG jointly at each step. Note that Posewarp employs a more complicated two-stage training procedure, and we meta-learn only the first stage (which has no discriminator) for simplicity.
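A compact sketch of Algorithm 1 is given below. It is not the MetaPix implementation: the generator and discriminator are tiny stand-ins for Pix2PixHD, the inner fine-tuning is a simplified adversarial + L1 loop, and the task sampler returns random tensors in place of real (pose, frame) pairs; only the outer Reptile-style update θ ← θ + ε(θ̃ − θ) follows the rule described above.

```python
# Sketch of the MetaPix outer loop: fine-tune a copy of the generator and
# discriminator on K frames of one video for T iterations, then move the
# meta-parameters toward the personalized weights with meta step size eps.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(15, 3, 3, padding=1))                                   # stand-in generator
D = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2), nn.Flatten(), nn.Linear(8 * 31 * 31, 1))  # stand-in discriminator

def sample_task(K=5):
    # Placeholder for "sample K pose-image pairs from the same shot".
    return torch.randn(K, 15, 64, 64), torch.randn(K, 3, 64, 64)

def finetune(G, D, poses, frames, T=20, lr=2e-4):
    # Placeholder for T iterations of Pix2PixHD-style adversarial fine-tuning.
    g, d = copy.deepcopy(G), copy.deepcopy(D)
    opt_g = torch.optim.Adam(g.parameters(), lr=lr)
    opt_d = torch.optim.Adam(d.parameters(), lr=lr)
    for _ in range(T):
        fake = g(poses)
        pred_real, pred_fake = d(frames), d(fake.detach())
        d_loss = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real)) + \
                 F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        pred_fake = d(fake)
        g_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake)) + \
                 F.l1_loss(fake, frames)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return g, d

eps = 1.0  # meta learning rate (linearly decayed in the paper)
for meta_iter in range(300):
    poses, frames = sample_task(K=5)
    g_tilde, d_tilde = finetune(G, D, poses, frames, T=20)
    with torch.no_grad():  # Reptile update: theta <- theta + eps * (theta_tilde - theta)
        for p, p_t in zip(G.parameters(), g_tilde.parameters()):
            p.add_(eps * (p_t - p))
        for p, p_t in zip(D.parameters(), d_tilde.parameters()):
            p.add_(eps * (p_t - p))
```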
Implementation Details. We implement MetaPix for the Pose2Im base model by building upon a public Pix2PixHD implementation1 in PyTorch, and perform all experiments on a node with 4 TITAN-X or GTX 1080Ti GPUs. We follow the hyperparameter setup proposed in (Wang et al., 2018b). We represent the pose using a multi-channel heatmap image, and the input and output are 512 × 512 px RGB
1https://github.com/NVIDIA/pix2pixHD/
images. The generator consists of 16 convolutional and deconvolutional layers, and is trained with an equally weighted combination of GAN, feature matching, and VGG losses. Initially, we pretrain the model on a large corpus of videos to learn a generic Pose2Im model as described in Section 4. During this pretraining stage, the model is trained on all of the training frames for 10 epochs using a learning rate of 0.0002 and a batch size of 8 distributed over the 4 GPUs. We experimented with multiple learning rates, including 0.2, 0.02 and 0.002; however, we observed that higher learning rates caused the training to diverge. When finetuning for personalization, given K frames and a computational budget T, we train the first T/2 iterations using a constant learning rate of 0.0002, and the remaining iterations using a linear decay to 0, following (Wang et al., 2018b). The batch size is fixed to 8, and for K < 8, we repeat the frames to get 8 images for the batch. For the meta-learning, we set the meta learning rate ε = 1 with a linear decay to 0, and train for 300 meta-iterations. We also experimented with a meta learning rate ε = 0.1; however, it was much slower to converge. To potentially stabilize meta-training, we experiment with differing numbers of updates to the generator and discriminator during iterations of Alg. 1, as well as simplified objective functions. Recall that the GAN loss adds significant complexity due to the presence of a discriminator that must also be adversarially finetuned. In total, our meta-learning takes 1 day of training time on 4 GPUs. For the Pose Transfer base model, we apply MetaPix in a similar fashion on top of Posewarp2, using the author-provided pretrained weights. We will release the MetaPix source code for details.
4 EXPERIMENTS
We now experimentally evaluate MetaPix. We start by describing the datasets used and evaluation metrics. We then describe our base Pose2Im and Pose Transfer setup, followed by training that model using MetaPix. Finally, we analyze and ablate the various design choices in MetaPix.
4.1 DATASETS AND EVALUATION
We train and evaluate our approach on in-the-wild internet videos. Due to the lack of a standard benchmark for such retargeting tasks, we use the dataset described in (Zhou et al., 2019) as our test set. This is a set of 8 videos downloaded from YouTube, each 4-12 minutes long. We refer the reader to Figure 1 in (Zhou et al., 2019) for sample frames from this dataset. Additionally, we collect a set of 10 more dance videos from YouTube (distinct from the above 8) as our pre-training and meta-learning corpus. We provide the list of YouTube video IDs for both in the supplementary. Our models are only trained on these videos, and videos from (Zhou et al., 2019) are only used for personalization (using K frames) and evaluation. Figure 3 shows sample frames from these newly collected videos.
Evaluation and Metrics: Similar to (Zhou et al., 2019), we split each of the 8 test videos into a training and test sequence in a 0.85:0.15 ratio, and sample K training and 2000 test frames from the test sequence. We use the same metrics as in (Zhou et al., 2019) for ease of comparison: Mean Squared Error (MSE), Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR). Each of these is averaged over the 2000 test frames from each of the 8 test videos. For pose retargeting, our baselines and our method aim to minimize MSE and maximize SSIM and PSNR.
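For reference, the per-frame metrics can be computed as in the sketch below, which assumes 8-bit RGB frames and a recent scikit-image version (the channel_axis argument; older releases use multichannel=True). It is a minimal illustration, not the evaluation code used in the paper.

```python
# Sketch: per-frame MSE, PSNR and SSIM between generated and ground-truth frames,
# averaged over a test set. Assumes uint8 RGB frames of identical size.
import numpy as np
from skimage.metrics import structural_similarity

def frame_metrics(pred, target):
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    mse = np.mean((pred - target) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf
    ssim = structural_similarity(pred, target, data_range=255.0, channel_axis=2)
    return mse, psnr, ssim

# Toy usage on random "frames" standing in for generated and ground-truth images.
frames_pred = np.random.randint(0, 256, (4, 512, 512, 3), dtype=np.uint8)
frames_gt = np.random.randint(0, 256, (4, 512, 512, 3), dtype=np.uint8)
scores = np.array([frame_metrics(p, t) for p, t in zip(frames_pred, frames_gt)])
print("mean MSE / PSNR / SSIM:", scores.mean(axis=0))
```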
4.2 EVALUATING METAPIX
We start by building our baseline retargeting model, based on Pix2PixHD (Wang et al., 2018b; Chan et al., 2018). To get a sense of the upper-bound performance of our model, we train the model for each test video with no constraints on T or K, starting from the model pre-trained on our train set. Specifically, we use all the frames from the first 85% of each video, and train it for 10 epochs. We report the performance of this model in the first section of Table 1 and show sample generations in the second column of Figure 4. Since this model obtains strong quantitative and qualitative performance, we retain it as our base retargeting architecture through the rest of the experiments. We also employ a baseline retargeting model based on Posewarp for evaluation, but we focus on Pose2Im for further experimentation due to its relative simplicity.
Now we evaluate the performance of our model in constrained settings, where we want to learn to personalize given a few samples and a constrained computational budget. Hence, we take a model pretrained on the train set and a randomly initialized model, and we personalize them by finetuning on each test video. As Table 1 shows, applying constraints leads to a drop in performance for all methods, as expected when using only 5 frames finetuned over 20 iterations. Finally, we compare these to the MetaPix model: in that case, we start from the pre-trained model, and do meta-learning on top of those parameters to optimize them for the transfer task as described in Section 3. That leads to a significant improvement over the pretrained model, showing the strength of MetaPix for this task.
In Figure 4, we visualize the predictions using the unconstrained model, as well as the constrained models trained with MetaPix and without, i.e. with simple pretraining. It is interesting to note that the meta-learned model is able to adapt to the color of the clothing and the background much better than a pretrained model, given the same frames for personalization. This reinforces that MetaPix is a much better initialization for few-shot personalization than directly finetuning from a generic pretrained model. We further explore this quality of coherence in the next section.
4.3 ABLATIONS
We now ablate the key design choices in our MetaPix formulation. One of the strengths of our formulation is the explicit control over the supervision provided and the computation the model is allowed to perform; depending on the use-case, those parameters can easily be tweaked. We next explore the effect of MetaPix on those parameters, using the Pose2Im base retargeting architecture.
Variation in K: We vary the amount of supervision for personalization, K, and evaluate its effect on the metrics in Figure 5. We compare the following models: a) Randomly initialized, b) Pretrained
2 https://github.com/balakg/posewarp-cvpr2018
3 Video visualization at https://youtu.be/NlUmsd9aU-4
on the train set, c) Trained using MetaPix for each value of K and tested with the same K, and d) Trained using MetaPix for K = 5 and tested at each value of K. The last one tests the generalizability of MetaPix to different values of K at train and test time. We find that the MetaPix trained models consistently perform better than a simple pretrained model on all metrics. Notably, the model only trained for K = 5 is still able to obtain strong performance at different K values, showing the MetaPix trained model can generalize beyond the specific setup it is optimized for. The gap between the MetaPix trained model and the pretrained model tends to reduce with higher K, which is as expected: more data for personalization would likely reduce the importance of the initialization. However, there is a clear and significant gap for lower values of K, showing that MetaPix is highly effective for retargeting from few samples. In fact, we find that meta-learning is most effective for K = 1, corresponding to the challenging scenario of video-to-image retargeting.
Variation in T: Similar to the variation in supervision, we experiment with varying the computation, or T, in Figure 6. We experiment with a similar set of baselines as in the case of K, and again observe that the MetaPix model consistently outperforms random initialization or pretraining on all metrics. Also, we see similar generalizability, as the model meta-trained for T = 20 is able to perform well for other T values at test time too. The ability of MetaPix to generalize across K and T implies cost-effective strategies for training. The computational cost for training a meta-learner is dominated by fine-tuning, which scales linearly with K and T. Training with smaller values of both can result in significant speedups – up to 10× in our experiments.
Variation of meta learning rate ε: We also experimented with changing the meta learning rate. At ε = 0.1 (K = 5, T = 200), we obtained SSIM = 0.47, similar to what the pretrained model gets. Using our default ε = 1.0 improves performance to 0.51. Hence, a higher meta learning rate was imperative to see improvements with MetaPix.
Only training the generator: We apply Reptile in a GAN setting, where we jointly meta-optimize two networks. We also experimented with freezing one of the networks, specifically the discriminator, to the weights learned during pretraining. For our K = 5, T = 200, ε = 1.0 setup, we obtain similar performance to optimizing both, suggesting that a ‘universal’ discriminator might suffice for meta-learning on GANs.
Visualizing the dynamics of personalization: In order to examine the process of personalization, we visualize models obtained during iterations of finetuning, at 10, 20, 40, 80 and 200 iterations, for 5 random test pose-image pairs. We compare both the pretrained and meta-learned models, trained for K = 5, T = 200. Figure 7 shows images generated by these intermediate iterations. Both methods learn clothing details and background colors after 20 iterations. Interestingly, MetaPix produces images that are temporally coherent, even upon initialization, while the pretrained baseline produces images whose background and clothing vary with pose. This more coherent initialization appears to translate to more coherent generated images after personalization.
5 CONCLUSION
We have explored the task of quickly and efficiently retargeting human actions from one video to another, given a limited number of samples from the target domain. We formalize this as a few-shot personalization problem, where we first learn a generic generative model on large amounts of data, and then specialize it to a small amount of target frames via finetuning. We further propose a novel meta-learning based approach, MetaPix, to learn this generic model in a way that is more amenable to personalization via fine-tuning. To do so, we repurpose a first-order meta-learning algorithm, Reptile, to adversarially meta-optimize both the generator and discriminator of a generative adversarial network. We experiment with it on in-the-wild YouTube videos, and find that MetaPix outperforms widely-used approaches for pretraining, while generating temporally coherent videos.
Acknowledgements: This research is based upon work supported in part by NSF Grant 1618903, the Intel Science and Technology Center for Visual Cloud Systems (ISTC-VCS), and Google.
| 1. What is the main contribution of the paper in the field of video frame generation?
2. How does the proposed approach utilize meta-learning, particularly Reptile, in video frame generation?
3. What are the strengths of the paper regarding its clarity and demonstration of the proposed method's effectiveness?
4. Do you have any concerns about the originality of the work, considering it's an application of existing methods?
5. How does the reviewer assess the significance of the problem setting addressed by the paper? | Review | Review
This submission proposes an application of meta-learning to video frame generation modeling conditioned on human pose information, in order to allow the model to adapt to the context of each video. This context is provided in the form of a support set of K pairs of pose/frame images for the video. Reptile is used as the meta-learning method, and applied to two recently proposed video-frame generative networks (Pix2PixHD and Posewarp). In both cases, results show that Reptile is able to produce better adaptive models, i.e. models that when fine-tuned on the support set produce better image frames.
Though the originality of the work is somewhat weak (it's a relatively straightforward application of Reptile to Pix2PixHD and Posewarp), the problem setting is novel and I find the demonstration that Reptile works well in this setting interesting and valuable. The paper is also clearly written and easy to follow. For these reasons, I'm personally leaning towards recommending to accept this submission. |
ICLR | Title
CaptainGAN: Navigate Through Embedding Space For Better Text Generation
Abstract
Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a "re-centered" gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.
1 Introduction
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have led to many advances in image generation, image editing, style transfer, and representation learning (Karras et al., 2017; Brock et al., 2018; Karras et al., 2019). Unsurprisingly, much effort has been devoted to adopting the GAN framework for unsupervised text generation (Yu et al., 2017; Che et al., 2017; Balagopalan et al., 2018; Fedus et al., 2018; Guo et al., 2018; de Masson d’Autume et al., 2019; Zhang et al., 2017; Nie et al., 2019). However, as finding a Nash equilibrium is not as straightforward as finding a local optimum, researchers have been forced to develop many ad-hoc tricks and techniques to make GAN training well-behaved. In the text generation setting, they are also faced with the additional obstacle of passing discrete tokens through a non-differentiable operation, which prohibits back-propagating the gradient signal to the generator. To address the issue of non-differentiability, researchers and practitioners use score-function gradient estimators such as REINFORCE to train GANs for text generation, where the discriminator is cast as a reward function for the generator. However, these methods still suffer from poor sample efficiency due to the credit assignment problem. We argue that it is disadvantageous to utilize the discriminator as simply a reward function when it is known that gradient-based backpropagation is a far more efficient way to perform credit assignment. In this paper, we propose a novel unsupervised text generation technique, called CaptainGAN, which propagates a modified gradient signal from the discriminator to the generator in order to improve the efficiency and accuracy of the estimator. Our contributions are as follows:
• An update procedure for the generator to incorporate gradient information from the discriminator during generator training.
• Lower memory and computational requirements than other RL-based counterparts.
• Near SOTA results without maximum likelihood pretraining.
Please see Appendix A for a detailed description of the notation used in this paper.
2 Background
The Generative Adversarial Network (GAN) proposed in Goodfellow et al. (2014) is an innovative approach to the generative modeling problem. Rather than using the maximum likelihood estimation (MLE) directly to learn a probabilistic model, the GAN is a two-player minimax game in which the goal of one player, the generator Gθ, is to generate samples x̂ from pθ, and the goal of the other player, the discriminator Dϕ, is to learn to classify whether or not a sample was generated from real data pdata or the generator.
VD = Ex∼pdata[log Dϕ(x)] + Ex̂∼pθ[log(1 − Dϕ(x̂))]    (1)
VG = Ex̂∼pθ[log Dϕ(x̂)]    (2)
where VD and VG are respectively the objective functions of the discriminator Dϕ and the generator Gθ. Equation 2 is the alternative generator loss suggested by the original work, as its gradient does not vanish when Dϕ(x̂) is small. In the standard GAN architecture, the generator’s output is directly connected as the input to the discriminator in a fully differentiable manner, which means the gradients from the discriminator’s loss function can be back-propagated to the parameters of the generator. However, text generation requires sampling a sequence of tokens from a discrete distribution, which is essentially non-differentiable. In order to avoid the intractability of the gradients, text GANs resort to some form of estimation.
2.1 Continuous Relaxation
Continuous relaxation approaches such as Gumbel-Softmax (Jang et al., 2016) approximate a stochastic categorical distribution in terms of a deterministic continuous function. While this apparently allows us to remove the non-differentiable discrete sampling altogether (Kusner & Hernández-Lobato, 2016; Nie et al., 2019), it creates several serious issues. The continuous distribution generates the expectation of embeddings, a weighted sum with no direct correspondence to an exact word or token. This means all the discriminator has to do is spot the difference between actual word embeddings and this expectation. In turn, the generator will try to compensate by producing extremely "spiky" predictions. Furthermore, this way of generating data creates a major inconsistency: during inference, the generator has to sample a discrete sequence from its distribution, whereas during training, it is only trained to generate an expectation that is feasible to the discriminator.
2.2 Score-Function Gradient Estimator
The score-function gradient estimator (Fu, 2006; Glynn, 1990), also known as the REINFORCE (Williams, 1992) is a common solution for non-differentiable issue as mentioned above. Applying the REINFORCE algorithm, the gradient of the expectation of reward function fϕ can be written as
∂/∂θ Ex̂∼pθ[fϕ(x̂)] = Ex̂∼pθ[fϕ(x̂) ∂/∂θ log pθ(x̂)].    (3)
Since it does not require fϕ to be differentiable or even continuous as a function of x, the gradient of Ex̂∼pθ [fϕ(x̂)] can be back-propagated to the generator Gθ. In the context of GANs, Ex̂∼pθ [fϕ(x̂)] can be seen as the objective function of the generator and the reward function fϕ can be replaced with Dϕ. Although REINFORCE is an unbiased estimator, it still has a number of disadvantages such as high variance, low sample efficiency and the credit assignment problem. Therefore, much effort is devoted to reducing the variance using special methods (Gu et al., 2016; Grathwohl et al., 2017), or to providing more dense rewards such as in Yu et al. (2017); Che et al. (2017); Fedus et al. (2018); de Masson d’Autume et al. (2019), where Monte-Carlo roll-outs are used to obtain per-word rewards.
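The sketch below illustrates the score-function estimator of Equation 3 for a toy autoregressive generator: a sequence is sampled, a scalar reward stands in for the discriminator score, and the reward weights the log-probability gradient. The GRU generator, vocabulary size, and random reward are illustrative placeholders, not the architectures used in the cited works.

```python
# Sketch of the score-function (REINFORCE) estimator: sample a sequence, treat a
# scalar score as the reward, and weight the log-probability gradient by it.
import torch
import torch.nn as nn

vocab, emb, hid, T = 50, 16, 32, 10
embed = nn.Embedding(vocab, emb)
rnn = nn.GRUCell(emb, hid)
out = nn.Linear(hid, vocab)

def sample_sequence(batch=4):
    h = torch.zeros(batch, hid)
    tok = torch.zeros(batch, dtype=torch.long)          # start-of-sentence token id 0
    log_probs, tokens = [], []
    for _ in range(T):
        h = rnn(embed(tok), h)
        dist = torch.distributions.Categorical(logits=out(h))
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens.append(tok)
    return torch.stack(tokens, 1), torch.stack(log_probs, 1)

def reward_fn(tokens):
    # Placeholder for log D_phi(x_hat); any black-box scalar score works here.
    return torch.randn(tokens.size(0))

tokens, log_probs = sample_sequence()
reward = reward_fn(tokens)
loss = -(reward.detach().unsqueeze(1) * log_probs).sum(dim=1).mean()
loss.backward()    # score-function gradient w.r.t. the generator parameters
```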
2.3 Straight-Through Gradient Estimator
Another approach is using a straight-through gradient estimator (Bengio et al., 2013; Jang et al., 2016). The basic idea is to perform non-differentiable operation during the forward pass, but approximate it with a differentiable proxy during the backward pass. Consider sampling from a categorical distribution (notation borrowed from Jang et al. (2016)): The following operation is performed during the forward pass
z = one-hot(x̂) (4)
where a categorical sample x̂ is encoded as a v-dimensional one-hot vector z (v being the given vocabulary size). During the backward pass, the corresponding straight-through estimator of function f with respect to θ is
ĝθ = (∂f/∂z)⊤ ∂mθ/∂θ    (5)
derived by using the approximation ∂z/∂θ ≈ ∂mθ/∂θ, where mθ is a differentiable proxy for the non-differentiable one-hot encoding vector z (Jang et al., 2016). Moreover, Jang et al. (2016) suggest the choice of
pθ = [pθ(x1), . . . , pθ(xv)]⊤    (6)
as the proxy, where {x1, . . . , xv} are all possible tokens in the vocabulary V. Compared to the former method, this approach updates the probabilities of all possible tokens, even those that were not sampled.
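The straight-through trick of Equations 4-6 amounts to forwarding the one-hot sample while letting gradients flow through the softmax probabilities. A minimal sketch is shown below; the random embedding matrix stands in for a discriminator's first layer, and the scalar "score" is a placeholder for the objective V_G.

```python
# Sketch of the straight-through estimator with the softmax probabilities p_theta
# as the backward proxy for the one-hot sample z (Equations 4-6).
import torch
import torch.nn.functional as F

vocab, emb_dim = 50, 16
logits = torch.randn(2, vocab, requires_grad=True)       # generator outputs at one timestep
E = torch.randn(vocab, emb_dim, requires_grad=True)      # discriminator embedding matrix

probs = F.softmax(logits, dim=-1)                        # differentiable proxy m_theta = p_theta
sample = torch.distributions.Categorical(probs=probs).sample()
z_hard = F.one_hot(sample, vocab).float()                # forward: exact one-hot sample
z = z_hard + probs - probs.detach()                      # backward: gradient flows through probs

e_hat = z @ E                                            # embedding of the sampled token
score = e_hat.sum()                                      # placeholder for the discriminator objective V_G
score.backward()                                         # logits.grad now holds the straight-through gradient
print(logits.grad.shape)
```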
3 Proposed Method
The non-differentiable connection between the generator Gθ and the discriminator Dϕ is our primary focus in this paper (Figure 1). Starting from the straight-through gradient estimator, we derive a re-centered estimator which helps the generator learn to select better tokens.
3.1 Gradient Estimation in the Generator
Consider a discriminator whose first layer is an embedding lookup layer. During forward propagation, the operation below is performed first:
ex̂ = E.lookup(x̂) = Ez (7)
where E = [e1, . . . , ev] and z is the one-hot encoding of the sampled token x̂ over the vocabulary. As described in Section 2.3, this is a non-differentiable categorical sampling operation with a straight-through estimator
of the gradient of VG with respect to θ, formulated as follows:
ĝθ = Σ_{t=1}^{T} (∂VG/∂zt)⊤ ∂p_{t,θ}/∂θ = Σ_{t=1}^{T} (E⊤ ∂VG/∂e(ex̂t))⊤ ∂p_{t,θ}/∂θ    (8)
   = Σ_{t=1}^{T} Σ_{x∈V} ⟨ex, ∂VG/∂e(ex̂t)⟩ ∂p(x|x̂1:t−1)/∂θ    (9)
where x̂t is the selected token at time-step t, T is the maximum time-step and V is the given vocabulary. The first term of Equation 8 can be derived from Equation 7 by applying the chain rule.
3.2 Re-centered Estimator
We turn our attention to the first term of Equation 9:
⟨ex, ∂VG/∂e(ex̂t)⟩    (10)
Denoting it as δx̂t→x[VG], Equation 9 becomes
ĝθ = Σ_{t=1}^{T} Σ_{x∈V} δ_{x̂t→x}[VG] ∂pθ(x|x̂1:t−1)/∂θ.    (11)
The coefficient δ_{x̂t→x}[VG] can be considered as an affinity factor indicating how much the discriminator is ‘attracted’ to the token x. Recall that the gradient ∂VG/∂e(ex̂) has the direction of greatest increase of the objective VG at ex̂. Since this direction is relative to ex̂, we suggest taking the inner product with the vector ex − ex̂, which has been re-centered relative to ex̂ (Figure 2).
⟨ex − ex̂t, ∂VG/∂e(ex̂t)⟩    (12)
This inner product approximates the increase of VG when x̂t is replaced by x. Because this approximation only holds in the neighborhood of ex̂t , |ex − ex̂t | ≪ 1, we also add a kernel to dampen the affinity factor for ex that are far away. That is,
δ^RC_{x̂t→x}[VG] = K(ex, ex̂t) ⟨ex − ex̂t, ∂VG/∂e(ex̂t)⟩    (13)
where
K(u, v) = 1 / √(1 + |u − v|²).    (14)
See Appendix B for more details. As a result, Equation 9 becomes
ĝ^RC_θ = Σ_{t=1}^{T} Σ_{x∈V} δ^RC_{x̂t→x}[VG] ∂p(x|x̂1:t−1)/∂θ.    (15)
In score-function based approaches, the generator is only given feedback on the tokens it samples, so it requires a large amount of trial-and-error in order to figure out how to write realistic sentences. To use an analogy, imagine a captain at sea (embedding space) searching for treasure. Each iteration of the game, the captain chooses random locations in the space and receives feedback regarding the choices. Score-function based approaches tell us how good the location is but say nothing about how to improve it. In contrast, our method, gives the captain a navigational direction to the next best location. Even if the direction is biased, it should allow the captain to find the treasures with fewer iterations.
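A minimal single-timestep sketch of the re-centered estimator (Equations 13-15) is given below. The embedding table, the gradient of V_G at the sampled embedding, and the generator logits are random placeholders; the point is only to show how the kernel-damped affinity factors are formed and used as fixed weights on the generator probabilities.

```python
# Sketch of the re-centered estimator (Eqs. 13-15) at one timestep: compute
# kernel-damped affinity factors delta^RC for every vocabulary token and use
# them as fixed weights on the generator probabilities in a surrogate objective.
import torch
import torch.nn.functional as F

vocab, emb_dim = 50, 16
E = torch.randn(vocab, emb_dim)              # discriminator embedding table
logits = torch.randn(1, vocab, requires_grad=True)
probs = F.softmax(logits, dim=-1)            # p_theta(x | x_hat_{1:t-1})

x_hat = torch.distributions.Categorical(probs=probs).sample()   # selected token
e_hat = E[x_hat].squeeze(0)                  # e_{x_hat_t}
grad_vg = torch.randn(emb_dim)               # placeholder for dV_G/de evaluated at e_hat

diff = E - e_hat                             # e_x - e_{x_hat_t}, one row per vocabulary token
kernel = 1.0 / torch.sqrt(1.0 + (diff ** 2).sum(dim=1))          # Eq. 14
delta_rc = kernel * (diff @ grad_vg)         # Eq. 13, shape (vocab,)

# Surrogate objective whose gradient w.r.t. the logits matches Eq. 15 (one timestep).
surrogate = (delta_rc.detach() * probs.squeeze(0)).sum()
(-surrogate).backward()                      # ascend V_G, so minimize the negative
print(logits.grad.shape)
```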
4 Evaluation Metrics
We evaluate our model’s ability to generate realistic and diverse texts with BLEU-based metrics, the Fréchet Embedding Distance, and language-model-based metrics.
4.1 BLEU and Self-BLEU
BLEU (Papineni et al., 2002) and Self-BLEU (Zhu et al., 2018) capture text quality and diversity respectively. BLEU, a modified form of n-gram precision, measures local consistency between a set of reference and a set of candidate texts. Self-BLEU is a version of BLEU in which both the reference and candidate texts are drawn from the same corpus. A high Self-BLEU means that members within the corpus are too highly correlated, indicating a lack of diversity.
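The sketch below shows one way to compute BLEU-5 and Self-BLEU-5 with NLTK, using the epsilon smoothing described in Appendix D.4; Self-BLEU is computed leave-one-out here for brevity, whereas the paper splits the samples into reference and target halves. The toy sentences are placeholders.

```python
# Sketch: corpus BLEU-5 against reference texts and Self-BLEU-5 within generated
# texts, using NLTK with epsilon smoothing (Appendix D.4).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction(epsilon=0.1).method1
weights = (0.2,) * 5                                   # BLEU-5

def bleu(references, candidates):
    refs = [r.split() for r in references]
    return sum(sentence_bleu(refs, c.split(), weights=weights,
                             smoothing_function=smooth) for c in candidates) / len(candidates)

def self_bleu(samples):
    # Each sample is scored against all the other samples as references.
    toks = [s.split() for s in samples]
    scores = [sentence_bleu(toks[:i] + toks[i + 1:], toks[i], weights=weights,
                            smoothing_function=smooth) for i in range(len(toks))]
    return sum(scores) / len(scores)

generated = ["a man riding a bike on a city street .",
             "a cat sits on the seat of a bicycle ."]
references = ["a person standing in a boat on the water .",
              "people are riding bikes on a city street ."]
print("BLEU-5:", bleu(references, generated))
print("Self-BLEU-5:", self_bleu(generated + generated))
```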
4.2 Fréchet Embedding Distance
Fréchet Embedding Distance (FED) (de Masson d’Autume et al., 2019) measures semantic similarity, global consistency and diversity between reference and candidate texts. Semeniuta et al. (2018) claim that the Fréchet Distance (FD) is not sensitive to the choice of embedding model. However, we notice significant discrepancies between FD scores calculated using different embedding models. The details are discussed in Appendix G.
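The Fréchet distance underlying FED can be computed from the means and covariances of two embedding sets, as sketched below; the random arrays stand in for sentence embeddings produced by an encoder such as the Universal Sentence Encoder discussed in Appendix G.

```python
# Sketch: Frechet distance between two sets of sentence embeddings
# (mean/covariance Gaussian approximation), as used for the FED score.
import numpy as np
from scipy import linalg

def frechet_distance(emb_real, emb_fake):
    mu1, mu2 = emb_real.mean(axis=0), emb_fake.mean(axis=0)
    sigma1 = np.cov(emb_real, rowvar=False)
    sigma2 = np.cov(emb_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):          # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Placeholder embeddings; in practice these come from a sentence encoder applied
# to the reference and generated corpora.
real = np.random.randn(1000, 512)
fake = np.random.randn(1000, 512) + 0.1
print("FED (sketch):", frechet_distance(real, fake))
```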
4.3 Language Model Scoring Methods
Following Caccia et al. (2018); Zhao et al. (2017), we also evaluate the quality and the diversity of generated samples by Language Model Perplexity (LM) and Reverse Language Model Perplexity (RLM). Language Model Score (LM). We use a language model trained on real data to estimate the negative log likelihood per word of generated sentences. Samples which the language model considers improper, such as ungrammatical or nonsense sentences, have high perplexity. Reverse Language Model Score (RLM). If we instead train a language model on generated data while evaluating on real data, we can judge the diversity of the generated samples. Low-diversity generated data leads to an overfitted model which generalizes poorly on real data, as indicated by a high perplexity.
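Both scores reduce to evaluating the per-word negative log-likelihood of one corpus under a model trained on the other. A generic sketch is given below; the language-model interface (returning next-token logits) and the toy model are hypothetical placeholders, not the architecture used in the paper.

```python
# Sketch: per-word perplexity of a corpus under an autoregressive language model.
# LM score: model trained on real data, evaluated on generated text.
# Reverse LM score: model trained on generated data, evaluated on real text.
import math
import torch
import torch.nn.functional as F

def perplexity(model, batches):
    """model(input_ids) -> logits of shape (batch, seq_len, vocab); hypothetical interface."""
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for input_ids in batches:                       # (batch, seq_len) token ids
            logits = model(input_ids[:, :-1])
            targets = input_ids[:, 1:]
            nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                  targets.reshape(-1), reduction="sum")
            total_nll += nll.item()
            total_tokens += targets.numel()
    return math.exp(total_nll / total_tokens)

# Toy usage with a random stand-in "model" over a 100-token vocabulary.
class ToyLM(torch.nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.head = torch.nn.Linear(dim, vocab)
    def forward(self, ids):
        return self.head(self.emb(ids))

batches = [torch.randint(0, 100, (8, 20)) for _ in range(3)]
print("perplexity:", perplexity(ToyLM(), batches))
```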
5 Results & Discussion
In this section, we use two datasets to evaluate CaptainGAN: COCO Image Captions1 and EMNLP 2017 News2. We discuss our results from different perspectives via the evaluation metrics presented in the last section. Furthermore, an ablation study is performed in order to determine the contribution of each technique. More details about our experiment setup, datasets, architecture and training techniques such as temperature control are described in Appendix D.
5.1 Quality and Diversity
On metrics of local text consistency and diversity, CaptainGAN significantly outperforms prior GAN-based models which rely heavily on pre-training, with the exception of RelGAN. The BLEU/Self-BLEU temperature sweeps shown in Figure 3a for CaptainGAN indicate that CaptainGAN approaches the performance of a language model. On the other hand, in Figure 3b, the language model scores show similar results for CaptainGAN and MLE model. Results for COCO are shown in Appendix E.
5.2 Temperature Sensitivity
CaptainGAN is robust to temperature variations. This is demonstrated in Figure 4a, which depicts how the softmax temperature influences the FED score. Regardless of model, changes in softmax temperature are certain to affect the FED; however, CaptainGAN shows the least variation.
5.3 Global Consistency
CaptainGAN improves global consistency compared to prior work and to the language model, as measured by LM perplexity and FED score, respectively. As shown in Table 1, our method significantly reduces the perplexity gap between GAN-based methods and the language model, which is directly trained to minimize perplexity. In addition, CaptainGAN outperforms the MLE-trained model on the FED score, as shown in Table 1.
1The preprocessed COCO dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/coco
2The preprocessed EMNLP dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/news
[Table 1: Train FED, Val. FED and LM Score per model.]
5.4 Matching N-grams of Training Data
Text generating models are often criticized for simply memorizing texts seen during training instead of learning how to model the true data distribution. We show the longest matching n-grams between text from the training data and generated samples in Figure 4b. CaptainGAN generates more samples with short matching n-grams (n < 5) compared to the MLE model, implying that our model can generate novel sentences without resorting to memorizing phrases from the training data.
5.5 CaptainGan As A Language Model
CaptainGAN’s strong performance on the evaluation metrics described earlier shows that it is capable of generating realistic text. Since the generator in our case is an RNN which can produce conditional probabilities at each timestep, this raises the question of whether or not an explicit language model is being learned. We feed real data to our generator and calculate the perplexity to measure the generator’s suitability as a language model. Surprisingly, as shown in Table 2, the perplexity of the generator on both training and validation data is unusually high. A similar phenomenon was noted in de Masson d’Autume et al. (2019), though not as extreme. We explain this phenomenon as follows: MLE models are trained specifically for low perplexity, since they minimize KL divergence. However, CaptainGAN minimizes the alternative loss (Equation 2), which is similar to minimizing the reverse KL divergence (Arjovsky & Bottou, 2017). This objective assigns an extremely low cost to mode dropping and does not force the generator to mimic all aspects of real data. This can result in poor modeling of the likelihood, but does not necessarily lead to poor sample generation (Theis et al., 2016; Hashimoto et al., 2019). Following 5.4, we conclude that our model’s ability to generate realistic text cannot be the result of simply plagiarizing training samples.
[Table 2: Train and validation perplexity per model.]
5.6 Comparison to score function-based methods
Our method is able to outperform score function-based approaches using small batch sizes. Score function-based approaches often rely on large batch sizes for variance reduction (de Masson d’Autume et al., 2019), but this does not appear to be necessary for our recentered estimator. This confirms the reasoning at the end of Section 3.2, and greatly reduces our memory requirements during training. In addition, since gradients are naturally calculated for every timestep, there is no need for Monte-Carlo estimation of rewards like in score function-based approaches. Because our generator receives a richer learning signal, we use a 5:1 ratio for discriminator-generator updating. This greatly lessens our computational requirements.
5.7 Ablation Study
To understand the impact of each component of CaptainGAN, we conduct an ablation study. In Table 3, we show the influence of adding each feature, namely spectral normalization, the re-centered estimator, pretrained embeddings, and discriminator regularization (decoupled weight decay (Loshchilov & Hutter, 2018) and one-sided label smoothing (Salimans et al., 2016)), by scoring FED on validation data. We observe that:
• As shown in Zhou et al. (2019), an unrestricted objective leads to an uninformative gradient. Without spectral normalization, the Lipschitz constant of the discriminator is unbounded making it difficult to update the generator with a gradient-based approach.
• Rather unexpectedly, pretrained embeddings do not lead to any significant improvement, which is surprising given our method’s dependence on the discriminator’s embedding space. The embeddings which the discriminator learns from scratch are capable of giving the generator an effective learning signal.
• The application of the re-centered estimator is responsible for a 20% improvement in FED, confirming its effectiveness.
• Equation 14 is dependent on the norm of the embeddings, which explains the effectiveness of discriminator weight regularization at improving FED.
6 Conclusion and Future Work
In this work we have presented CaptainGAN, an effective gradient-based method for quickly training a text generating GAN without pretraining. Starting from the straight-through estimator, we derive a re-centered gradient estimator that improves the quality/diversity of generated texts as measured by various standard metrics. In future work, we plan to investigate our estimator in more theoretical detail and further decrease its bias.
Appendix A Notation
Symbol | Meaning
---|---
V = {x1, . . . , xv} | a predefined vocabulary
x | a token belonging to V
x̂ | a token sampled from V
x (boldface) | a sequence of tokens belonging to V
x̂ (boldface) | a sequence of tokens sampled from V
⊤ | the transpose operation
Appendix B The Importance of Kernel
To verify the necessity of the kernel term (Equation 14), we directly apply Equation 12 to Equation 11 as follows:

Σ_{x∈V} ⟨ex − ex̂, ∂VG/∂e(ex̂)⟩ ∂pθ(x)/∂θ
  = Σ_{x∈V} ⟨ex, ∂VG/∂e(ex̂)⟩ ∂pθ(x)/∂θ − Σ_{x∈V} ⟨ex̂, ∂VG/∂e(ex̂)⟩ ∂pθ(x)/∂θ    (16)

where x̂ is the selected token and x is a token in the vocabulary V.

Let f(x) = ⟨ex, ∂VG/∂e(ex̂)⟩ and b = ⟨ex̂, ∂VG/∂e(ex̂)⟩. b does not depend on x and is constant in the summation.

Σ_{x∈V} (f(x) − b) ∂pθ(x)/∂θ
  = ∂/∂θ [Σ_{x∈V} (f(x) − b) pθ(x)]
  = ∂/∂θ [(Σ_{x∈V} f(x) pθ(x)) − b (Σ_{x∈V} pθ(x))]
  = ∂/∂θ [(Σ_{x∈V} f(x) pθ(x)) − b · 1]
  = ∂/∂θ Σ_{x∈V} f(x) pθ(x)    (17)

Without the kernel, the update to θ would be the same as in prior work and thus carries the same drawback.
Appendix C Model Structure
We use pre-trained word embeddings of 300 dimensions to initialize all embedding weights. The embeddings are pre-trained using the fastText library (Bojanowski et al., 2016) on the corresponding training data of each task.
C.1 Generator
The purpose of the first dense layer is to project the GRU cell outputs into the embedding space. The second dense layer converts the projected embedding vectors to logits over the tokens and is initialized as the transpose of the embedding weights. At the first timestep, the input to the embedding layer is the start-of-sentence token. The vocabulary also contains an end-of-sentence token. If the generator outputs this token at any timestep, the sentence ends and its length is set to this timestep.
To perform consistent convolution on sentences of different lengths, we use a special masking mechanism on all layers. For all convolution and pooling layers, we mask out input features where the receptive field is out-of-bounds with respect to the unpadded input sentence.
Appendix D Experimental details
D.1 Setup
D.1.1 Datasets
We use two datasets: the COCO Image Captions dataset (Chen et al., 2015) and the EMNLP 2017 News dataset3. Results are reported on EMNLP 2017 News if not specified. For each dataset, 10k sentences are set aside as validation data. COCO Image Captions. Sentences are limited in length to 24 tokens. The vocabulary size is 4.6k tokens. Training data consists of 10k sentences4. EMNLP 2017 News. Sentences in this dataset are much longer and more complicated, with a maximum length of 50 tokens and a vocabulary of 5.7k words. Training data consists of 300k sentences.5
D.1.2 Architecture and Techniques
Our architecture is described in Appendix C. Spectral normalization of the discriminator’s weights (Miyato et al., 2018) is critical to our method; without it the gradients are too unstable and convergence becomes impossible (Section 5.7).
D.1.3 Temperature Control
Following Caccia et al. (2018), we adjust the softmax temperature parameter at sampling time to measure the trade-off between quality and diversity. Increasing the temperature lowers the differences between softmax probabilities, which leads to diverse but low-quality samples. On the other hand, reducing the temperature leads to high-quality yet low-diversity samples.
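Concretely, temperature control only changes the sampling step, as in the sketch below (illustrative, with random logits standing in for the generator's output distribution).

```python
# Sketch: temperature-controlled sampling at inference time. A temperature above 1
# flattens the distribution (more diversity, lower quality); below 1 sharpens it.
import torch
import torch.nn.functional as F

def sample_with_temperature(logits, temperature=1.0):
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.distributions.Categorical(probs=probs).sample()

logits = torch.randn(4, 5700)                 # one step of generator logits over the vocabulary
for temp in (0.5, 1.0, 2.0):
    print(temp, sample_with_temperature(logits, temp).tolist())
```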
D.2 training details
The discriminator and generator updates are performed with a 5:1 ratio. We use Adam (Kingma & Ba, 2014) with learning rate = 5.0 · 10−5, β1 = 0.5, and β2 = 0.999. Decoupled weight decay regularization (Loshchilov & Hutter, 2018) is applied to all variables in the discriminator, with λ = 0.05 · learning rate. All our models are trained using an NVIDIA GTX 1080 Ti. To evaluate samples on BLEU-5, LM versus RLM, and FED, we use 10k validation sentences as reference texts for both COCO and EMNLP 2017 News. For Self-BLEU, we randomly select half of the 10k samples as reference texts and leave the remainder as target texts. A smoothing function is used on BLEU-based metrics. See Appendix D.4 for more details. The maximum likelihood model architecture in this paper follows the setting of de Masson d’Autume et al. (2019). We could not reproduce the exact results reported in de Masson d’Autume et al. (2019), which we believe stems from discrepancies between our trained language model and theirs.
D.3 One-sided label smoothing
One-sided label smoothing has been shown to reduce the vulnerability of neural networks to adversarial examples and it is strongly recommended for GAN training Salimans et al. (2016). Therefore, the first term of Equation 1 is reformulated as
Ex∼pdata[α log Dϕ(x) + (1 − α) log(1 − Dϕ(x))]    (18)
where α = 0.9 in our experiments.
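A minimal sketch of this smoothed discriminator loss is shown below; the logits are random placeholders, and only the real-sample target is smoothed, matching Equation 18.

```python
# Sketch of the one-sided label-smoothed discriminator loss (Equation 18):
# real samples use a soft target alpha = 0.9, fake samples keep the hard target 0.
import torch
import torch.nn.functional as F

def discriminator_loss(pred_real_logits, pred_fake_logits, alpha=0.9):
    real_targets = torch.full_like(pred_real_logits, alpha)      # smoothed only on the real side
    fake_targets = torch.zeros_like(pred_fake_logits)
    loss_real = F.binary_cross_entropy_with_logits(pred_real_logits, real_targets)
    loss_fake = F.binary_cross_entropy_with_logits(pred_fake_logits, fake_targets)
    return loss_real + loss_fake

print(discriminator_loss(torch.randn(8, 1), torch.randn(8, 1)))
```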
3 http://www.statmt.org/wmt17/
4 The preprocessed COCO dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/coco
5 The preprocessed EMNLP dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/news
D.4 BLEU smoothing
One of the issues with BLEU is that if any higher-order n-gram precision of a sentence is 0, the BLEU score will be 0, resulting in severe underestimation. This is due to the fact that BLEU is calculated as the geometric mean of precisions. To solve this, we applied the following smoothing technique:
m′n = ϵ   if mn = 0    (19)
where mn is the original match count and m′n is the one used during BLEU calculation. We picked the smoothing factor ϵ = 0.1 as proposed by Chen & Cherry (2014).
Appendix E COCO Results
See Figure 6.
Appendix F Sentence Length Consistency
The sentence length distribution of samples generated by CaptainGAN closely follows that of the training data, as shown in Figure 7. This suggests that CaptainGAN is able to learn the distribution of sentence lengths through the gradient estimator alone, without additional mechanisms like positional encoding or dense rewards.
Appendix G Details on Fréchet Embedding Distance
Calculation of the FED score depends on the Universal Sentence Encoder (Cer et al., 2018). However, the version of the Universal Sentence Encoder6 used in de Masson d’Autume et al. (2019) is incompatible with Tensorflow (Abadi et al., 2015) version 1.7 or newer. Furthermore, it would be infeasible to limit the experimentation environment to such an old version of Tensorflow. Therefore, as a workaround, we decided to report our FED score, as shown in Table 6, with Universal Sentence Encoder Large7 instead, which is compatible with the current version of Tensorflow (1.14 as of Jul 2019). While there are many benefits to using the FED metric, it is not without drawbacks. A drawback that we have observed is that the FED score will be significantly underestimated
6 https://tfhub.dev/google/universal-sentence-encoder/2
7 https://tfhub.dev/google/universal-sentence-encoder-large/3
[Table 6: Train and validation FED per model.]
whenever we are using a small number of samples, as shown in Figure 8. Therefore, we’ve ensured that sufficient samples are used in our evaluation.
Appendix H Generated samples
Training samples of COCO and EMNLP 2017 News can be found in Table 7. Randomly picked samples from MLE model and CaptainGAN are available in Table 8 and Table 9.
COCO
- a person standing in a boat on the water . - three sheep are grazing on the city sidewalk . - kenya airways airplane wing , engine and cabin on the tarmac . - light poles in the snow with yellow traffic lights mounted on them . - we are looking at the floor between the toilet and the wall . - someone taking a photo of a small residential bathroom . - a cat sits on the seat of a bicycle and looks down at another cat on a snowy day . - large set of motorcycles all lined up down a street . EMNLP 2017 News - apple clearly doesn ’ t want to see the dollar value of its u . k . earnings fall by a similar amount . - we do look at the number of hours we produce , and measure that against the religious make - up of society . - the u . s . has struggled to find an effective ground force to take on isis in syria , where president obama has ruled out a u . s . ground combat role . - it is a happy and kind world that we live in on this show and that is where i hope we can live in real life . - while men and women may end up earning roughly the same amount in the same jobs , men are more likely to end up in higher - paying roles in the tech industry . - but while they were beaten by a better side , the tie did reveal what i think has been city ’ s biggest problem this season : they have lost the ability to score against good teams . - according to facebook ’ s policies , accounts can be suspended if law enforcement believe individuals are at risk of harm . - she has to take personal responsibility for this - when she was health secretary she was told by those who know best that her decision to cut student places would have a damaging impact .
Table 7: Training samples on EMNLP 2017 News and COCO dataset.
COCO
- the bathroom bowl with focus on the vanity . - a man racer in a car brushing a carved cake . - a lady riding a motorcycle and a street while sitting on a pedestal track . - a commercial man are standing in the bicycle next to the back of an orange computer . - an airplane the bus is headed as something in the sky . - people riding on a bike on a surfer surfing doing the ocean . - a plane propped up in front of a vehicle . - a man wearing a cat in front of a motorcycle . EMNLP 2017 News - a one - game event and a limited number this dropped by 38 per cent to find out what we could want , to be as much done as we finish as the games . - more than 30 per cent of calais camps at risk were taken between children and child refugees crossing the mediterranean to fall apart . - the uk government got on the deal when it came to other eu countries that allowed workplace checks for 26 . - so women i didn ’ t want my parents to work hard with me at the time of the task of making it presents . - black voters feel there may not be enough choice between former president george w . bush and i have to go along with problems and support the people that are in a swing state for president . - he sold out to its eighth list , but a stunning showing that what cost the ability to start the league failed ? - let ’ s catch him down his back phone one morning john f kennedy , where he is in a us meeting group with supporters of love not being successful in the fields . - then there ’ s a couple of chances that they won competition - - the result from what is of me going out there that i really will play for the foundation .
COCO
- a black motorcycle parked next to a river . - a bathroom with a mirror , a shower and a toilet . - two people sit on a building in front of a building . - a group of people walking around the corner of a building . - people are riding bikes on a city street . - there is a picture of a bathroom in a bathroom . - an empty city street with cars parked on the side of the road . - a man taking a picture of a white bathroom . EMNLP 2017 News - if i go to the party , it ’ s a disaster , and i ’ m going to keep it right . - i think i ’ ve had some great strength in my life and i want that to be here for me . - obama pointed to reporters claiming he was willing to discuss the situation , but the u . s . have been military concern by american troops , including a nuclear deal in march . - a 16 - year - old man was arrested after the death of a man shot and arrested him on suspicion on his behalf . - the problem is that they are hard to find out what they want to see where they are and what they can achieve their past two years . - now it ’ s important to me and i am thinking so hard to make it the best for a long time . - great britain is suggesting the u . s . - led coalition figures at the australian foreign investor have raised more than $ 250 , 000 in the construction of operations . - it is a simple way to believe that if it can happen , then i ’ m not the right choice .
Table 9: Randomly selected CaptainGAN samples on EMNLP 2017 News and COCO with temperature = 1.0.
1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths of the paper regarding its writing quality and evaluation thoroughness?
3. Are there any concerns or suggestions regarding the presentation of the baselines and comparisons?
4. How can the authors improve the reporting of their method's performance to enhance the readers' understanding of its stability and significance?
5. What are some potential issues or limitations with the provided explanations for the high perplexity scores, and how could they be addressed?
Review
The authors propose CaptainGAN, a method using the straight-through gradient estimator to improve training of the generator for text generation.
The paper is well-written and the evaluation seems thorough, with comparisons to relevant baselines.
Comments:
Figure 3: the caption refers to Caccia et al. for results on LeakGAN, MaliGAN and seqGAN, but unless I’ve missed it, RelGAN hasn’t yet been introduced by name as a baseline? The citation is given in the opening part of the introduction, in an enumeration, but isn’t revisited later in the text - not even here where the results of the model are introduced. Given that it seems, according to the presented results, to be the most competitive of the GAN models that the authors are comparing to, maybe it’s worth adding more contextual information on RelGAN to the Background section?
For their method, the authors should report an average performance over several random seeds and provide the standard deviation / confidence intervals, for the readers to be able to assess the stability of the method and the significance of the improvement reported in the results.
I find Section 5.5. particularly interesting, as well as the reported perplexity in Table 2. The authors provide 3 bullet points to explain the unusually high perplexity of the generator on the training and validation data. I feel that the explanations that are given are at the moment vague and not visibly backed by data, therefore being speculative. Obviously, point 1) is hard to quantify - but point 2) could possibly be at least partially quantified - if the hypothesis is that names, places, punctuation marks etc play an important role in the reported perplexity score, then maybe the authors could test this by correlating model perplexity on sentences with whether those sentences contain these types of words? |
ICLR
Title
CaptainGAN: Navigate Through Embedding Space For Better Text Generation
Abstract
Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a ”re-centered” gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.
1 Introduction
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have led to many advances in image generation, image editing, style transfer, and representation learning (Karras et al., 2017; Brock et al., 2018; Karras et al., 2019). Unsurprisingly, much effort has been devoted to adopting the GAN framework for unsupervised text generation (Yu et al., 2017; Che et al., 2017; Balagopalan et al., 2018; Fedus et al., 2018; Guo et al., 2018; de Masson d’Autume et al., 2019; Zhang et al., 2017; Nie et al., 2019). However, as finding a Nash equilibrium is not as straightforward as finding a local optimum, researchers have been forced to develop many ad-hoc tricks and techniques to make GAN training well-behaved. In the text generation setting, they are also faced with the additional obstacle of passing discrete tokens through a non-differentiable operation, which prohibits back-propagating the gradient signal to the generator. To address the issue of non-differentiability, researchers and practitioners use score function gradient estimators such as REINFORCE to train GANs for text generation, where the discriminator is cast as a reward function for the generator. However, these methods still suffer from poor sample efficiency due to the credit assignment problem. We argue that it is disadvantageous to utilize the discriminator as simply a reward function when it is known that gradient-based backpropagation is a far more efficient way to perform credit assignment. In this paper, we propose a novel unsupervised text generation technique, called CaptainGAN, which propagates a modified gradient signal from the discriminator to the generator in order to improve the efficiency and accuracy of the estimator. Our contributions are as follows:
• An update procedure for the generator to incorporate gradient information from the discriminator during generator training.
• Lower memory and computational requirements than other RL-based counterparts.
• Near SOTA results without maximum likelihood pretraining.
Please see Appendix A for a detailed description of the notation used in this paper.
2 Background
The Generative Adversarial Network (GAN) proposed in Goodfellow et al. (2014) is an innovative approach to the generative modeling problem. Rather than using the maximum likelihood estimation (MLE) directly to learn a probabilistic model, the GAN is a two-player minimax game in which the goal of one player, the generator Gθ, is to generate samples x̂ from pθ, and the goal of the other player, the discriminator Dϕ, is to learn to classify whether or not a sample was generated from real data pdata or the generator.
$$V_D = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D_\phi(x)] + \mathbb{E}_{\hat{x} \sim p_\theta}[\log(1 - D_\phi(\hat{x}))] \quad (1)$$
$$V_G = \mathbb{E}_{\hat{x} \sim p_\theta}[\log D_\phi(\hat{x})] \quad (2)$$
where VD and VG are respectively the objective functions of the discriminator Dϕ and the generator Gθ. Equation 2 is the alternative generator loss suggested by the original work as its gradient does not vanish when Dϕ(x̂) is small. In the standard GAN architecture, the generator’s output is directly connected as the input to the discriminator in a fully differentiable manner, which means the gradients from the discriminator’s loss function can be back-propagated to the parameters of the generator. However, it requires sampling a sequence of tokens from a discrete distribution in text generation, which is essentially non-differentiable. In order to avoid the intractability of the gradients, text GANs resort to some sort of estimation.
2.1 Continuous Relaxation
Continuous relaxation approaches such as Gumbel-Softmax (Jang et al., 2016) approximate a stochastic categorical distribution in terms of a deterministic continuous function. While this apparently allows us to remove the non-differentiable discrete sampling altogether (Kusner & Hernández-Lobato, 2016; Nie et al., 2019), it creates several serious issues. The continuous relaxation feeds the discriminator an expectation of embeddings, a weighted sum with no direct correspondence to an exact word or token. This means all the discriminator has to do is spot the difference between actual word embeddings and such expectations. In turn, the generator will try to compensate by producing extremely "spiky" predictions. Furthermore, this way of generating data creates a major inconsistency: during inference, the generator has to sample a discrete sequence from its distribution, whereas during training it is only trained to generate an expectation that looks plausible to the discriminator.
2.2 Score-Function Gradient Estimator
The score-function gradient estimator (Fu, 2006; Glynn, 1990), also known as REINFORCE (Williams, 1992), is a common solution to the non-differentiability issue mentioned above. Applying the REINFORCE algorithm, the gradient of the expectation of the reward function fϕ can be written as
$$\frac{\partial}{\partial\theta}\,\mathbb{E}_{\hat{x} \sim p_\theta}[f_\phi(\hat{x})] = \mathbb{E}_{\hat{x} \sim p_\theta}\!\left[f_\phi(\hat{x})\,\frac{\partial}{\partial\theta}\log p_\theta(\hat{x})\right]. \quad (3)$$
Since it does not require fϕ to be differentiable or even continuous as a function of x, the gradient of Ex̂∼pθ [fϕ(x̂)] can be back-propagated to the generator Gθ. In the context of GANs, Ex̂∼pθ [fϕ(x̂)] can be seen as the objective function of the generator and the reward function fϕ can be replaced with Dϕ. Although REINFORCE is an unbiased estimator, it still has a number of disadvantages such as high variance, low sample efficiency and the credit assignment problem. Therefore, much effort is devoted to reducing the variance using special methods (Gu et al., 2016; Grathwohl et al., 2017), or to providing more dense rewards such as in Yu et al. (2017); Che et al. (2017); Fedus et al. (2018); de Masson d’Autume et al. (2019), where Monte-Carlo roll-outs are used to obtain per-word rewards.
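For illustration, a minimal PyTorch sketch of such a score-function update is given below, with the discriminator output cast as the reward and a mean baseline for variance reduction. The `generator.sample` and `discriminator` interfaces are assumptions made for this sketch, not a particular implementation from the literature.

```python
import torch

def reinforce_generator_loss(generator, discriminator, batch_size, max_len):
    """Score-function (REINFORCE) surrogate loss: reward * sum_t log p(x_t).

    Assumes generator.sample returns sampled token ids (B, T) together with
    their per-step log-probabilities (B, T), and that discriminator returns
    a probability of being real for each sequence.
    """
    tokens, log_probs = generator.sample(batch_size, max_len)
    with torch.no_grad():
        reward = torch.log(discriminator(tokens))   # V_G-style reward, shape (B,)
        baseline = reward.mean()                    # simple variance reduction
    # Maximizing E[reward] corresponds to minimizing this surrogate.
    return -((reward - baseline).unsqueeze(1) * log_probs).sum(dim=1).mean()
```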
2.3 Straight-Through Gradient Estimator
Another approach is using a straight-through gradient estimator (Bengio et al., 2013; Jang et al., 2016). The basic idea is to perform non-differentiable operation during the forward pass, but approximate it with a differentiable proxy during the backward pass. Consider sampling from a categorical distribution (notation borrowed from Jang et al. (2016)): The following operation is performed during the forward pass
z = one-hot(x̂) (4)
where a categorical sample x̂ is encoded as a v-dimensional one-hot vector z (v being the given vocabulary size). During the backward pass, the corresponding straight-through estimator of function f with respect to θ is
$$\hat{g}_\theta = \left(\frac{\partial f}{\partial z}\right)^{\!\top}\frac{\partial m_\theta}{\partial\theta} \quad (5)$$
derived by using the approximation $\frac{\partial z}{\partial\theta} \approx \frac{\partial m_\theta}{\partial\theta}$, where $m_\theta$ is a differentiable proxy for the non-differentiable one-hot encoding vector $z$ (Jang et al., 2016). Moreover, Jang et al. (2016) suggest the choice of
$$p_\theta = [p_\theta(x_1), \ldots, p_\theta(x_v)]^{\top} \quad (6)$$
as proxy, where {x1, . . . , xv} are all possible tokens in vocabulary V . Compared to the former method, this approach updates all the possible tokens’ probability, even if the tokens were not sampled.
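A minimal PyTorch sketch of this trick: the forward pass emits the hard one-hot sample z, while the backward pass routes gradients through the softmax probabilities of Equation 6. This is a generic pattern rather than code from any specific system.

```python
import torch
import torch.nn.functional as F

def straight_through_sample(logits):
    """Forward: hard one-hot sample z. Backward: gradient of the softmax proxy."""
    probs = F.softmax(logits, dim=-1)                              # m_theta = p_theta (Eq. 6)
    index = torch.multinomial(probs, num_samples=1).squeeze(-1)    # categorical sample x_hat
    hard = F.one_hot(index, num_classes=logits.size(-1)).float()   # z = one-hot(x_hat)
    # Equals `hard` in the forward pass; its gradient w.r.t. the logits is that of `probs`.
    return hard + probs - probs.detach()
```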
3 Proposed Method
The non-differentiable connection between the generator Gθ and the discriminator Dϕ is our primary focus in this paper (Figure 1). Starting from the straight-through gradient estimator, we derive a re-centered estimator that helps the generator learn to select better tokens.
3.1 Gradient Estimation in the Generator
Consider a discriminator whose first layer is an embedding lookup layer. During forward propagation, the operation below is performed first:
$$e_{\hat{x}} = E.\mathrm{lookup}(\hat{x}) = Ez \quad (7)$$
where $E = [e_1, \ldots, e_v]$ and $z$ is the one-hot encoding of the sampled token $\hat{x}$. As described in Section 2.3, this is a non-differentiable categorical sampling operation with a straight-through estimator
of the gradient of VG with respect to θ, formulated as follows:
$$\hat{g}_\theta = \sum_{t=1}^{T}\left(\frac{\partial V_G}{\partial z_t}\right)^{\!\top}\frac{\partial p_{t,\theta}}{\partial\theta} = \sum_{t=1}^{T}\left(E^{\top}\frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right)^{\!\top}\frac{\partial p_{t,\theta}}{\partial\theta} \quad (8)$$
$$= \sum_{t=1}^{T}\sum_{x \in V}\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta} \quad (9)$$
where $\hat{x}_t$ is the selected token at time-step $t$, $T$ is the maximum time-step and $V$ is the given vocabulary. The first term of Equation 8 can be derived from Equation 7 by applying the chain rule.
3.2 Re-centered Estimator
We turn our attention to the first term of Equation 9:
$$\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (10)$$
Denoting it as $\delta_{\hat{x}_t \to x}[V_G]$, Equation 9 becomes
$$\hat{g}_\theta = \sum_{t=1}^{T}\sum_{x \in V}\delta_{\hat{x}_t \to x}[V_G]\,\frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta}. \quad (11)$$
The coefficient $\delta_{\hat{x}_t \to x}[V_G]$ can be considered as an affinity factor indicating how much the discriminator is ‘attracted’ to the token $x$. Recall that the gradient $\frac{\partial V_G}{\partial e}(e_{\hat{x}})$ has the direction of greatest increase of the objective $V_G$ at $e_{\hat{x}}$. Since this direction is relative to $e_{\hat{x}}$, we suggest taking the inner product with the vector $e_x - e_{\hat{x}}$, which has been re-centered relative to $e_{\hat{x}}$ (Figure 2).
$$\left\langle e_x - e_{\hat{x}_t},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (12)$$
This inner product approximates the increase of $V_G$ when $\hat{x}_t$ is replaced by $x$. Because this approximation only holds in the neighborhood of $e_{\hat{x}_t}$, i.e. $|e_x - e_{\hat{x}_t}| \ll 1$, we also add a kernel to dampen the affinity factor for embeddings $e_x$ that are far away. That is,
$$\delta^{RC}_{\hat{x}_t \to x}[V_G] = K(e_x, e_{\hat{x}_t})\left\langle e_x - e_{\hat{x}_t},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (13)$$
where
$$K(u, v) = \frac{1}{\sqrt{1 + |u - v|^2}}. \quad (14)$$
See Appendix B for more details. As a result, Equation 9 becomes
$$\hat{g}^{RC}_\theta = \sum_{t=1}^{T}\sum_{x \in V}\delta^{RC}_{\hat{x}_t \to x}[V_G]\,\frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta}. \quad (15)$$
In score-function based approaches, the generator is only given feedback on the tokens it samples, so it requires a large amount of trial-and-error in order to figure out how to write realistic sentences. To use an analogy, imagine a captain at sea (the embedding space) searching for treasure. Each iteration of the game, the captain chooses random locations in the space and receives feedback regarding the choices. Score-function based approaches tell us how good the location is but say nothing about how to improve it. In contrast, our method gives the captain a navigational direction to the next best location. Even if the direction is biased, it should allow the captain to find the treasure with fewer iterations.
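To make the estimator concrete, the sketch below computes the re-centered affinity factors of Equations 13-14 for a single time-step; the array names and shapes are our assumptions for illustration.

```python
import torch

def recentered_affinity(E, e_sel, grad_e):
    """delta^RC_{x_hat -> x} for every vocabulary token x (Eqs. 13-14).

    E:      (V, d) discriminator embedding matrix.
    e_sel:  (d,)   embedding of the sampled token x_hat_t.
    grad_e: (d,)   dV_G/de evaluated at e_sel.
    """
    diff = E - e_sel                                            # re-centering: e_x - e_{x_hat}
    kernel = 1.0 / torch.sqrt(1.0 + (diff ** 2).sum(dim=1))     # K(e_x, e_{x_hat}), Eq. 14
    return kernel * (diff @ grad_e)                             # (V,) affinity factors

# These factors then weight dp_theta(x | x_hat_{1:t-1}) / dtheta as in Eq. 15, e.g. via a
# surrogate loss (affinity.detach() * p_t).sum() whose gradient reproduces Eq. 15.
```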
4 Evaluation Metrics
We evaluate our model’s ability to generate realistic and diverse texts with BLEU-based metrics, the Fréchet Embedding Distance, and language-model-based metrics.
4.1 BLEU and Self-BLEU
BLEU (Papineni et al., 2002) and Self-BLEU (Zhu et al., 2018) capture text quality and diversity respectively. BLEU, a modified form of n-gram precision, measures local consistency between a set of reference and a set of candidate texts. Self-BLEU is a version of BLEU in which both the reference and candidate texts are drawn from the same corpus. A high Self-BLEU means that members within the corpus are too highly correlated, indicating a lack of diversity.
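For concreteness, Self-BLEU can be computed as in the sketch below, scoring each generated sentence against the remaining generated sentences; the NLTK smoothing choice here is illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(corpus, n=5):
    """Average BLEU-n of each sentence against the rest of the same corpus.

    corpus: list of tokenized sentences, e.g. [["a", "man", "on", "a", "bike"], ...]
    """
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    scores = []
    for i, hyp in enumerate(corpus):
        refs = corpus[:i] + corpus[i + 1:]
        scores.append(sentence_bleu(refs, hyp, weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)
```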
4.2 Fréchet Embedding Distance
Fréchet Embedding Distance (FED) (de Masson d’Autume et al., 2019) measures semantic similarity, global consistency and diversity between reference and candidate texts. Semeniuta et al. (2018) claim that Fréchet Distance (FD) is not sensitive to the choice of embedding model. However, we notice significant discrepancies between FD scores calculated using different embedding models. The details are discussed in Appendix G.
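For reference, the Fréchet distance underlying FED fits a Gaussian to each set of sentence embeddings and compares the two; the sketch below assumes the embeddings (e.g., from the Universal Sentence Encoder) are computed separately.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_embedding_distance(emb_real, emb_gen):
    """Frechet distance between Gaussians fitted to real / generated embeddings.

    emb_real, emb_gen: (N, d) arrays of sentence embeddings.
    """
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical noise can make the sqrt complex
        covmean = covmean.real
    return float(((mu_r - mu_g) ** 2).sum() + np.trace(cov_r + cov_g - 2.0 * covmean))
```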
4.3 Language Model Scoring Methods
Following Caccia et al. (2018); Zhao et al. (2017), we also evaluate the quality and the diversity of generated samples by Language Model Perplexity (LM) and Reverse Language Model Perplexity (RLM). Language Model Score (LM). We use a language model trained on real data to estimate the negative log likelihood per word of generated sentences. Samples which the language model considers improper, such as ungrammatical or nonsense sentences, have high perplexity. Reverse Language Model Score (RLM). If we instead train a language model on generated data while evaluating on real data, we can judge the diversity of the generated samples. Low-diversity generated data leads to an overfitted model which generalizes poorly on real data, as indicated by a high perplexity.
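A sketch of the LM score is shown below; the RLM score is obtained by swapping the roles of the two corpora (train the language model on generated text, evaluate on real text). The `lm.log_prob` interface is an assumption.

```python
import math

def lm_score(lm, sentences):
    """Per-word perplexity of tokenized `sentences` under language model `lm`.

    Assumes lm.log_prob(sentence) returns the total natural-log likelihood
    of one tokenized sentence.
    """
    total_logprob, total_words = 0.0, 0
    for sent in sentences:
        total_logprob += lm.log_prob(sent)
        total_words += len(sent)
    return math.exp(-total_logprob / total_words)
```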
5 Results & Discussion
In this section, we use two datasets to evaluate CaptainGAN: COCO Image Captions1 and EMNLP 2017 News 2. We discuss our results from different perspectives via the evaluation metrics presented in the last section. Furthermore, an ablation study is performed in order to know the contribution of each technique. More details about our experiment setup, datasets, architecture and training techniques such as temperature control are described in Appendix D.
5.1 Quality and Diversity
On metrics of local text consistency and diversity, CaptainGAN significantly outperforms prior GAN-based models which rely heavily on pre-training, with the exception of RelGAN. The BLEU/Self-BLEU temperature sweeps shown in Figure 3a for CaptainGAN indicate that CaptainGAN approaches the performance of a language model. On the other hand, in Figure 3b, the language model scores show similar results for CaptainGAN and MLE model. Results for COCO are shown in Appendix E.
5.2 Temperature Sensitivity
CaptainGAN is robust to temperature variations. This is demonstrated in Figure 4a, which depicts how the softmax temperature influences the FED score. Changes in the softmax temperature affect the FED regardless of the model; however, CaptainGAN shows the least variation.
5.3 Global Consistency
CaptainGAN improves global consistency relative to prior work and to the language model, as measured by LM perplexity and the FED score, respectively. As shown in Table 1, our method significantly reduces the perplexity gap between GAN-based methods and the language model, which is directly trained to minimize perplexity. In addition, CaptainGAN outperforms the MLE-trained model on the FED score, as shown in Table 1.
1. The preprocessed COCO dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/coco
2. The preprocessed EMNLP dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/news
Table 1 (columns: Model, Train FED, Val. FED, LM Score)
5.4 Matching N-grams of Training Data
Text-generating models are often criticized for simply memorizing texts seen during training instead of learning to model the true data distribution. Figure 4b shows the longest matching n-grams between the training data and generated samples. CaptainGAN generates more samples with small matching n-grams (n < 5) than the MLE model, implying that our model can generate novel sentences without resorting to memorizing phrases from the training data.
5.5 CaptainGAN As A Language Model
CaptainGAN’s strong performance on the evaluation metrics described earlier shows that it is capable of generating realistic text. Since the generator in our case is a RNN which can produce conditional probabilities at each timestep, this raises the question of whether or not an explicit language model is being learned. We feed real data to our generator and calculate the perplexity to measure the generator’s suitability as a language model. Surprisingly, as shown in Table 2, the perplexity of the generator on both training and validation data is unusually high. A similar phenomenon was noted in de Masson d’Autume et al. (2019), though not as extreme. We explain this phenomenon as follows: MLE models are trained specifically for low perplexity, since they minimize KL divergence. However, CaptainGAN minimizes alternative loss (Equation 2), which is similar to minimizing reverse KL divergence (Arjovsky & Bottou,
2017). This objective assigns an extremely low cost to mode dropping and does not force the generator to mimic all aspects of the real data. This can result in poor modeling of the likelihood, but does not necessarily lead to poor sample generation (Theis et al., 2016; Hashimoto et al., 2019). Following Section 5.4, we conclude that our model’s ability to generate realistic text cannot be the result of simply plagiarizing training samples.
Table 2 (columns: Model, Train Perp., Val. Perp.)
5.6 Comparison to score function-based methods
Our method is able to outperform score function-based approaches using small batch sizes. Score function-based approaches often rely on large batch sizes for variance reduction (de Masson d’Autume et al., 2019), but this does not appear to be necessary for our recentered estimator. This confirms the reasoning at the end of Section 3.2, and greatly reduces our memory requirements during training. In addition, since gradients are naturally calculated for every timestep, there is no need for Monte-Carlo estimation of rewards like in score function-based approaches. Because our generator receives a richer learning signal, we use a 5:1 ratio for discriminator-generator updating. This greatly lessens our computational requirements.
5.7 Ablation Study
To understand the impact of each component of the CaptainGAN, we conduct an ablation study. In Table 3, we show the influence of adding new features, namely spectral normalization, re-center estimator, pretrained embeddings and discriminator regularization decoupled weight decay (Loshchilov & Hutter, 2018) and one-sided label smoothing (Salimans et al., 2016), by scoring FED on validation data. We observe that:
• As shown in Zhou et al. (2019), an unrestricted objective leads to an uninformative gradient. Without spectral normalization, the Lipschitz constant of the discriminator is unbounded making it difficult to update the generator with a gradient-based approach.
• Rather unexpectedly, pretrained embeddings do not lead to any significant improvement, despite our method’s dependence on the discriminator’s embedding space. The embeddings the discriminator learns from scratch are capable of giving the generator an effective learning signal.
• The application of the re-centered estimator is responsible for a 20% improvement in FED, confirming its effectiveness.
• Equation 14 is dependent on the norm of the embeddings, which explains the effectiveness of discriminator weight regularization at improving FED.
6 Conclusion and Future Work
In this work we have presented CaptainGAN, an effective gradient-based method for quickly training a text generating GAN without pretraining. Starting from the straight-through estimator, we derive a re-centered gradient estimator that improves the quality/diversity of generated texts as measured by various standard metrics. In future work, we plan to investigate our estimator in more theoretical detail and further decrease its bias.
Appendix A Notation
Symbol : Meaning
V = {x1, . . . , xv} : a predefined vocabulary
x : a token belonging to V
x̂ : a token sampled from V
x (boldface) : a sequence of tokens belonging to V
x̂ (boldface) : a sequence of tokens sampled from V
⊤ : the transpose operation
Appendix B The Importance of Kernel
To verify the necessity of the kernel term (Equation 14), we directly apply Equation 12 to Equation 11 as follows:
$$\sum_{x \in V}\left\langle e_x - e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} = \sum_{x \in V}\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} - \sum_{x \in V}\left\langle e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} \quad (16)$$
where $\hat{x}$ is the selected token and $x$ is a token in the vocabulary $V$.
Let $f(x) = \left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle$ and $b = \left\langle e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle$. Since $b$ does not depend on $x$, it is constant in the summation:
$$\sum_{x \in V}(f(x) - b)\,\frac{\partial p_\theta(x)}{\partial\theta} = \frac{\partial}{\partial\theta}\Big[\sum_{x \in V}(f(x) - b)\,p_\theta(x)\Big] = \frac{\partial}{\partial\theta}\Big[\Big(\sum_{x \in V} f(x)\,p_\theta(x)\Big) - b\sum_{x \in V} p_\theta(x)\Big] = \frac{\partial}{\partial\theta}\Big[\Big(\sum_{x \in V} f(x)\,p_\theta(x)\Big) - b \cdot 1\Big] = \frac{\partial}{\partial\theta}\sum_{x \in V} f(x)\,p_\theta(x) \quad (17)$$
Without the kernel, the update to θ will be the same as prior work and thus carries the same drawback.
Appendix C Model Structure
We use a pre-trained word embedding of 300 dimensions to initialize all embedding weights. The embeddings are pre-trained using the fastText library Bojanowski et al. (2016) on the corresponding training data of each task.
C.1 Generator
The purpose of the first dense layer is to project the GRU cell outputs into the embedding space. The second dense layer converts the projected embedding vectors to logits over the tokens and is initialized as the transpose of the embedding weights. At the first timestep, the input to the embedding layer is the start-of-sentence token. The vocabulary also contains an end-of-sentence token. If the generator outputs this token at any timestep, the sentence ends and its length is set to this timestep.
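A minimal PyTorch sketch of a generator with this shape is given below; the hidden size and exact module layout are illustrative assumptions rather than the configuration used in the experiments.

```python
import torch
import torch.nn as nn

class SketchGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.to_emb = nn.Linear(hidden_dim, emb_dim)                 # project GRU outputs into embedding space
        self.to_logits = nn.Linear(emb_dim, vocab_size, bias=False)  # embedding-space vectors -> token logits
        with torch.no_grad():                                        # initialize output layer from the embedding matrix
            self.to_logits.weight.copy_(self.embedding.weight)

    def forward(self, tokens, hidden=None):
        out, hidden = self.gru(self.embedding(tokens), hidden)
        return self.to_logits(self.to_emb(out)), hidden
```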
To perform consistent convolution on sentences of different lengths, we use a special masking mechanism on all layers. For all convolution and pooling layers, we mask out input features where the receptive field is out-of-bounds with respect to the unpadded input sentence.
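One way such masking could look is sketched below for a stride-1 convolution without padding: outputs whose receptive field extends past the unpadded sentence length are zeroed. This is our illustration of the mechanism, not the exact implementation.

```python
import torch

def mask_conv_output(features, lengths, kernel_size):
    """Zero out 1-D convolution outputs whose receptive field is out of bounds.

    features: (B, C, T_out) outputs of a stride-1, unpadded convolution.
    lengths:  (B,) unpadded sentence lengths.
    """
    t_out = features.size(-1)
    positions = torch.arange(t_out, device=features.device).unsqueeze(0)  # (1, T_out)
    valid = positions + kernel_size <= lengths.unsqueeze(1)               # (B, T_out)
    return features * valid.unsqueeze(1).float()
```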
Appendix D Experimental details
D.1 Setup
D.1.1 Datasets
We use two datasets: the COCO Image Caption dataset (Chen et al., 2015) and the EMNLP 2017 News dataset3. Results are reported on EMNLP 2017 News if not specified. For each dataset, 10k sentences are set aside as validation data. COCO Image Captions. Sentences are limited in length to 24 tokens. The vocabulary size is 4.6k tokens. Training data consists of 10k sentences.4 EMNLP 2017 News. Sentences in this dataset are much longer and more complicated, with a maximum length of 50 tokens and a vocabulary of 5.7k words. Training data consists of 300k sentences.5
D.1.2 Architecture and Techniques
Our architecture is described in Appendix C. Spectral normalization of the discriminator’s weights Miyato et al. (2018) is critical to our method as without it the gradients are too unstable and convergence becomes impossible (Section 5.7).
D.1.3 Temperature Control
Following Caccia et al. (2018), we adjust the softmax temperature parameter at sampling time to measure the trade-off between quality and diversity. Increasing the temperature lowers the differences between softmax probabilities, which leads to diverse but low-quality samples. On the other hand, reducing the temperature leads to high-quality yet low-diversity samples.
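Concretely, temperature control just divides the logits by the temperature before the softmax, as in the short sketch below.

```python
import torch
import torch.nn.functional as F

def sample_with_temperature(logits, temperature=1.0):
    """Sample token ids from temperature-scaled logits of shape (B, V)."""
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```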
D.2 training details
The discriminator and generator updates are performed with a 5:1 ratio. We use Adam Kingma & Ba (2014) with learning rate = 5.0 · 10−5, β1 = 0.5, and β2 = 0.999. Decoupled weight decay regularization Loshchilov & Hutter (2018) is applied to all variables in the discriminator, and λ = 0.05 · learning rate. All our models are trained using an NVIDIA GTX 1080 Ti. To evaluate samples on BLEU-5, LM versus RLM, and FED, we use 10k validation sentences as reference texts for both COCO and EMNLP 2017 News. For Self-BLEU, we randomly select half of the 10k samples as reference texts and leave the remainder as target texts. A smoothing function is used on BLEU-based metrics. See Appendix D.4 for more details. The maximum likelihood model architecture in this paper follows the setting of de Masson d’Autume et al. (2019). We could not reproduce the exact results reported in de Masson d’Autume et al. (2019), which we believe stems from discrepancies between our trained language model and theirs.
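A sketch of the optimizer setup described above (decoupled weight decay applied only to the discriminator); the builder function itself is just a convenience for illustration.

```python
import torch

def build_optimizers(generator, discriminator, lr=5e-5):
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    d_opt = torch.optim.AdamW(discriminator.parameters(), lr=lr, betas=(0.5, 0.999),
                              weight_decay=0.05 * lr)  # decoupled weight decay, lambda = 0.05 * lr
    return g_opt, d_opt
```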
D.3 One-sided label smoothing
One-sided label smoothing has been shown to reduce the vulnerability of neural networks to adversarial examples and is strongly recommended for GAN training (Salimans et al., 2016). Therefore, the first term of Equation 1 is reformulated as
$$\mathbb{E}_{x \sim p_{\mathrm{data}}}[\alpha \log D_\phi(x) + (1 - \alpha)\log(1 - D_\phi(x))] \quad (18)$$
where $\alpha = 0.9$ in our experiments.
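A sketch of the resulting discriminator loss (Equation 18 for the real term plus the fake term from Equation 1); d_real and d_fake are assumed to be discriminator probabilities for real and generated batches.

```python
import torch

def discriminator_loss(d_real, d_fake, alpha=0.9, eps=1e-8):
    """Negative of V_D with one-sided label smoothing on the real term (Eq. 18)."""
    real_term = alpha * torch.log(d_real + eps) + (1.0 - alpha) * torch.log(1.0 - d_real + eps)
    fake_term = torch.log(1.0 - d_fake + eps)   # fake-data term from Eq. 1
    return -(real_term.mean() + fake_term.mean())
```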
3. http://www.statmt.org/wmt17/
4. The preprocessed COCO dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/coco
5. The preprocessed EMNLP dataset is available at https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/news
D.4 BLEU smoothing
One issue with BLEU is that if any higher-order n-gram precision of a sentence is 0, the BLEU score will be 0, resulting in severe underestimation. This is because BLEU is calculated as the geometric mean of the n-gram precisions. To address this, we applied the following smoothing technique:
$$m'_n = \epsilon \quad \text{if } m_n = 0 \quad (19)$$
where $m_n$ is the original match count and $m'_n$ is the one used during BLEU calculation. We picked the smoothing factor $\epsilon = 0.1$ as proposed by Chen & Cherry (2014).
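In code, this smoothing amounts to a one-line substitution on the per-order match counts before the geometric mean is taken; a minimal sketch:

```python
def smoothed_precisions(match_counts, total_counts, eps=0.1):
    """n-gram precisions with zero match counts replaced by eps (Eq. 19)."""
    return [(m if m > 0 else eps) / max(t, 1) for m, t in zip(match_counts, total_counts)]
```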
Appendix E COCO Results
See Figure 6.
Appendix F Sentence Length Consistency
The sentence lengths of samples generated by CaptainGAN closely follow those of the training data, as shown in Figure 7. This suggests that CaptainGAN is able to learn the distribution of sentence lengths through the gradient estimator alone, without additional mechanisms such as positional encoding or dense rewards.
Appendix G Details on Fréchet Embedding Distance
Calculation of the FED score depends on the Universal Sentence Encoder (Cer et al., 2018). However, the version of the Universal Sentence Encoder6 used in de Masson d’Autume et al. (2019) is incompatible with Tensorflow (Abadi et al., 2015) version 1.7 or newer, and it would be infeasible to limit the experimentation environment to such an old version of Tensorflow. Therefore, as a workaround, we decided to report our FED score, as shown in Table 6, with the Universal Sentence Encoder Large7 instead, which is compatible with the current version of Tensorflow (1.14 as of Jul 2019). While there are many benefits to using the FED metric, it is not without drawbacks. One drawback we have observed is that the FED score will be significantly underestimated
6. https://tfhub.dev/google/universal-sentence-encoder/2
7. https://tfhub.dev/google/universal-sentence-encoder-large/3
Table 6 (columns: Model, Train FED, Val. FED)
whenever we are using a small number of samples, as shown in Figure 8. Therefore, we’ve ensured that sufficient samples are used in our evaluation.
Appendix H Generated samples
Training samples from COCO and EMNLP 2017 News can be found in Table 7. Randomly picked samples from the MLE model and from CaptainGAN are available in Table 8 and Table 9, respectively.
COCO
- a person standing in a boat on the water . - three sheep are grazing on the city sidewalk . - kenya airways airplane wing , engine and cabin on the tarmac . - light poles in the snow with yellow traffic lights mounted on them . - we are looking at the floor between the toilet and the wall . - someone taking a photo of a small residential bathroom . - a cat sits on the seat of a bicycle and looks down at another cat on a snowy day . - large set of motorcycles all lined up down a street . EMNLP 2017 News - apple clearly doesn ’ t want to see the dollar value of its u . k . earnings fall by a similar amount . - we do look at the number of hours we produce , and measure that against the religious make - up of society . - the u . s . has struggled to find an effective ground force to take on isis in syria , where president obama has ruled out a u . s . ground combat role . - it is a happy and kind world that we live in on this show and that is where i hope we can live in real life . - while men and women may end up earning roughly the same amount in the same jobs , men are more likely to end up in higher - paying roles in the tech industry . - but while they were beaten by a better side , the tie did reveal what i think has been city ’ s biggest problem this season : they have lost the ability to score against good teams . - according to facebook ’ s policies , accounts can be suspended if law enforcement believe individuals are at risk of harm . - she has to take personal responsibility for this - when she was health secretary she was told by those who know best that her decision to cut student places would have a damaging impact .
Table 7: Training samples on EMNLP 2017 News and COCO dataset.
COCO
- the bathroom bowl with focus on the vanity . - a man racer in a car brushing a carved cake . - a lady riding a motorcycle and a street while sitting on a pedestal track . - a commercial man are standing in the bicycle next to the back of an orange computer . - an airplane the bus is headed as something in the sky . - people riding on a bike on a surfer surfing doing the ocean . - a plane propped up in front of a vehicle . - a man wearing a cat in front of a motorcycle . EMNLP 2017 News - a one - game event and a limited number this dropped by 38 per cent to find out what we could want , to be as much done as we finish as the games . - more than 30 per cent of calais camps at risk were taken between children and child refugees crossing the mediterranean to fall apart . - the uk government got on the deal when it came to other eu countries that allowed workplace checks for 26 . - so women i didn ’ t want my parents to work hard with me at the time of the task of making it presents . - black voters feel there may not be enough choice between former president george w . bush and i have to go along with problems and support the people that are in a swing state for president . - he sold out to its eighth list , but a stunning showing that what cost the ability to start the league failed ? - let ’ s catch him down his back phone one morning john f kennedy , where he is in a us meeting group with supporters of love not being successful in the fields . - then there ’ s a couple of chances that they won competition - - the result from what is of me going out there that i really will play for the foundation .
COCO
- a black motorcycle parked next to a river . - a bathroom with a mirror , a shower and a toilet . - two people sit on a building in front of a building . - a group of people walking around the corner of a building . - people are riding bikes on a city street . - there is a picture of a bathroom in a bathroom . - an empty city street with cars parked on the side of the road . - a man taking a picture of a white bathroom . EMNLP 2017 News - if i go to the party , it ’ s a disaster , and i ’ m going to keep it right . - i think i ’ ve had some great strength in my life and i want that to be here for me . - obama pointed to reporters claiming he was willing to discuss the situation , but the u . s . have been military concern by american troops , including a nuclear deal in march . - a 16 - year - old man was arrested after the death of a man shot and arrested him on suspicion on his behalf . - the problem is that they are hard to find out what they want to see where they are and what they can achieve their past two years . - now it ’ s important to me and i am thinking so hard to make it the best for a long time . - great britain is suggesting the u . s . - led coalition figures at the australian foreign investor have raised more than $ 250 , 000 in the construction of operations . - it is a simple way to believe that if it can happen , then i ’ m not the right choice .
Table 9: Randomly selected CaptainGAN samples on EMNLP 2017 News and COCO with temperature = 1.0.
1. What is the primary issue addressed by the paper in the context of Generative Adversarial Networks (GANs)?
2. What is the novel approach proposed by the authors to tackle the identified problem?
3. How does the proposed method improve the performance of the generator in GANs?
4. What are the experimental results that support the effectiveness of the proposed approach?
5. Are there any limitations or potential drawbacks associated with the proposed method?
Review
This paper attempts to solve the problem of the non-differentiable connection between the generator and discriminator of a GAN. The authors come up with an estimator of the gradient for the generator from the gradient of the discriminator, which was previously disconnected. With this change, the model should be able to select better tokens than random selection, which could lead to more robust training. The experimental results on both the COCO Image Captions and EMNLP 2017 News datasets justify the authors' argument.
ICLR
Title
CaptainGAN: Navigate Through Embedding Space For Better Text Generation
Abstract
Score-function-based text generation approaches such as REINFORCE, in general, suffer from high computational complexity and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a reward function and ignore the gradient information. In this paper, we propose a novel approach, CaptainGAN, which adopts the straight-through gradient estimator and introduces a ”re-centered” gradient estimation technique to steer the generator toward better text tokens through the embedding space. Our method is stable to train and converges quickly without maximum likelihood pre-training. On multiple metrics of text quality and diversity, our method outperforms existing GAN-based methods on natural language generation.
1 Introduction
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have led to many advances in image generation, image editing, style transfer, and representation learning (Karras et al., 2017; Brock et al., 2018; Karras et al., 2019). Unsurprisingly, much effort has been devoted to adopting the GAN framework for unsupervised text generation (Yu et al., 2017; Che et al., 2017; Balagopalan et al., 2018; Fedus et al., 2018; Guo et al., 2018; de Masson d’Autume et al., 2019; Zhang et al., 2017; Nie et al., 2019). However, as finding a Nash equilibrium is not as straightforward as finding a local optima, researchers have been forced to develop many ad-hoc tricks and techniques to make GAN training well-behaved. In the text generation setting, they are also faced with the additional obstacle of passing discrete tokens through a non-differentiable operation, which prohibits back-propagating the gradient signal to the generator. To address the issue of non-differentiablity, researchers and practitioners use score function gradient estimators such as REINFORCE to train GANs for text generation, where the discriminator is cast as a reward function for the generator. However, these methods still suffer from poor sample efficiency due to the credit assignment problem. We argue that it is disadvantageous to utilize the discriminator as simply a reward function when it is known that gradient-based backpropagation is a far more efficient way to perform credit assignment. In this paper, we propose a novel unsupervised text generation technique, called CaptainGAN, which propagates a modified gradient signal from discriminator to generator in order to improve the efficiency and accuracy of the estimator. Our contributions are as follows:
• An update procedure for the generator to incorporate gradient information from the discriminator during generator training.
• Lower memory and computational requirements than other RL-based counterparts.
• Near SOTA results without maximum likelihood pretraining.
Please see Appendix A for a detailed description of the notation used in this paper.
2 Background
The Generative Adversarial Network (GAN) proposed in Goodfellow et al. (2014) is an innovative approach to the generative modeling problem. Rather than using the maximum likelihood estimation (MLE) directly to learn a probabilistic model, the GAN is a two-player minimax game in which the goal of one player, the generator Gθ, is to generate samples x̂ from pθ, and the goal of the other player, the discriminator Dϕ, is to learn to classify whether or not a sample was generated from real data pdata or the generator.
$$V_D = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D_\phi(x)] + \mathbb{E}_{\hat{x} \sim p_\theta}[\log(1 - D_\phi(\hat{x}))] \quad (1)$$
$$V_G = \mathbb{E}_{\hat{x} \sim p_\theta}[\log D_\phi(\hat{x})] \quad (2)$$
where VD and VG are respectively the objective functions of the discriminator Dϕ and the generator Gθ. Equation 2 is the alternative generator loss suggested by the original work as its gradient does not vanish when Dϕ(x̂) is small. In the standard GAN architecture, the generator’s output is directly connected as the input to the discriminator in a fully differentiable manner, which means the gradients from the discriminator’s loss function can be back-propagated to the parameters of the generator. However, it requires sampling a sequence of tokens from a discrete distribution in text generation, which is essentially non-differentiable. In order to avoid the intractability of the gradients, text GANs resort to some sort of estimation.
2.1 Continuous Relaxation
Continuous relaxation approaches such as Gumbel-Softmax (Jang et al., 2016) approximate a stochastic categorical distribution in terms of a deterministic continuous function. While this apparently allows us to remove the non-differentiable discrete sampling altogether (Kusner & Hernández-Lobato, 2016; Nie et al., 2019), it creates several serious issues. The continuous distribution generates the expectation of embeddings - a weighted sum with no direct correspondence to an exact word or token. This means all the discriminator has to do is spot the difference the actual word embeddings and expectation. In turn, the generator will try to compensate by producing extremely ”spiky” predictions. Furthermore, this way of generating data creates a major inconsistency in that during inference , the generator has to sample a discrete sequence from distribution whereas during training, it’s only trained to generate an expectation that is feasible to discriminator.
2.2 Score-Function Gradient Estimator
The score-function gradient estimator (Fu, 2006; Glynn, 1990), also known as the REINFORCE (Williams, 1992) is a common solution for non-differentiable issue as mentioned above. Applying the REINFORCE algorithm, the gradient of the expectation of reward function fϕ can be written as
$$\frac{\partial}{\partial\theta}\,\mathbb{E}_{\hat{x} \sim p_\theta}[f_\phi(\hat{x})] = \mathbb{E}_{\hat{x} \sim p_\theta}\!\left[f_\phi(\hat{x})\,\frac{\partial}{\partial\theta}\log p_\theta(\hat{x})\right]. \quad (3)$$
Since it does not require fϕ to be differentiable or even continuous as a function of x, the gradient of Ex̂∼pθ [fϕ(x̂)] can be back-propagated to the generator Gθ. In the context of GANs, Ex̂∼pθ [fϕ(x̂)] can be seen as the objective function of the generator and the reward function fϕ can be replaced with Dϕ. Although REINFORCE is an unbiased estimator, it still has a number of disadvantages such as high variance, low sample efficiency and the credit assignment problem. Therefore, much effort is devoted to reducing the variance using special methods (Gu et al., 2016; Grathwohl et al., 2017), or to providing more dense rewards such as in Yu et al. (2017); Che et al. (2017); Fedus et al. (2018); de Masson d’Autume et al. (2019), where Monte-Carlo roll-outs are used to obtain per-word rewards.
2.3 Straight-Through Gradient Estimator
Another approach is using a straight-through gradient estimator (Bengio et al., 2013; Jang et al., 2016). The basic idea is to perform non-differentiable operation during the forward pass, but approximate it with a differentiable proxy during the backward pass. Consider sampling from a categorical distribution (notation borrowed from Jang et al. (2016)): The following operation is performed during the forward pass
z = one-hot(x̂) (4)
where a categorical sample x̂ is encoded as a v-dimensional one-hot vector z (v being the given vocabulary size). During the backward pass, the corresponding straight-through estimator of function f with respect to θ is
$$\hat{g}_\theta = \left(\frac{\partial f}{\partial z}\right)^{\!\top}\frac{\partial m_\theta}{\partial\theta} \quad (5)$$
derived by using the approximation $\frac{\partial z}{\partial\theta} \approx \frac{\partial m_\theta}{\partial\theta}$, where $m_\theta$ is a differentiable proxy for the non-differentiable one-hot encoding vector $z$ (Jang et al., 2016). Moreover, Jang et al. (2016) suggest the choice of
$$p_\theta = [p_\theta(x_1), \ldots, p_\theta(x_v)]^{\top} \quad (6)$$
as proxy, where {x1, . . . , xv} are all possible tokens in vocabulary V . Compared to the former method, this approach updates all the possible tokens’ probability, even if the tokens were not sampled.
3 Proposed Method
The non-differentiable connection between the generator Gθ and the discriminator Dϕ is our primary focus in this paper (Figure 1). Starting from the straight through gradient estimator, we derive a re-centered estimator which helps the generator learn to select better tokens.
3.1 Gradient Estimation in the Generator
Consider a discriminator whose first layer is an embedding lookup layer. During forward propagation, the operation below is performed first:
$$e_{\hat{x}} = E.\mathrm{lookup}(\hat{x}) = Ez \quad (7)$$
where $E = [e_1, \ldots, e_v]$ and $z$ is the one-hot encoding of the sampled token $\hat{x}$. As described in Section 2.3, this is a non-differentiable categorical sampling operation with a straight-through estimator
of the gradient of VG with respect to θ, formulated as follows:
$$\hat{g}_\theta = \sum_{t=1}^{T}\left(\frac{\partial V_G}{\partial z_t}\right)^{\!\top}\frac{\partial p_{t,\theta}}{\partial\theta} = \sum_{t=1}^{T}\left(E^{\top}\frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right)^{\!\top}\frac{\partial p_{t,\theta}}{\partial\theta} \quad (8)$$
$$= \sum_{t=1}^{T}\sum_{x \in V}\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta} \quad (9)$$
where $\hat{x}_t$ is the selected token at time-step $t$, $T$ is the maximum time-step and $V$ is the given vocabulary. The first term of Equation 8 can be derived from Equation 7 by applying the chain rule.
3.2 Re-centered Estimator
We turn our attention to the first term of Equation 9:
$$\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (10)$$
Denoting it as $\delta_{\hat{x}_t \to x}[V_G]$, Equation 9 becomes
$$\hat{g}_\theta = \sum_{t=1}^{T}\sum_{x \in V}\delta_{\hat{x}_t \to x}[V_G]\,\frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta}. \quad (11)$$
The coefficient $\delta_{\hat{x}_t \to x}[V_G]$ can be considered as an affinity factor indicating how much the discriminator is ‘attracted’ to the token $x$. Recall that the gradient $\frac{\partial V_G}{\partial e}(e_{\hat{x}})$ has the direction of greatest increase of the objective $V_G$ at $e_{\hat{x}}$. Since this direction is relative to $e_{\hat{x}}$, we suggest taking the inner product with the vector $e_x - e_{\hat{x}}$, which has been re-centered relative to $e_{\hat{x}}$ (Figure 2).
$$\left\langle e_x - e_{\hat{x}_t},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (12)$$
This inner product approximates the increase of $V_G$ when $\hat{x}_t$ is replaced by $x$. Because this approximation only holds in the neighborhood of $e_{\hat{x}_t}$, i.e. $|e_x - e_{\hat{x}_t}| \ll 1$, we also add a kernel to dampen the affinity factor for embeddings $e_x$ that are far away. That is,
$$\delta^{RC}_{\hat{x}_t \to x}[V_G] = K(e_x, e_{\hat{x}_t})\left\langle e_x - e_{\hat{x}_t},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}_t})\right\rangle \quad (13)$$
where
$$K(u, v) = \frac{1}{\sqrt{1 + |u - v|^2}}. \quad (14)$$
See Appendix B for more details. As a result, Equation 9 becomes
$$\hat{g}^{RC}_\theta = \sum_{t=1}^{T}\sum_{x \in V}\delta^{RC}_{\hat{x}_t \to x}[V_G]\,\frac{\partial p_\theta(x \mid \hat{x}_{1:t-1})}{\partial\theta}. \quad (15)$$
In score-function based approaches, the generator is only given feedback on the tokens it samples, so it requires a large amount of trial-and-error in order to figure out how to write realistic sentences. To use an analogy, imagine a captain at sea (embedding space) searching for treasure. Each iteration of the game, the captain chooses random locations in the space and receives feedback regarding the choices. Score-function based approaches tell us how good the location is but say nothing about how to improve it. In contrast, our method, gives the captain a navigational direction to the next best location. Even if the direction is biased, it should allow the captain to find the treasures with fewer iterations.
4 Evaluation Metrics
We evaluate our model’s ability to generate realistic and diverse texts with BLEU-based metric, Fréchet Embedding Distance and language model based metric.
4.1 BLEU and Self-BLEU
BLEU (Papineni et al., 2002) and Self-BLEU (Zhu et al., 2018) capture text quality and diversity respectively. BLEU, a modified form of n-gram precision, measures local consistency between a set of reference and a set of candidate texts. Self-BLEU is a version of BLEU in which both the reference and candidate texts are drawn from the same corpus. A high Self-BLEU means that members within the corpus are too highly correlated, indicating a lack of diversity.
4.2 Fréchet Embedding Distance
Fréchet Embedding Distance (FED) (de Masson d’Autume et al., 2019) measures sentimental similarity, global consistency and diversity between reference and candidate texts. Semeniuta et al. (2018) claims that Fréchet Distance (FD) is not sensitive to the choice of embedding model. However, we notice significant discrepancies between FD scores calculated using different embedding models. The details are discussed in Appendix G.
4.3 Language Model Scoring Methods
Following Caccia et al. (2018); Zhao et al. (2017), we also evaluate the quality and the diversity of generated samples by Language Model Perplexity (LM) and Reverse Language Model Perplexity (RLM). Language Model Score (LM). We use a language model trained on real data to estimate the negative log likelihood per word of generated sentences. Samples which the language model considers improper, such as ungrammatical or nonsense sentences, have high perplexity. Reverse Language Model Score (RLM). If we instead train a language model on generated data while evaluating on real data, we can judge the diversity of the generated samples. Low-diversity generated data leads to an overfitted model which generalizes poorly on real data, as indicated by a high perplexity.
5 Results & Discussion
In this section, we use two datasets to evaluate CaptainGAN: COCO Image Captions1 and EMNLP 2017 News 2. We discuss our results from different perspectives via the evaluation metrics presented in the last section. Furthermore, an ablation study is performed in order to know the contribution of each technique. More details about our experiment setup, datasets, architecture and training techniques such as temperature control are described in Appendix D.
5.1 Quality and Diversity
On metrics of local text consistency and diversity, CaptainGAN significantly outperforms prior GAN-based models which rely heavily on pre-training, with the exception of RelGAN. The BLEU/Self-BLEU temperature sweeps shown in Figure 3a for CaptainGAN indicate that CaptainGAN approaches the performance of a language model. On the other hand, in Figure 3b, the language model scores show similar results for CaptainGAN and MLE model. Results for COCO are shown in Appendix E.
5.2 Temperature Sensitivity
CaptainGAN is robust to temperature variations. This is demonstrated in Figure 4a, which depicts how the softmax temperature influences the FED score. Regardless of model, changes in softmax temperature are certain to affect the FED, however, CaptainGAN shows the least variation.
5.3 Global Consistency
CaptainGAN improves global consistency comparing to prior works and language model on LM perplexity and FED score, respectively. As shown in Table 1, our method significantly reduces the gap of perplexity between GAN-based method and language model, which is directly trained to minimize the perplexity. Besides, we show that CaptainGAN outperforms MLE trained model on FED score as shown in Table 1.
1The preprocessed COCO dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/coco
2The preprocessed EMNLP dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/news
Table 1 (columns: Model, Train FED, Val. FED, LM Score)
5.4 Matching N-grams of Training Data
Text generating models are often criticized for simply memorizing texts seen during training instead of learning how to model the true data distribution. We show the longest matching ngrams between text from the training data and samples in Figure 4b. CaptainGAN generates more samples with small matching n-grams (n < 5) comparing to MLE model, implying that our model can generate novel sentences without resorting to memorizing phrases from the training data.
5.5 CaptainGan As A Language Model
CaptainGAN’s strong performance on the evaluation metrics described earlier shows that it is capable of generating realistic text. Since the generator in our case is a RNN which can produce conditional probabilities at each timestep, this raises the question of whether or not an explicit language model is being learned. We feed real data to our generator and calculate the perplexity to measure the generator’s suitability as a language model. Surprisingly, as shown in Table 2, the perplexity of the generator on both training and validation data is unusually high. A similar phenomenon was noted in de Masson d’Autume et al. (2019), though not as extreme. We explain this phenomenon as follows: MLE models are trained specifically for low perplexity, since they minimize KL divergence. However, CaptainGAN minimizes alternative loss (Equation 2), which is similar to minimizing reverse KL divergence (Arjovsky & Bottou,
2017). This objective assigns an extremely low cost to mode dropping and does not force the generator to mimic all aspects of real data. This can result in poor modeling of the likelihood, but does not necessarily lead to poor sample generation. (Theis et al., 2016; Hashimoto et al., 2019). Following 5.4, we conclude that our model’s ability to generate realistic text cannot be the result of simply plagiarizing training samples.
Table 2 (columns: Model, Train Perp., Val. Perp.)
5.6 Comparison to score function-based methods
Our method is able to outperform score function-based approaches using small batch sizes. Score function-based approaches often rely on large batch sizes for variance reduction (de Masson d’Autume et al., 2019), but this does not appear to be necessary for our recentered estimator. This confirms the reasoning at the end of Section 3.2, and greatly reduces our memory requirements during training. In addition, since gradients are naturally calculated for every timestep, there is no need for Monte-Carlo estimation of rewards like in score function-based approaches. Because our generator receives a richer learning signal, we use a 5:1 ratio for discriminator-generator updating. This greatly lessens our computational requirements.
5.7 Ablation Study
To understand the impact of each component of the CaptainGAN, we conduct an ablation study. In Table 3, we show the influence of adding new features, namely spectral normalization, re-center estimator, pretrained embeddings and discriminator regularization decoupled weight decay (Loshchilov & Hutter, 2018) and one-sided label smoothing (Salimans et al., 2016), by scoring FED on validation data. We observe that:
• As shown in Zhou et al. (2019), an unrestricted objective leads to an uninformative gradient. Without spectral normalization, the Lipschitz constant of the discriminator is unbounded making it difficult to update the generator with a gradient-based approach.
• Rather unexpectedly, pretrained embeddings do not lead to any significant improvement, which is surprising given our method’s dependence on the discriminator’s embedding space. The embeddings which the discriminator learns from scratch are capable of giving the generator an effective learning signal.
• The application of the re-centered estimator is responsible for a 20% improvement in FED, confirming its effectiveness.
• Equation 14 is dependent on the norm of the embeddings, which explains the effectiveness of discriminator weight regularization at improving FED.
6 Conclusion and Future Work
In this work we have presented CaptainGAN, an effective gradient-based method for quickly training a text generating GAN without pretraining. Starting from the straight-through estimator, we derive a re-centered gradient estimator that improves the quality/diversity of generated texts as measured by various standard metrics. In future work, we plan to investigate our estimator in more theoretical detail and further decrease its bias.
Appendix A Notation
Symbol : Meaning
V = {x1, . . . , xv} : a predefined vocabulary
x : a token belonging to V
x̂ : a token sampled from V
x (boldface) : a sequence of tokens belonging to V
x̂ (boldface) : a sequence of tokens sampled from V
⊤ : the transpose operation
Appendix B The Importance of Kernel
To verify the necessity of the kernel term (Equation 14), we directly apply Equation 12 to Equation 11 as follows:
$$\sum_{x \in V}\left\langle e_x - e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} = \sum_{x \in V}\left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} - \sum_{x \in V}\left\langle e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle \frac{\partial p_\theta(x)}{\partial\theta} \quad (16)$$
where $\hat{x}$ is the selected token and $x$ is a token in the vocabulary $V$.
Let $f(x) = \left\langle e_x,\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle$ and $b = \left\langle e_{\hat{x}},\, \frac{\partial V_G}{\partial e}(e_{\hat{x}})\right\rangle$. Since $b$ does not depend on $x$, it is constant in the summation:
$$\sum_{x \in V}(f(x) - b)\,\frac{\partial p_\theta(x)}{\partial\theta} = \frac{\partial}{\partial\theta}\Big[\sum_{x \in V}(f(x) - b)\,p_\theta(x)\Big] = \frac{\partial}{\partial\theta}\Big[\Big(\sum_{x \in V} f(x)\,p_\theta(x)\Big) - b\sum_{x \in V} p_\theta(x)\Big] = \frac{\partial}{\partial\theta}\Big[\Big(\sum_{x \in V} f(x)\,p_\theta(x)\Big) - b \cdot 1\Big] = \frac{\partial}{\partial\theta}\sum_{x \in V} f(x)\,p_\theta(x) \quad (17)$$
Without the kernel, the update to θ will be the same as prior work and thus carries the same drawback.
Appendix C Model Structure
We use a pre-trained word embedding of 300 dimensions to initialize all embedding weights. The embeddings are pre-trained using the fastText library Bojanowski et al. (2016) on the corresponding training data of each task.
C.1 Generator
The purpose of the first dense layer is to project the GRU cell outputs into the embedding space. The second dense layer converts the projected embedding vectors to logits over the tokens and is initialized as the transpose of the embedding weights. At the first timestep, the input to the embedding layer is the start-of-sentence token. The vocabulary also contains an end-of-sentence token. If the generator outputs this token at any timestep, the sentence ends and its length is set to this timestep.
To perform consistent convolution on sentences of different lengths, we use a special masking mechanism on all layers. For all convolution and pooling layers, we mask out input features where the receptive field is out-of-bounds with respect to the unpadded input sentence.
Appendix D Experimental details
D.1 Setup
D.1.1 Datasets
We use two datasets: the COCO Image Caption dataset (Chen et al., 2015) and the EMNLP 2017 News dataset.3 Results are reported on EMNLP 2017 News unless otherwise specified. For each dataset, 10k sentences are set aside as validation data. COCO Image Captions: sentences are limited in length to 24 tokens, the vocabulary size is 4.6k tokens, and the training data consists of 10k sentences.4 EMNLP 2017 News: sentences in this dataset are much longer and more complicated, with a maximum length of 50 tokens and a vocabulary of 5.7k words; the training data consists of 300k sentences.5
D.1.2 Architecture and Techniques
Our architecture is described in Appendix C. Spectral normalization of the discriminator’s weights (Miyato et al., 2018) is critical to our method: without it, the gradients are too unstable and convergence becomes impossible (Section 5.7).
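For illustration, spectral normalization can be applied by wrapping each discriminator layer, e.g. with torch.nn.utils.spectral_norm in PyTorch. The paper's implementation is in TensorFlow, and the layer shapes below are placeholders, not the actual architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Placeholder discriminator block whose weights are spectrally normalized.
disc = nn.Sequential(
    spectral_norm(nn.Conv1d(300, 128, kernel_size=3, padding=1)),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    spectral_norm(nn.Linear(128, 1)),
)
scores = disc(torch.randn(4, 300, 24))  # (batch, emb_dim, seq_len) -> (batch, 1)
```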
D.1.3 Temperature Control
Following Caccia et al. (2018), we adjust the softmax temperature parameter at sampling time to measure the trade-off between quality and diversity. Increasing the temperature lowers the differences between softmax probabilities, which leads to diverse but low-quality samples. On the other hand, reducing the temperature leads to high-quality yet low-diversity samples.
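For example, a temperature-scaled sampler divides the logits by τ before the softmax; the sketch below is illustrative, with placeholder batch and vocabulary sizes.

```python
import torch

def sample_with_temperature(logits, tau=1.0):
    # tau > 1 flattens the distribution (more diversity, lower quality);
    # tau < 1 sharpens it (higher quality, less diversity).
    probs = torch.softmax(logits / tau, dim=-1)
    return torch.multinomial(probs, num_samples=1)

logits = torch.randn(3, 5000)   # batch of 3, vocabulary of 5000 (placeholders)
tokens = sample_with_temperature(logits, tau=1.5)
```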
D.2 training details
The discriminator and generator updates are performed with a 5:1 ratio. We use Adam Kingma & Ba (2014) with learning rate = 5.0 · 10−5, β1 = 0.5, and β2 = 0.999. Decoupled weight decay regularization Loshchilov & Hutter (2018) is applied to all variables in the discriminator, and λ = 0.05 · learning rate. All our models are trained using an NVIDIA GTX 1080 Ti. To evaluate samples on BLEU-5, LM versus RLM, and FED, we use 10k validation sentences as reference texts for both COCO and EMNLP 2017 News. For Self-BLEU, we randomly select half of the 10k samples as reference texts and leave the remainder as target texts. A smoothing function is used on BLEU-based metrics. See Appendix D.4 for more details. The maximum likelihood model architecture in this paper follows the setting of de Masson d’Autume et al. (2019). We could not reproduce the exact results reported in de Masson d’Autume et al. (2019), which we believe stems from discrepancies between our trained language model and theirs.
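The update schedule and stated optimizer hyperparameters can be sketched as follows. The networks and losses are placeholders, and PyTorch's AdamW decay convention may differ slightly from the paper's exact decoupled-weight-decay formulation; this is only an illustration of the 5:1 ratio.

```python
import torch
import torch.nn as nn

generator = nn.Linear(16, 16)            # placeholder networks
discriminator = nn.Linear(16, 1)

lr = 5e-5
g_opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
d_opt = torch.optim.AdamW(discriminator.parameters(), lr=lr,
                          betas=(0.5, 0.999), weight_decay=0.05 * lr)

for step in range(100):
    for _ in range(5):                   # 5 discriminator updates ...
        d_opt.zero_grad()
        d_loss = discriminator(torch.randn(32, 16)).mean()           # placeholder loss
        d_loss.backward()
        d_opt.step()
    g_opt.zero_grad()                    # ... per 1 generator update
    g_loss = -discriminator(generator(torch.randn(32, 16))).mean()   # placeholder loss
    g_loss.backward()
    g_opt.step()
```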
D.3 One-sided label smoothing
One-sided label smoothing has been shown to reduce the vulnerability of neural networks to adversarial examples and it is strongly recommended for GAN training Salimans et al. (2016). Therefore, the first term of Equation 1 is reformulated as
$$\mathbb{E}_{x \sim p_{data}}\big[\alpha \log D_\phi(x) + (1 - \alpha) \log(1 - D_\phi(x))\big] \qquad (18)$$
where α = 0.9 in our experiments.
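As an illustrative sketch (ours, not the paper's code): minimizing a binary cross-entropy loss against a smoothed real-label target of α is equivalent to maximizing the term in Equation 18 on real data.

```python
import torch
import torch.nn.functional as F

def d_real_loss_one_sided(real_logits, alpha=0.9):
    # BCE against target alpha; with D = sigmoid(real_logits), minimizing this
    # maximizes alpha*log D(x) + (1 - alpha)*log(1 - D(x)).
    targets = torch.full_like(real_logits, alpha)
    return F.binary_cross_entropy_with_logits(real_logits, targets)

loss = d_real_loss_one_sided(torch.randn(8, 1))   # placeholder discriminator logits
```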
3http://www.statmt.org/wmt17/ 4The preprocessed COCO dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/coco 5The preprocessed EMNLP dataset is available at https://github.com/pclucas14/ GansFallingShort/tree/master/real_data_experiments/data/news
D.4 BLEU smoothing
One issue with BLEU is that if any higher-order n-gram precision of a sentence is 0, the BLEU score collapses to 0, resulting in severe underestimation; this is because BLEU is the geometric mean of the n-gram precisions. To address this, we apply the following smoothing technique:

$$m'_n = \epsilon \quad \text{if } m_n = 0 \qquad (19)$$

where $m_n$ is the original match count and $m'_n$ is the count used during BLEU calculation. We pick the smoothing factor $\epsilon = 0.1$, as proposed by Chen & Cherry (2014).
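For illustration, NLTK's smoothing method 1 implements exactly this epsilon substitution; the sentences below are toy placeholders, not our evaluation data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smoother = SmoothingFunction(epsilon=0.1)        # m'_n = 0.1 when m_n = 0
reference = [["a", "man", "riding", "a", "bike", "on", "a", "city", "street", "."]]
hypothesis = ["a", "man", "riding", "a", "motorcycle", "on", "a", "street", "."]

bleu5 = sentence_bleu(reference, hypothesis,
                      weights=(0.2, 0.2, 0.2, 0.2, 0.2),   # BLEU-5
                      smoothing_function=smoother.method1)
print(bleu5)
```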
Appendix E COCO Results
See Figure 6.
Appendix F Sentence Length Consistency
The sentence-length distribution of samples generated by CaptainGAN closely matches that of the training data, as shown in Figure 7. This suggests that CaptainGAN is able to learn the distribution of sentence lengths through the gradient estimator alone, without additional mechanisms such as positional encoding or a dense reward.
Appendix G Details on Fréchet Embedding Distance
Calculation of the FED score depends on the Universal Sentence Encoder (Cer et al., 2018). However, the version of the Universal Sentence Encoder6 used in de Masson d’Autume et al. (2019) is incompatible with Tensorflow (Abadi et al., 2015) version 1.7 or newer, and it would be infeasible to restrict the experimentation environment to such an old version of Tensorflow. Therefore, as a workaround, we report our FED score, as shown in Table 6, with Universal Sentence Encoder Large7 instead, which is compatible with the current version of Tensorflow (1.14 as of Jul 2019). While there are many benefits to using the FED metric, it is not without drawbacks: one drawback we have observed is that the FED score is significantly underestimated whenever a small number of samples is used, as shown in Figure 8. Therefore, we ensured that sufficient samples are used in our evaluation.
6https://tfhub.dev/google/universal-sentence-encoder/2 7https://tfhub.dev/google/universal-sentence-encoder-large/3
Table 6 (header only): Model, Train FED, Val. FED
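As a rough sketch (not the authors' evaluation code), FED can be computed as the Fréchet distance between Gaussian fits of reference and generated sentence embeddings; the random arrays below stand in for Universal Sentence Encoder outputs.

```python
import numpy as np
from scipy import linalg

def frechet_distance(emb_real, emb_fake):
    """Fréchet distance between Gaussian fits of two embedding sets,
    each of shape (num_sentences, embedding_dim)."""
    mu_r, mu_f = emb_real.mean(axis=0), emb_fake.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_f = np.cov(emb_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

score = frechet_distance(np.random.randn(1000, 512), np.random.randn(1000, 512))
```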
Appendix H Generated samples
Training samples of COCO and EMNLP 2017 News can be found in Table 7. Randomly picked samples from MLE model and CaptainGAN are available in Table 8 and Table 9.
COCO
- a person standing in a boat on the water . - three sheep are grazing on the city sidewalk . - kenya airways airplane wing , engine and cabin on the tarmac . - light poles in the snow with yellow traffic lights mounted on them . - we are looking at the floor between the toilet and the wall . - someone taking a photo of a small residential bathroom . - a cat sits on the seat of a bicycle and looks down at another cat on a snowy day . - large set of motorcycles all lined up down a street . EMNLP 2017 News - apple clearly doesn ’ t want to see the dollar value of its u . k . earnings fall by a similar amount . - we do look at the number of hours we produce , and measure that against the religious make - up of society . - the u . s . has struggled to find an effective ground force to take on isis in syria , where president obama has ruled out a u . s . ground combat role . - it is a happy and kind world that we live in on this show and that is where i hope we can live in real life . - while men and women may end up earning roughly the same amount in the same jobs , men are more likely to end up in higher - paying roles in the tech industry . - but while they were beaten by a better side , the tie did reveal what i think has been city ’ s biggest problem this season : they have lost the ability to score against good teams . - according to facebook ’ s policies , accounts can be suspended if law enforcement believe individuals are at risk of harm . - she has to take personal responsibility for this - when she was health secretary she was told by those who know best that her decision to cut student places would have a damaging impact .
Table 7: Training samples on EMNLP 2017 News and COCO dataset.
COCO
- the bathroom bowl with focus on the vanity . - a man racer in a car brushing a carved cake . - a lady riding a motorcycle and a street while sitting on a pedestal track . - a commercial man are standing in the bicycle next to the back of an orange computer . - an airplane the bus is headed as something in the sky . - people riding on a bike on a surfer surfing doing the ocean . - a plane propped up in front of a vehicle . - a man wearing a cat in front of a motorcycle . EMNLP 2017 News - a one - game event and a limited number this dropped by 38 per cent to find out what we could want , to be as much done as we finish as the games . - more than 30 per cent of calais camps at risk were taken between children and child refugees crossing the mediterranean to fall apart . - the uk government got on the deal when it came to other eu countries that allowed workplace checks for 26 . - so women i didn ’ t want my parents to work hard with me at the time of the task of making it presents . - black voters feel there may not be enough choice between former president george w . bush and i have to go along with problems and support the people that are in a swing state for president . - he sold out to its eighth list , but a stunning showing that what cost the ability to start the league failed ? - let ’ s catch him down his back phone one morning john f kennedy , where he is in a us meeting group with supporters of love not being successful in the fields . - then there ’ s a couple of chances that they won competition - - the result from what is of me going out there that i really will play for the foundation .
COCO
- a black motorcycle parked next to a river . - a bathroom with a mirror , a shower and a toilet . - two people sit on a building in front of a building . - a group of people walking around the corner of a building . - people are riding bikes on a city street . - there is a picture of a bathroom in a bathroom . - an empty city street with cars parked on the side of the road . - a man taking a picture of a white bathroom . EMNLP 2017 News - if i go to the party , it ’ s a disaster , and i ’ m going to keep it right . - i think i ’ ve had some great strength in my life and i want that to be here for me . - obama pointed to reporters claiming he was willing to discuss the situation , but the u . s . have been military concern by american troops , including a nuclear deal in march . - a 16 - year - old man was arrested after the death of a man shot and arrested him on suspicion on his behalf . - the problem is that they are hard to find out what they want to see where they are and what they can achieve their past two years . - now it ’ s important to me and i am thinking so hard to make it the best for a long time . - great britain is suggesting the u . s . - led coalition figures at the australian foreign investor have raised more than $ 250 , 000 in the construction of operations . - it is a simple way to believe that if it can happen , then i ’ m not the right choice .
Table 9: Randomly selected CaptainGAN samples on EMNLP 2017 News and COCO with temperature = 1.0.
1. What is the novelty and significance of the proposed approach in training GANs on discrete sequences?
2. How does the reviewer assess the clarity and consistency of the notation used in the paper?
3. What are the strengths and weaknesses of the proposed method compared to prior works, particularly Kusner & Hernández-Lobato (2016)?
4. Is the gradient centering approach necessary for avoiding the drawbacks of score function-based approaches?
5. How does the reviewer evaluate the effectiveness of the proposed approach in improving BLEU and Self-BLEU scores, Fréchet Embedding Distance, Language Model Score, and Reverse Language Model Score?
Review
The submission proposes to train a GAN on discrete sequences using the straight-through Gumbel estimator introduced in Jang et al. (2016) in combination with gradient centering. The proposed approach is evaluated on COCO and EMNLP News in terms of BLEU and Self-BLEU scores, Fréchet Embedding Distance, Language Model Score, and Reverse Language Model Score.
My assessment is that the submission is below the acceptance bar, mainly due to clarity and novelty concerns. The proposed approach does have empirical backing, but I would argue that it is a very straightforward application of the straight-through Gumbel estimator to GANs, which is itself similar to existing work on applying the Gumbel-softmax estimator to GANs (Kusner & Hernández-Lobato, 2016). Detailed comments can be found below.
The submission does not feel self-contained. For instance, it borrows notation from Jang et al. (2016) without explicitly acknowledging it, and my personal experience is that reading Jang et al. (2016) beforehand makes a big difference in terms of clarity in Section 2.2.
The notation is inconsistent and confusing, and gets in the way of understanding the proposed approach. Here’s a (non-exhaustive) list of examples:
- The reward function is first introduced as f_\phi(\mathbf{x}) above Equation 3, but all subsequent mentions of the reward function use f_\phi(\hat{\mathbf{x}}).
- The \mathbf{m}_\theta variable is introduced in Equation 5 and is immediately replaced with \mathbf{p}_\theta, which adds notational overhead without any benefit.
- The difference between \hat{\mathbf{x}} and \hat{x} is not explained in the text. From the context I understand that \hat{x} is a categorical scalar in {1, …, V}; is this correct?
- In Equation 6, x_1, …, x_V are used to denote the *values* that \hat{x} can take. This clashes with the previous convention that \mathbf{x} is a sequence sampled from p_{data} (Equation 1). Given that convention and the difference between bolded and non-bolded variables discussed above, I would have expected that x_1, …, x_V would correspond to the categorical values of elements of the \mathbf{x} sequence. That contributes to confusion in Equation 9, where \mathbf{e}_{x_t} and p_\theta(x_t) are *not* time-dependent.
- Equation 8 sums over time steps, but the first summation that appears in Equation 8 does not make use of the temporal index. There is also a symbol collision for T, which is used both as the sequence length and as the "transpose" symbol.
As a result, the proposed centering method and the rationale for it is still not entirely clear to me. In particular, is the gradient centering approach necessary to avoid the drawback of score function-based approaches (i.e. the generator is only given feedback on the tokens it samples), or does the non-centered, straight-through variant of the proposed approach also avoid this drawback?
I’m also not convinced that the centering heuristic is a crucial component of the proposed approach when the biggest improvement observed over the straight-through baseline is obtained by adding spectral normalization. I would argue that the proposed approach is a straightforward application of the straight-through Gumbel gradient estimator to GAN training, which is similar in spirit to work by Kusner & Hernández-Lobato (2016) (not cited in the submission) -- the main difference being that the latter uses the Gumbel-softmax distribution directly and anneals the temperature parameter over the course of training. A comparison between the two would be warranted.
References:
- Kusner, M. J., & Hernández-Lobato, J. M. (2016). GANs for sequences of discrete elements with the Gumbel-softmax distribution. arXiv:1611.04051.
ICLR | Title
Few-shot graph link prediction with domain adaptation
Abstract
Real-world link prediction problems often deal with data from multiple domains, where data may be highly skewed and imbalanced. Similar problems in computer vision are often referred to as Few-Shot Learning (FSL) problems. However, for graph link prediction, this problem has rarely been addressed and explored. In this work, we propose an adversarial training-based framework that aims at improving link prediction for highly skewed and imbalanced graphs. We introduce a domain discriminator on pairs of graph-level embedding. We then use the discriminator to improve the model in an adversarial way, such that the graph embeddings generated by the model are domain agnostic. We test our proposal on three benchmark datasets. Our results demonstrate that when domain differences exist, our method creates better graph embeddings that are more evenly distributed across domains and generate better prediction outcomes.
1 INTRODUCTION
Real-world link prediction problems often deal with graphs with nodes from various domains. For example, we can separate the items from a product purchase network into multiple domains such as books, electronics, luxury items. A financial transaction network can have data coming from various countries. Different domains may have different distributions, but more importantly, domains are often not balanced in most real-world datasets. For example, we may expect to see hundreds of thousands of nodes for books but only a few hundred for luxury items. There might also be billions of financial transactions in developed countries but much fewer in developing countries. We observe a similar phenomenon even in academic graph datasets. For example, in the ogbn-arxiv (Hu et al. (2020)) dataset, which includes pre-prints from 40 computer science subjects, papers from the most popular subject take up to 16.13% of all the nodes. In contrast, the least represented subject only has 0.02% of the data.
Training graph models on imbalanced data without precaution significantly downgrades model performance, especially for domains with fewer samples. The training process is overwhelmed by the popular domains that have abundant data; these popular domains act as noise and hamper the model’s ability to fit the small domains effectively. Despite the prevalence of this problem, current research on this topic is not adequate (Zhao et al., 2021; Shi et al., 2020).
To improve model performance for small domains, we propose to train domain agnostic embedding that we can use for downstream tasks. The intuition is that if the latent representation learned by the graph model does not contain any domain-specific information, and therefore is domain-agnostic, it becomes more robust to domains with fewer data and even novel domains. We propose to use adversarial learning and force the graph model to learn domain agnostic embeddings.
Our idea of building domain-agnostic features across domains aligns with the few-shot learning (FSL) problem in computer vision (CV). FSL (Fei-Fei et al. (2006); Fink (2005)) refers to the type of ML algorithm that works with limited labeled data. A common FSL technique is to learn domainagnostic features first (with/without external data) and then let the model converge fast on to domains with limited shots (Long & Wang (2015)). It is relatively easier for CV tasks because stacked convolutional neural networks used in CV usually learn features in a general-to-specific sequence. In contrast, as stacking graph neural networks (GNN) layers only explore larger neighborhoods, the same method couldn’t be applied directly to graph data.
In addition, the CV literature has explored the idea of using adversarial learning to build better domain agnostic features. Domain-Adversarial Neural Network (DANN) (Ganin et al. (2016)) first introduces the idea of using a discriminator to align and separate semantic probability distributions. Further research (Motiian et al. (2017)) shows that it helps reduce the number of samples needed for each domain/class. Essentially, our work follows this path but further improves it with the enclosing subgraph idea to achieve better performances for graph link predictions.
We find that the method of extracting enclosing subgraphs, proposed in the SEAL model Zhang & Chen (2018) suits well for our setting because it not only boosts the number of trainable samples but also gives them different topological features. It is similar to what “cropping” does for images. CV Research has shown that data augmentation method, such as cropping (Qi et al. (2018); Zhang et al. (2018b)), introduces invariances for models to capture and is effective for FSL problems.
We summarize our main contributions as follows:
• We propose the idea of learning domain-agnostic graph embedding to improve the quality of link prediction by using adversarial learning and enclosing subgraphs.
• We define the concepts of “shots” for graph data, making it significantly easier to train, describe, and analyze graphs with multiple imbalanced domains.
• We use t-SNE plots to demonstrate the effectiveness of our method. It also allows practitioners to decide the expected gain from our proposed ideas.
2 RELATED WORKS
2.1 LINK PREDICTION
Link prediction focuses on predicting future or missing links between a pair of nodes. The current state-of-the-arts link prediction method is SEAL (Zhang & Chen (2018)). The core idea of SEAL is to extract enclosing subgraphs for a set of sampled positive and negative links. To be more specific, a positive link is a pair of nodes connected with an edge, and a negative link is a pair of nodes that are not connected. An enclosing subgraph is the union set of h-hop neighboring nodes of the pair of nodes. This data-preprocessing step lets SEAL transform the link prediction problem to a (sub)graph classification problem. It uses graph neural networks (GNNs) to generate the graph embedding, which feeds into a classifier and generates new predictions. Other than SEAL, Graph Auto-Encoder (GAE, Kipf & Welling (2016)) is an alternative approach. GAE works by generating node embedding for all nodes and then model the link likelihood based on the concatenated node embedding vectors for any node pairs. However, recent researches have shown this method heavily relies on the assumption that similar nodes have similar connections. In addition to these GNN-based methods, traditional heuristic methods, such as common neighbor (CN) and Adamic-Adar (AA), provide a completely different alternative. Given a pair of nodes, these non-parametric methods describe the topological structure of the surrounding neighborhood with a score. The advantage is that these methods could be used directly without training, but it’s challenging to utilize node/edge features with these methods.
2.2 FEW-SHOT LEARNING AND DOMAIN ADAPTATION
Few-shot learning is a special type of machine learning problem, which aims at obtaining good learning performances when supervised evidence information is limited (Wang et al. (2020)). Common FSL methods include data augmentation (cropping, rotating, generative models, etc) and metalearning (Sun et al. (2019)). Domain Adaptation is a type of Transfer Learning (TL) technique. It focuses on transferring learned knowledge from one domain to another while maintaining good performances. The core idea of domain adaptation is to learn a domain invariant representation, which is usually done by maximizing mutual information (Hjelm et al. (2019)) or training in an adversarial network (Goodfellow et al. (2014)).
Some recent works focus on the problem of few-shot learning with domain adaptation. The work from Motiian et al. (2017) also uses adversarial networks as the solution, but it introduced a 4-class confusion matrix classifier as the discriminator. The work from Zhao et al. (2020) explicitly added source/target per-class separation before doing embedding feature learning with domain adaptation.
2.3 ADVERSARIAL LEARNING
To learn domain agnostic models, we need to ensure that the learned graph embedding does not contain domain-specific information. To achieve this, a commonly used technique is adversarial learning, in which two models are trained against each other and play a minimax game based on game theory. For example, in the popular Generative Adversarial Network (GAN) (Goodfellow et al. (2014)), adversarial learning helps generators build more realistic images by having a discriminator that identifies whether an image is synthetic or real. There have been prior works done on using adversarial training on graph-related tasks. The work from Wang et al. (2018) uses GAN to improve various graph-related tasks, such as link prediction and node property prediction, by generating “fake” graphs. The work from Lei et al. (2019) also uses GAN to solve the challenging temporal link prediction problem for weighted dynamic networks.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
We define the few-shot link prediction problem with imbalanced domains as follows: in a given graph space S, we have access to graphs from m (m ∈ N, m ≥ 2) domains. Different domains may have variations in their marginal distributions. Among these m domains, m − 1 domains each have k (k ∈ N+) small graphs, which we define as graph “shots”. The last domain has only 1 shot. Our objective is to learn a link prediction model that leverages the domain differences to improve performance in the domain with 1 shot of data.
3.2 NOTATIONS
In terms of graph-related notations, each graph is denoted by G = (V,E, d), where V is the set of vertices, E is the set of edges and d is the type of domain, which could be a one-hot encoded vector with one extra digit representing any unseen domain. In the adjacency matrix A of G, Au,v = 1 if edge (u, v) ∈ E, and otherwise Au,v = 0. For any node u, we use N (u) to denote the set of neighboring nodes of u. In addition, we use superscript as a shorthand to denote the domain class. For example, for a graph from domain i, we would denote it as Gi.
3.3 DOMAINS
For our setting, we have a total of m domains, and each domain has one or more graphs or subgraphs, $\{D_k = \{G^k_1, \dots\}\}_{k=1}^{m}$, where $D_k$ denotes the k-th domain and graph $G^k_1$ is a realization of the random variable $G^k$. The effectiveness of our method relative to the baseline rests on the assumption that the distribution of the random variable G differs across domains: $p(G^k) \neq p(G^{k'})$ if $k \neq k'$.
3.4 LINK PREDICTION
The general form of a link prediction model is as follows: given (u, v, G), where u and v are a pair of nodes in graph G, the likelihood of a link between u and v, $y_{u,v}$, is modeled by a function $f : (u, v, G) \rightarrow y_{u,v}$. This function can be trained with a union of positive and negative samples: positive samples are drawn from existing edges, while negative samples are built from node pairs (u, v′) with v′ ∉ N(u). Following the SEAL framework, the first step is to extract an h-hop enclosing subgraph around the pair (u, v). We denote this subgraph as $S_{u,v}$ and the extraction process as $s(u, v, G) = S_{u,v}$. The function f can then be rewritten as $f : S_{u,v} \rightarrow y_{u,v}$. In general, f is the composition of two separate functions g and c, $f = c \circ g$: g produces the graph embedding vector, $g : S_{u,v} \rightarrow h_{S_{u,v}}$, and c takes the graph embedding and produces the prediction $\hat{y}_{u,v}$. A minimal sketch of the subgraph-extraction step is shown below.
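The following is our illustrative sketch of extracting an h-hop enclosing subgraph around a candidate node pair with networkx; the real pipeline additionally labels each node with its distance to the target pair, which is omitted here.

```python
import networkx as nx

def enclosing_subgraph(G, u, v, h=1):
    # Union of the h-hop neighborhoods of u and v, induced on G.
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=h)) | \
            set(nx.single_source_shortest_path_length(G, v, cutoff=h))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()          # toy graph as a placeholder
S = enclosing_subgraph(G, 0, 33, h=1)
```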
Here we have the supervised loss function.
$$\mathcal{L}_{cls}(g, c) = \mathbb{E}\big[\ell(c(g(S_{u,v})), y_{u,v})\big] = -\frac{1}{N} \sum \Big[ y_{u,v} \cdot \log\big(c(g(S_{u,v}))\big) + (1 - y_{u,v}) \cdot \log\big(1 - c(g(S_{u,v}))\big) \Big]$$

where $\mathbb{E}$ denotes the statistical expectation and $\ell$ is the loss function; in this case, $\ell$ is the standard binary cross-entropy loss.
The input of the discriminator D is a pair of graph embedding hi and hj generated by g. As stated above, here we use superscript to denote the domain information. The objective is to determine whether these two subgraphs are from the same domain. Since we know from which graph G these subgraphs are generated, and we know the domain information of each G, we can easily get the ground truth di,j . The loss function here could be binary cross-entropy as well. An alternative design is to let the discriminator predict the exact domain that the graph embedding is from. We select the current one because it is easier to apply it to novel domains.
$$\mathcal{L}_{dis}(D) = \mathbb{E}\big[\ell\big(D(g(S^i_{u,v}),\, g(S^j_{u',v'})),\, d^{i,j}\big)\big]$$
Given the two loss functions above, we could get the total loss for the input of a pair of subgraphs from domain i and j. Since our objective is to make sure the graph embedding does not contain domain-specific information, we would like to minimize the supervised training loss while maximizing the discriminator’s loss.
$$\mathcal{L}_{total}(g, c) = \mathbb{E}\Big[\sum_{i=1}^{m} \ell\big(c(g(S^i_{u,v})), y^i_{u,v}\big) - \alpha \cdot \sum_{i=1}^{m} \sum_{j=1}^{m} \ell\big(D(g(S^i_{u,v}), g(S^j_{u',v'})), d^{i,j}\big)\Big],$$
where α is a hyperparameter that adjusts the weight of the discrimination term relative to the supervised loss. An illustrative sketch of combining the two terms is given below.
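For illustration only, here is a PyTorch-style sketch of the combined objective with placeholder tensors; in the real model the embeddings come from a GNN over enclosing subgraphs, and the discriminator is trained separately to minimize its own loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim = 32
classifier = nn.Linear(emb_dim, 1)            # c: link-probability head
discriminator = nn.Linear(2 * emb_dim, 1)     # D: same-domain predictor

h_i = torch.randn(16, emb_dim, requires_grad=True)  # placeholder subgraph embeddings
h_j = torch.randn(16, emb_dim, requires_grad=True)
y = torch.randint(0, 2, (16, 1)).float()            # link labels
d = torch.randint(0, 2, (16, 1)).float()            # 1 if the pair shares a domain

alpha = 3.0
cls_loss = F.binary_cross_entropy_with_logits(classifier(h_i), y)
dis_loss = F.binary_cross_entropy_with_logits(
    discriminator(torch.cat([h_i, h_j], dim=1)), d)

total_loss = cls_loss - alpha * dis_loss  # minimized w.r.t. the embedding model and c
# The discriminator itself is updated separately to minimize dis_loss.
```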
4 EXPERIMENTS
4.1 DATASETS
In this study, we compared the model performances using three different datasets, including ogbnproducts, ogbg-ppa and the protein-protein interaction (PPI) dataset.
Both the ogbn-products and ogbg-ppa dataset comes from the Stanford Open Graph Benchmark (Hu et al. (2020)). There is a specific category of benchmark dataset on OGB for link prediction. However, we could not use those link-prediction benchmark datasets as they do not contain domain information. Therefore, we select ogbn-products and ogbg-ppa from the node/graph property prediction task and convert them into link prediction tasks. The protein-protein interaction (PPI) dataset comes from the GNN benchmark dataset (Dwivedi et al. (2020)) and originally published together with the GraphSage paper(Hamilton et al. (2018)). Its original purpose is also node property prediction and we re-purpose it as a link-prediction problem.
ogbn-products The ogbn-products dataset is a single graph with 2,449,029 nodes and 61,859,140 edges, representing the Amazon product co-purchasing network in 47 categories. Node feature is a 100-dimension word embedding vector extracted from product description. During the data preprocessing, we select 23 domains with more than 10,000 nodes and split the nodes into 23 subgraphs, each of which represents a single domain. Then, within each domain, we randomly selected 40% of the nodes as the test data and used the remaining 60% to generate training “shots”. To ensure the robustness of our results, we run the random sampling 10 times to generate 10 sets of independent experiment data. In each experiment, within each domain, we start with a random node, run Breadth-first search (BFS) until the cumulated amount of nodes exceed 5% of all the nodes in the training graph (in other words, 3% in the original domain subgraph). All the nodes visited by BFS are considered as a “shot”. Then we start from another random node from the rest of the graph, repeat the BFS process until a total of 10 shots are generated for each domain. To build the few-shot learning with an imbalanced domain setting, for 22 out of 23 domains, we arrange all 10 shots to the training set and for the last domain, we arrange 1 shot to the training set. The training set is then randomized and fed into the model. The evaluation is done for the imbalanced domain only.
ogbg-ppa The ogbg-ppa dataset include 158,100 graphs representing the protein-protein association network in 1581 different species that cover 37 broad taxonomic groups. The average number of nodes is 243.4 and the average number of edges is 2,266.1. Edges in this dataset are associated with a 7-dimensional feature vector . We chose to pick broad taxonomic groups as our criteria to separate domains. We select 50 graphs from each domain as our test set. For our training set, we selected 10 graphs from each domain to build the data for one experiment and we created 10 independent experiments to keep our result robust. Similar with what we do for ogbn-products, we put 1-shot of data for the evaluation domain into the training set and 10-shot for all the other domains. Note that the graphs in ogbg-ppa do not have node feature data. Instead, only edge feature data is available.
PPI The PPI dataset is another protein-related dataset. It includes 24 graphs representing the human protein-protein interaction network in 24 types of human tissues. The average number of nodes is 2372.7 and the average number of edges is 68508.7. Each node has 50 features. We follow the same procedure as we use for the ogbn-products dataset to split test and train, generate shots and produce experiment data.
4.2 BASELINE METHODS
We compared the performances of our proposed methods with several commonly used linkprediction methods, including SEAL, Graph AutoEncoder (GAE) and heuristic methods including Common Neighbor (CN), Adamic-Adar (AA) and Personalized PageRank (PPR).
SEAL SEAL is the current state-of-the-art method for link prediction and provides a lot of inspiration to this work. The core idea is to create enclosing subgraphs around a selection of positive and negative node pairs. The distance between the target pair of nodes and all the other nodes is calculated within each subgraph. Then, SEAL uses a GNN to compute graph-level embedding for each subgraph and use the graph embedding to predict the probability of having a link between each node pair.
GAE GAE employs a standard GCN to generate updated node embedding. For each positive and negative node pair, the node embedding vectors for the pair of nodes are concatenated and then used to predict the probability of having a link.
CN As a heuristic method, there is no training process needed for CN. The common neighbor method computes the total number of common neighbors between any node pairs and uses the score to predict the probability of having a link. The mathematical formula for CN is usually denoted by
$$\mathrm{CN}(x, y) = |\mathcal{N}(x) \cap \mathcal{N}(y)|$$
AA Adamic-Adar is another heuristic method. Unlike CN, instead of looking into the first-order neighbor of a node pair, AA pays attention to the second-order neighbor of the node pair. Its mathematical formula could be denoted by
$$\mathrm{AA}(x, y) = \sum_{z \in \mathcal{N}(x) \cap \mathcal{N}(y)} \frac{1}{\log(|\mathcal{N}(z)|)}$$
PPR Personalize PageRank computes the stationary distribution of a random walker starting at x and ending at y with restarts. Here since we are focusing on bidirectional graphs, we need to compute the scores for both from x to y and from y to x. The mathematical formula is denoted by the following equation, where q stands for the stationary distribution.
$$\mathrm{PPR}(x, y) = q_{xy} + q_{yx}$$
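For illustration, all three heuristic scores can be computed directly with networkx; the toy graph and node pair below are placeholders, not our experimental data.

```python
import networkx as nx

G = nx.karate_club_graph()
u, v = 0, 33

cn = len(list(nx.common_neighbors(G, u, v)))                  # Common Neighbors
aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]               # Adamic-Adar
ppr_u = nx.pagerank(G, personalization={n: float(n == u) for n in G})
ppr_v = nx.pagerank(G, personalization={n: float(n == v) for n in G})
ppr = ppr_u[v] + ppr_v[u]                                     # q_xy + q_yx
```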
4.3 EXPERIMENT CONFIGURATION
All the heuristic methods (CN, AA & PPR) can be used directly without training. To ensure the evaluation is fair, we tested all six methods on the same set of node pairs for evaluation.
All three NN-based methods (our method, SEAL & GAE) requires training on a selection of positive and negative node pairs. To ensure a fair comparison, we trained all these three methods on the same sets of node pairs. As mentioned before, we generated 10 sets of experiment data for training. The final results for these NN-based methods are average performances in all 10 experiments. In addition, to reduce the size of training data, we sample a proportion of positive node pairs and then sample the same amount of negative node pairs for each dataset. The exact proportions for each dataset is consistent across all three NN-based methods and is in-fact determined based on the number of edges and the size of features.
In all three GNN-based methods, we use a three layers GCN with 32 neurons on each layer. Classic GCN models only take node features, and it is what we use for ogbn-products and PPI. However, as mentioned earlier, the ogbg-ppa dataset only has edge features. To utilize edge features, we modified the message passing rule such that the message includes the concatenated node feature, which in this case is the calculated distance value between each node and the target node pair, and edge features. In our method and SEAL, where a graph level pooling is needed, we use the pooling method proposed in DGCNN (Zhang et al. (2018a)), which composes of a sort pooling layer followed by a two-layer convolution network. In this experiment, we use the sort pooling layer to pull out the sorted features from the top 10 nodes (sorted by the largest value of each node).
For our method and SEAL, the prediction of link probability starts with a 32-length long graph embedding vector. After the pooling layer, we send the graph embedding to a dense layer with 32 neurons and another dense layer to generate the output. For GAE, the prediction of link probability requires the node level features of the two target nodes. In our implementation, after the GNN step, we simply look up the node embedding vectors for the two target nodes and concatenate the two vectors. The merged embedding is then sent to the same MLP as described earlier to generate the prediction.
In our methods, we use a three layer MLP with 32 neurons as the discriminator for the graph embedding. The input of the discriminator is the concatenation of two random graph embedding and the target of the discriminator is to predict whether these two embedding come from the same domain. In the actual implementation, because we train the model using a mini-batch of 32, this could be easily done by separating one batch of 32 embedding vectors to two groups of 16 vectors and comparing between these two groups.
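A minimal sketch (our reading of the description above) of forming discriminator inputs by splitting a mini-batch of 32 embeddings into two groups of 16; the domain count and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

batch_emb = torch.randn(32, 32)           # 32 graph embeddings of size 32 (placeholder)
domains = torch.randint(0, 23, (32,))     # domain id of each subgraph (placeholder)

h_a, h_b = batch_emb.split(16, dim=0)     # split the mini-batch into two groups of 16
d_a, d_b = domains.split(16, dim=0)
same_domain = (d_a == d_b).float().unsqueeze(1)   # discriminator targets

disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                     nn.Linear(32, 32), nn.ReLU(),
                     nn.Linear(32, 1))    # three-layer MLP over concatenated pairs
logits = disc(torch.cat([h_a, h_b], dim=1))
```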
In addition, we implement a 20 iteration warm-up stage for both our method and SEAL. In the warmup stage, the models are trained without discriminator. The purpose of setting up the warm-up stage is to make it easier to train the adversarial model. As reported in literature, adversarial models are harder to train. If we train the model and the discriminator together from the very beginning, a bad initialization on the model may make it too difficult for the model to confuse the discriminator, which will lead to the “model collapsing”. A 20 step warm-up stage may not always bring the model to the best spot but it offers a good starting point in most cases.
All the models are trained with a learning rate at 0.0001 with L2 regularization at 0.001 for 250 epochs. The discriminators are trained with a learning rate at 0.00001. All the experiments are trained and tested on a NVIDIA V100 card.
4.4 EXPERIMENT RESULTS
In our proposed method, α is an important hyperparameter because it controls the impact of the discriminator on the model. In this experiment, we select α from 1, 3, and 5 and we compare the results with all of our baseline methods. The results are displayed in Table 1 and Figure 2.
First, we observe that compared with the vanilla SEAL method, in most cases, the proposed adversarial methods yield better performances over underrepresented domains. The average improvement is around 1.5% but the levels of improvement vary across different datasets. In the ogbn-products and PPI dataset, we observe much better performance from the adversarial models but in the ogbg-ppa dataset, the improvement is not very significant. We think the reason is that the feature distributions in the ogbg-ppa dataset are highly similar across domains. In this case, the impact of the lack of training data for one specific domain is very small. Therefore, the discriminator here cannot generate any benefits. We will discuss the details of our reasoning later.
The GAE method did not do well under this experiment. Unlike SEAL, GAE lacks the key node feature to describe the distance between each node to the target link. Under this few-shot learning setting, the limited number of nodes for training further restricts GAE’s ability to efficiently train the model.
The performances of the heuristic methods are low as well. One possible explanation is that the link prediction task in these three experiment datasets heavily depends on node/edge features. Since all three heuristic methods only use graph topological structure, they lack enough information to make good predictions.
Impact of Adversarial Training on Graph Embedding
To understand the real impact of the adversarial training on graph embedding, we use t-SNE (van der Maaten & Hinton (2008)) to reduce the dimension of graph embedding and visualize them in 2-D space. Figure 3 shows the changes of graph embedding for all 24 domains in PPI. From the figure, we can see the dots generated by the adversarial method are more evenly distributed across the 2-D space and this observation further confirms that our proposed method is working as expected.
In addition, by inspecting the t-SNE plot (Figure 4) for the ogbg-ppa dataset, we find that with the baseline SEAL method, the distribution of embedding points among different domains in ogbg-ppa are already evenly distributed. It shows that there are few inter-domain differences in this dataset. Although the protein graphs come from different organisms, it could be the case that all the graphs are assembled in a similar way, so they are topologically similar. It could also be possible that the 7-dimension edge feature extracted for this dataset are not reflecting domain differences.
In fact, comparing Figure 3 and Figure 4 shows that the t-SNE visualization of the graph embedding vectors is a good tool for machine learning practitioners to decide when to adopt the proposed approach. If there are distinguishable domain differences across domains based on the t-SNE plot, the proposed method will likely yield some improvements.
Training Curves, Run time and Impact of α
We also look into the impact of the adversarial design on the training curves. Figure 5 shows the smoothed training curves of 10 experiments with the ogbn-products dataset on domain 2. The shaded area represent the standard error. Note that we excluded the performances during the 20-step warmup stage to zoom on the most important area. As presented in Figure 5, the baseline performance quickly drops after 50 iterations even with the use of L2 regularization. This is mostly due to the fact that the evaluation domain is underrepresented in training data (recall that in the training data, we have 1 shot of data in the evaluation domain and 10 shots of data for every other domain). By enforcing the graph embedding from different domains to be similar using adversarial training, the graph embedding generated by our method is more likely to work well with the evaluation domain. Therefore we see the performances from all three adversarial experiments reach better best performances compared with the baseline SEAL method.
In addition, by increasing the value of α, we increase the impact of the discriminator of the target model. In this case, α = 3 seems to provide the best performance. However, if we look at the average best performances, the best α is in fact 5. A closer look into each individual training curve tells us that some of the best models were trained when α = 5. However, the large α value also makes the model less stable and it’s more likely to see the model collapse during the training process. Therefore, we conclude that when domain differences exist, higher impact of the discriminator helps the model reach better performances but it makes it more difficult to train. Eventually, there is a sweet spot to keep the balance between performance and model stability.
Figure 5 also tells us that compared with the baseline SEAL method, our method takes more steps to converge. Increasing the value of α increases the steps to converge as well. At the same time, it also takes more time to train the adversarial model because the computation of graph embedding needs to happen two times to collect gradients to train the target model and the discriminator. On average, it takes 1.134s to finish one training step for the ogbn-products dataset using baseline SEAL on an NVIDIA V100 graphic card, but it takes 1.406s (24.0 % increase) to train the network using adversarial method.
5 CONCLUSION
In this work, we propose an adversarial training-based approach to solve the few-shot graph link prediction problem. Compared with the state-of-the-art method and other baselines, our method achieves better performance on underrepresented domains, especially when the domain variance is not trivial. We further examine the changes in the graph embeddings using t-SNE and reveal the root of this improvement. At the same time, by showing the t-SNE visualizations for cases where the proposed method does not provide benefits, we show that t-SNE is a very efficient tool for practitioners to determine the benefit of our method.
1. What is the focus and contribution of the paper regarding few-shot link prediction?
2. What are the strengths of the proposed approach, particularly in terms of its performance and visual explanations?
3. What are the weaknesses of the paper, especially regarding its novelty and lack of discussion on prior works?
4. Do you have any concerns or questions about the proposed method's setting and hyperparameter choices? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposed adversarial training for few-shot link prediction problems. The authors introduce a domain discriminator on pairs of graph-level embedding and address the issue of few-shot learning in graph data. The proposed method is tested on 3 benchmark datasets and achieves good performance compared to the prior approaches.
Review
Strength
The paper is the first to propose a few-shot link prediction problem with imbalanced domains.
The proposed method achieves good performance compared to baselines and the authors use t-SNE plots as a rule of thumb for practitioners to decide whether to incorporate the proposed method.
Visual explanations of the source of improvements also validate the proposed approach.
Weakness
Using adversarial training for graph or link prediction is not novel. There is a lot of prior work that is not discussed in the paper.
[1] Lei, K., Qin, M., Bai, B., Zhang, G., and Yang, M., 2019, April. Gcn-gan: A non-linear temporal link prediction model for weighted dynamic networks. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications (pp. 388-396). IEEE. [2] Wang, H., Wang, J., Wang, J., Zhao, M., Zhang, W., Zhang, F., Xie, X. and Guo, M., 2018, April. Graphgan: Graph representation learning with generative adversarial nets. In Proceedings of the AAAI conference on artificial intelligence (Vol. 32, No. 1).
The proposed few-shot link prediction setting is somewhat arbitrary: what are the application and motivation behind this setting? Why does the last domain have to be 1 shot?
The model is a three-layer GCN with 32 neurons. There are no ablations for the model's hyper-parameters.
ICLR | Title
Few-shot graph link prediction with domain adaptation
Abstract
Real-world link prediction problems often deal with data from multiple domains, where data may be highly skewed and imbalanced. Similar problems in computer vision are often referred to as Few-Shot Learning (FSL) problems. However, for graph link prediction, this problem has rarely been addressed and explored. In this work, we propose an adversarial training-based framework that aims at improving link prediction for highly skewed and imbalanced graphs. We introduce a domain discriminator on pairs of graph-level embedding. We then use the discriminator to improve the model in an adversarial way, such that the graph embeddings generated by the model are domain agnostic. We test our proposal on three benchmark datasets. Our results demonstrate that when domain differences exist, our method creates better graph embeddings that are more evenly distributed across domains and generate better prediction outcomes.
1 INTRODUCTION
Real-world link prediction problems often deal with graphs with nodes from various domains. For example, we can separate the items from a product purchase network into multiple domains such as books, electronics, luxury items. A financial transaction network can have data coming from various countries. Different domains may have different distributions, but more importantly, domains are often not balanced in most real-world datasets. For example, we may expect to see hundreds of thousands of nodes for books but only a few hundred for luxury items. There might also be billions of financial transactions in developed countries but much fewer in developing countries. We observe a similar phenomenon even in academic graph datasets. For example, in the ogbn-arxiv (Hu et al. (2020)) dataset, which includes pre-prints from 40 computer science subjects, papers from the most popular subject take up to 16.13% of all the nodes. In contrast, the least represented subject only has 0.02% of the data.
Training graph models with imbalanced data without precaution significantly downgrade model performances, especially for domains with fewer samples. The training process is overwhelmed by the popular domains that have abundant data. These popular domains act as noise, hampering the model to fit for small domains effectively. Despite the prevalence of this problem, current research on this topic is not adequate (Zhao et al. (2021); Shi et al. (2020)).
To improve model performance for small domains, we propose to train domain agnostic embedding that we can use for downstream tasks. The intuition is that if the latent representation learned by the graph model does not contain any domain-specific information, and therefore is domain-agnostic, it becomes more robust to domains with fewer data and even novel domains. We propose to use adversarial learning and force the graph model to learn domain agnostic embeddings.
Our idea of building domain-agnostic features across domains aligns with the few-shot learning (FSL) problem in computer vision (CV). FSL (Fei-Fei et al. (2006); Fink (2005)) refers to the type of ML algorithm that works with limited labeled data. A common FSL technique is to learn domainagnostic features first (with/without external data) and then let the model converge fast on to domains with limited shots (Long & Wang (2015)). It is relatively easier for CV tasks because stacked convolutional neural networks used in CV usually learn features in a general-to-specific sequence. In contrast, as stacking graph neural networks (GNN) layers only explore larger neighborhoods, the same method couldn’t be applied directly to graph data.
In addition, the CV literature has explored the idea of using adversarial learning to build better domain agnostic features. Domain-Adversarial Neural Network (DANN) (Ganin et al. (2016)) first introduces the idea of using a discriminator to align and separate semantic probability distributions. Further research (Motiian et al. (2017)) shows that it helps reduce the number of samples needed for each domain/class. Essentially, our work follows this path but further improves it with the enclosing subgraph idea to achieve better performances for graph link predictions.
We find that the method of extracting enclosing subgraphs, proposed in the SEAL model Zhang & Chen (2018) suits well for our setting because it not only boosts the number of trainable samples but also gives them different topological features. It is similar to what “cropping” does for images. CV Research has shown that data augmentation method, such as cropping (Qi et al. (2018); Zhang et al. (2018b)), introduces invariances for models to capture and is effective for FSL problems.
We summarize our main contributions as follows:
• We propose the idea of learning domain-agnostic graph embedding to improve the quality of link prediction by using adversarial learning and enclosing subgraphs.
• We define the concepts of “shots” for graph data, making it significantly easier to train, describe, and analyze graphs with multiple imbalanced domains.
• We use t-SNE plots to demonstrate the effectiveness of our method. It also allows practitioners to decide the expected gain from our proposed ideas.
2 RELATED WORKS
2.1 LINK PREDICTION
Link prediction focuses on predicting future or missing links between a pair of nodes. The current state-of-the-arts link prediction method is SEAL (Zhang & Chen (2018)). The core idea of SEAL is to extract enclosing subgraphs for a set of sampled positive and negative links. To be more specific, a positive link is a pair of nodes connected with an edge, and a negative link is a pair of nodes that are not connected. An enclosing subgraph is the union set of h-hop neighboring nodes of the pair of nodes. This data-preprocessing step lets SEAL transform the link prediction problem to a (sub)graph classification problem. It uses graph neural networks (GNNs) to generate the graph embedding, which feeds into a classifier and generates new predictions. Other than SEAL, Graph Auto-Encoder (GAE, Kipf & Welling (2016)) is an alternative approach. GAE works by generating node embedding for all nodes and then model the link likelihood based on the concatenated node embedding vectors for any node pairs. However, recent researches have shown this method heavily relies on the assumption that similar nodes have similar connections. In addition to these GNN-based methods, traditional heuristic methods, such as common neighbor (CN) and Adamic-Adar (AA), provide a completely different alternative. Given a pair of nodes, these non-parametric methods describe the topological structure of the surrounding neighborhood with a score. The advantage is that these methods could be used directly without training, but it’s challenging to utilize node/edge features with these methods.
2.2 FEW-SHOT LEARNING AND DOMAIN ADAPTATION
Few-shot learning is a special type of machine learning problem, which aims at obtaining good learning performances when supervised evidence information is limited (Wang et al. (2020)). Common FSL methods include data augmentation (cropping, rotating, generative models, etc) and metalearning (Sun et al. (2019)). Domain Adaptation is a type of Transfer Learning (TL) technique. It focuses on transferring learned knowledge from one domain to another while maintaining good performances. The core idea of domain adaptation is to learn a domain invariant representation, which is usually done by maximizing mutual information (Hjelm et al. (2019)) or training in an adversarial network (Goodfellow et al. (2014)).
Some recent works focus on the problem of few-shot learning with domain adaptation. The work from Motiian et al. (2017) also uses adversarial networks as the solution, but it introduced a 4-class confusion matrix classifier as the discriminator. The work from Zhao et al. (2020) explicitly added source/target per-class separation before doing embedding feature learning with domain adaptation.
2.3 ADVERSARIAL LEARNING
To learn domain agnostic models, we need to ensure that the learned graph embedding does not contain domain-specific information. To achieve this, a commonly used technique is adversarial learning, in which two models are trained against each other and play a minimax game based on game theory. For example, in the popular Generative Adversarial Network (GAN) (Goodfellow et al. (2014)), adversarial learning helps generators build more realistic images by having a discriminator that identifies whether an image is synthetic or real. There have been prior works done on using adversarial training on graph-related tasks. The work from Wang et al. (2018) uses GAN to improve various graph-related tasks, such as link prediction and node property prediction, by generating “fake” graphs. The work from Lei et al. (2019) also uses GAN to solve the challenging temporal link prediction problem for weighted dynamic networks.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
We define the few-shot link prediction problem with imbalanced domains as follows: in a given graph space S, we have access to graphs from m (m ∈ N,m ≥ 2) number of domains. Different domains may have variations in their marginal distributions. Among them domains,m−1 (k ∈ N+) domains have k number of small graphs, which we define as graph “shots”. The last domain only has 1 shot. Our objective is to learn a link prediction model that leverages the domain differences to improve the performance in the domain with 1 shot of data.
3.2 NOTATIONS
In terms of graph-related notations, each graph is denoted by G = (V,E, d), where V is the set of vertices, E is the set of edges and d is the type of domain, which could be a one-hot encoded vector with one extra digit representing any unseen domain. In the adjacency matrix A of G, Au,v = 1 if edge (u, v) ∈ E, and otherwise Au,v = 0. For any node u, we use N (u) to denote the set of neighboring nodes of u. In addition, we use superscript as a shorthand to denote the domain class. For example, for a graph from domain i, we would denote it as Gi.
3.3 DOMAINS
For our setting, we have a total of m domains and each domain has one or more graphs or subgraphs Dk = {Gk1 , ...}mk=1, where Dk denotes the kth domain and graph Gk1 is a realization of random variable Gk. The effective of our method compared to baseline is based on the assumption that the distribution of the random variable G is different across different domains p(Gk) 6= p(Gk′) if k′ 6= k.
3.4 LINK PREDICTION
Here is the general form of any link prediction models: given (u, v,G), where u and v are a pair of nodes in graph G, the likelihood of having a link between u and v, yu,v , is modeled by function f : (u, v,G) → yu,v . This function could be trained with a union set of positive and negative samples. Positive samples are sampled from existing edges while negative samples are built with node pairs with u and v′, v′ /∈ N (u). Based on the ideas from the SEAL framework, the first step is to extract an h-hop enclosing subgraph around edge (u, v). We denote this subgraph as Su,v and the process of generating the enclosing subgraph as s(u, v,G) = Su,v . Function f could be rewritten as f : Su,v → yu,v . In general, f is composed with two separated functions g and c and could be written as f = g ◦ c . Function g generates the graph embedding vector g : Su,v → hSu,v . Function c takes in the graph embedding and generates the prediction ˆyu,v .
Here we have the supervised loss function.
Lcls(g, c) = E[`(c(g(Su,v)), yu,v)]
= − 1 N
∑ yu,v · log(c(g(Su,v))) + (1− yu,v) · log(1− c(g(Su,v)))
, where E is the statistical expectation of any loss function `. In this case, ` is the standard binary cross-entropy loss.
The input of the discriminator D is a pair of graph embeddings h^i and h^j generated by g. As stated above, we use superscripts to denote domain information. The objective is to determine whether the two subgraphs come from the same domain. Since we know which graph G each subgraph was extracted from, and we know the domain of each G, we can easily obtain the ground truth d^{i,j}. The loss function here can again be binary cross-entropy. An alternative design is to let the discriminator predict the exact domain that a graph embedding comes from; we choose the same-domain formulation because it is easier to apply to novel domains.
$$\mathcal{L}_{dis}(D) = \mathbb{E}\big[\ell\big(D\big(g(S^{i}_{u,v}),\, g(S^{j}_{u',v'})\big),\, d^{i,j}\big)\big]$$
Given the two loss functions above, we can form the total loss for a pair of subgraphs from domains i and j. Since our objective is to ensure that the graph embedding does not contain domain-specific information, we minimize the supervised training loss while maximizing the discriminator's loss:

$$\mathcal{L}_{total}(g, c) = \mathbb{E}\Big[\sum_{i=1}^{m} \ell\big(c(g(S^{i}_{u,v})),\, y^{i}_{u,v}\big) - \alpha \cdot \sum_{i=1}^{m}\sum_{j=1}^{m} \ell\big(D\big(g(S^{i}_{u,v}),\, g(S^{j}_{u',v'})\big),\, d^{i,j}\big)\Big],$$

where α is a hyperparameter that adjusts the weight of the discriminator term relative to the supervised loss.
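The sketch below illustrates how the two losses could be combined in one optimization step: the discriminator is updated to predict same-domain labels from detached embeddings, while the embedding network and classifier are updated to minimize the supervised loss minus α times the discriminator loss. The module shapes and helper names are assumptions made for illustration, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 32
g = nn.Linear(16, embed_dim)            # stand-in for the subgraph encoder
c = nn.Linear(embed_dim, 1)             # link classifier head
D = nn.Linear(2 * embed_dim, 1)         # domain discriminator

opt_model = torch.optim.Adam(list(g.parameters()) + list(c.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(D.parameters(), lr=1e-5)
alpha = 3.0

def adversarial_step(x_i, y_i, x_j, same_domain):
    """One joint update: x_i, x_j are subgraph features, y_i are link labels,
    same_domain is 1 if the two subgraphs come from the same domain."""
    h_i, h_j = g(x_i), g(x_j)

    # 1) Discriminator update on detached embeddings.
    d_logit = D(torch.cat([h_i.detach(), h_j.detach()], dim=-1))
    loss_dis = F.binary_cross_entropy_with_logits(d_logit, same_domain)
    opt_disc.zero_grad(); loss_dis.backward(); opt_disc.step()

    # 2) Model update: supervised loss minus alpha * discriminator loss.
    link_logit = c(h_i)
    loss_cls = F.binary_cross_entropy_with_logits(link_logit, y_i)
    d_logit = D(torch.cat([h_i, h_j], dim=-1))
    loss_total = loss_cls - alpha * F.binary_cross_entropy_with_logits(d_logit, same_domain)
    opt_model.zero_grad(); loss_total.backward(); opt_model.step()
    return loss_cls.item(), loss_dis.item()

# Toy batch: 8 pairs of 16-dimensional subgraph features.
x_i, x_j = torch.randn(8, 16), torch.randn(8, 16)
y_i = torch.randint(0, 2, (8, 1)).float()
same_domain = torch.randint(0, 2, (8, 1)).float()
print(adversarial_step(x_i, y_i, x_j, same_domain))
```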
4 EXPERIMENTS
4.1 DATASETS
In this study, we compared the model performances using three different datasets: ogbn-products, ogbg-ppa, and the protein-protein interaction (PPI) dataset.
Both the ogbn-products and ogbg-ppa datasets come from the Stanford Open Graph Benchmark (Hu et al. (2020)). OGB has a dedicated category of benchmark datasets for link prediction; however, we could not use those datasets because they do not contain domain information. Therefore, we select ogbn-products and ogbg-ppa from the node/graph property prediction tasks and convert them into link prediction tasks. The protein-protein interaction (PPI) dataset comes from the GNN benchmark datasets (Dwivedi et al. (2020)) and was originally published with the GraphSAGE paper (Hamilton et al. (2018)). Its original purpose is also node property prediction, and we re-purpose it as a link prediction problem.
ogbn-products The ogbn-products dataset is a single graph with 2,449,029 nodes and 61,859,140 edges, representing the Amazon product co-purchasing network across 47 categories. Each node feature is a 100-dimensional word embedding vector extracted from the product description. During data preprocessing, we select 23 domains with more than 10,000 nodes and split the nodes into 23 subgraphs, each of which represents a single domain. Then, within each domain, we randomly select 40% of the nodes as test data and use the remaining 60% to generate training “shots”. To ensure the robustness of our results, we run the random sampling 10 times to generate 10 sets of independent experiment data. In each experiment, within each domain, we start from a random node and run breadth-first search (BFS) until the cumulative number of visited nodes exceeds 5% of the nodes in the training graph (in other words, 3% of the original domain subgraph). All the nodes visited by the BFS are considered one “shot”. We then start from another random node in the rest of the graph and repeat the BFS process until a total of 10 shots are generated for each domain. To build the few-shot setting with an imbalanced domain, for 22 of the 23 domains we place all 10 shots in the training set, and for the last domain we place only 1 shot in the training set. The training set is then shuffled and fed into the model. Evaluation is done on the imbalanced domain only.
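A minimal sketch of the BFS-based shot sampling described above; the 5% budget and per-domain shot count follow the description, while the adjacency-dictionary graph representation and the function names are assumptions made for illustration.

```python
import random
from collections import deque

def sample_shot(adj, available, budget):
    """Run BFS from a random available node until roughly `budget` nodes are visited."""
    start = random.choice(sorted(available))
    visited, frontier = {start}, deque([start])
    while frontier and len(visited) < budget:
        node = frontier.popleft()
        for nbr in adj.get(node, ()):
            if nbr in available and nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
                if len(visited) >= budget:
                    break
    return visited

def sample_shots(adj, train_nodes, num_shots=10, budget_frac=0.05):
    """Generate up to `num_shots` disjoint shots, each covering ~budget_frac of the training nodes."""
    budget = max(1, int(budget_frac * len(train_nodes)))
    available, shots = set(train_nodes), []
    while len(shots) < num_shots and available:
        shot = sample_shot(adj, available, budget)
        shots.append(shot)
        available -= shot
    return shots

# Toy example: a small ring graph.
adj = {i: {(i - 1) % 20, (i + 1) % 20} for i in range(20)}
print([sorted(s) for s in sample_shots(adj, list(range(20)), num_shots=3, budget_frac=0.25)])
```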
ogbg-ppa The ogbg-ppa dataset includes 158,100 graphs representing protein-protein association networks from 1,581 different species that cover 37 broad taxonomic groups. The average number of nodes is 243.4 and the average number of edges is 2,266.1. Edges in this dataset are associated with a 7-dimensional feature vector. We use the broad taxonomic groups as the criterion for separating domains. We select 50 graphs from each domain as our test set. For the training set, we select 10 graphs from each domain to build the data for one experiment, and we create 10 independent experiments to keep our results robust. As with ogbn-products, we put 1 shot of data for the evaluation domain into the training set and 10 shots for all other domains. Note that the graphs in ogbg-ppa do not have node features; only edge features are available.
PPI The PPI dataset is another protein-related dataset. It includes 24 graphs representing the human protein-protein interaction network in 24 types of human tissues. The average number of nodes is 2372.7 and the average number of edges is 68508.7. Each node has 50 features. We follow the same procedure as we use for the ogbn-products dataset to split test and train, generate shots and produce experiment data.
4.2 BASELINE METHODS
We compared the performance of our proposed method with several commonly used link prediction methods, including SEAL, the Graph Auto-Encoder (GAE), and the heuristic methods Common Neighbor (CN), Adamic-Adar (AA), and Personalized PageRank (PPR).
SEAL SEAL is the current state-of-the-art method for link prediction and provides much of the inspiration for this work. The core idea is to create enclosing subgraphs around a selection of positive and negative node pairs. Within each subgraph, the distance between the target pair of nodes and all other nodes is calculated. SEAL then uses a GNN to compute a graph-level embedding for each subgraph and uses this embedding to predict the probability of a link between each node pair.
GAE GAE employs a standard GCN to generate updated node embeddings. For each positive and negative node pair, the node embedding vectors of the two nodes are concatenated and then used to predict the probability of a link.
CN As a heuristic method, CN requires no training. The common-neighbor method counts the number of common neighbors of a node pair and uses this count as the link score. The formula for CN is usually written as
$$\mathrm{CN}(x, y) = |\mathcal{N}(x) \cap \mathcal{N}(y)|$$
AA Adamic-Adar is another heuristic method. Unlike CN, which counts all common neighbors equally, AA down-weights common neighbors that themselves have many neighbors. Its formula can be written as
$$\mathrm{AA}(x, y) = \sum_{z \in \mathcal{N}(x) \cap \mathcal{N}(y)} \frac{1}{\log |\mathcal{N}(z)|}$$
PPR Personalized PageRank computes the stationary distribution of a random walker that starts at x and restarts at x with some probability. Since we focus on undirected graphs, we compute the scores in both directions, from x to y and from y to x. The score is given by the following equation, where q_{xy} denotes the stationary probability of reaching y under the walk rooted at x.
$$\mathrm{PPR}(x, y) = q_{xy} + q_{yx}$$
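For reference, the three heuristics can be computed directly with NetworkX; the snippet below is a small sketch on a toy graph, not the evaluation code used in the paper.

```python
import networkx as nx

G = nx.karate_club_graph()
x, y = 0, 33

# Common Neighbors: |N(x) ∩ N(y)|
cn = len(list(nx.common_neighbors(G, x, y)))

# Adamic-Adar: sum over common neighbors z of 1 / log(deg(z))
aa = next(score for _, _, score in nx.adamic_adar_index(G, [(x, y)]))

# Personalized PageRank in both directions: q_xy + q_yx
q_x = nx.pagerank(G, personalization={x: 1.0})
q_y = nx.pagerank(G, personalization={y: 1.0})
ppr = q_x[y] + q_y[x]

print(cn, round(aa, 3), round(ppr, 4))
```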
4.3 EXPERIMENT CONFIGURATION
All the heuristic methods (CN, AA & PPR) can be used directly without training. To keep the comparison fair, we test all six methods on the same set of node pairs at evaluation time.
All three NN-based methods (our method, SEAL & GAE) require training on a selection of positive and negative node pairs. To ensure a fair comparison, we train all three methods on the same sets of node pairs. As mentioned before, we generate 10 sets of experiment data for training; the final results for these NN-based methods are the average performance over all 10 experiments. In addition, to reduce the size of the training data, we sample a proportion of positive node pairs and then sample the same number of negative node pairs for each dataset. The exact proportion for each dataset is consistent across all three NN-based methods and is in fact determined by the number of edges and the size of the features.
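A minimal sketch of the balanced positive/negative pair sampling described above; the sampling proportion and the helper name are illustrative assumptions, not values taken from the paper.

```python
import random

def sample_link_pairs(edges, nodes, proportion=0.1, seed=0):
    """Sample a fraction of existing edges as positives and an equal number of
    non-edges (u, v') with v' not adjacent to u as negatives."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    positives = rng.sample(sorted(edge_set), max(1, int(proportion * len(edge_set))))

    negatives = []
    while len(negatives) < len(positives):
        u, v = rng.sample(nodes, 2)
        if tuple(sorted((u, v))) not in edge_set:
            negatives.append((u, v))
    return positives, negatives

# Toy example.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
pos, neg = sample_link_pairs(edges, nodes=list(range(6)), proportion=0.5)
print(pos, neg)
```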
In all three GNN-based methods, we use a three-layer GCN with 32 neurons in each layer. Classic GCN models only take node features, which is what we use for ogbn-products and PPI. However, as mentioned earlier, the ogbg-ppa dataset only has edge features. To utilize them, we modify the message-passing rule so that the message is the concatenation of the node feature (in this case, the computed distance between each node and the target node pair) and the edge features. In our method and SEAL, where graph-level pooling is needed, we use the pooling method proposed in DGCNN (Zhang et al. (2018a)), which consists of a sort-pooling layer followed by a two-layer convolutional network. In this experiment, the sort-pooling layer keeps the sorted features of the top 10 nodes (sorted by the largest value of each node).
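The sketch below shows the sort-pooling idea in isolation: node feature rows are sorted by their last channel and the top-k rows are kept, with zero padding for small graphs. This is a simplified illustration of the DGCNN-style pooling, not the exact implementation used in the paper.

```python
import torch

def sort_pool(node_feats, k=10):
    """node_feats: (num_nodes, feat_dim). Returns a fixed-size (k, feat_dim) tensor."""
    # Sort nodes in descending order of their last feature channel.
    order = torch.argsort(node_feats[:, -1], descending=True)
    pooled = node_feats[order][:k]
    if pooled.size(0) < k:  # pad small graphs with zeros
        pad = torch.zeros(k - pooled.size(0), node_feats.size(1))
        pooled = torch.cat([pooled, pad], dim=0)
    return pooled

# Toy example: a 6-node graph with 4-dimensional node features.
feats = torch.randn(6, 4)
print(sort_pool(feats, k=10).shape)  # torch.Size([10, 4])
```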
For our method and SEAL, the prediction of the link probability starts from a 32-dimensional graph embedding vector. After the pooling layer, we pass the graph embedding through a dense layer with 32 neurons and another dense layer to generate the output. For GAE, the prediction requires the node-level features of the two target nodes: after the GNN step, we simply look up the node embedding vectors of the two target nodes and concatenate them. The merged embedding is then fed to the same MLP as described above to generate the prediction.
In our method, we use a three-layer MLP with 32 neurons as the discriminator on graph embeddings. The input of the discriminator is the concatenation of two graph embeddings, and its target is to predict whether the two embeddings come from the same domain. In the actual implementation, because we train the model with mini-batches of 32, this can be done by splitting one batch of 32 embedding vectors into two groups of 16 vectors and pairing the two groups.
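A small sketch of how such discriminator pairs could be formed from one mini-batch; the batch split follows the description above, while the tensor names and domain-id encoding are illustrative assumptions.

```python
import torch

def make_discriminator_pairs(embeddings, domains):
    """embeddings: (32, d) graph embeddings; domains: (32,) integer domain ids.
    Returns 16 concatenated embedding pairs and a same-domain label per pair."""
    h_a, h_b = embeddings[:16], embeddings[16:]
    dom_a, dom_b = domains[:16], domains[16:]
    pairs = torch.cat([h_a, h_b], dim=-1)                 # (16, 2d)
    same_domain = (dom_a == dom_b).float().unsqueeze(1)   # (16, 1)
    return pairs, same_domain

emb = torch.randn(32, 32)
dom = torch.randint(0, 23, (32,))
pairs, labels = make_discriminator_pairs(emb, dom)
print(pairs.shape, labels.shape)  # torch.Size([16, 64]) torch.Size([16, 1])
```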
In addition, we use a 20-iteration warm-up stage for both our method and SEAL. In the warm-up stage, the models are trained without the discriminator. The purpose of the warm-up stage is to make the adversarial model easier to train. As reported in the literature, adversarial models are hard to train: if the model and the discriminator are trained together from the very beginning, a bad initialization of the model may make it too difficult for the model to confuse the discriminator, which can lead to model collapse. A 20-step warm-up stage may not always bring the model to the best spot, but it offers a good starting point in most cases.
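In outline, the warm-up can be expressed as a simple gate in the training loop: the adversarial term is switched on only after the first 20 iterations. The loop below is a schematic with stand-in modules and random data, not the paper's training script.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

g, c = nn.Linear(16, 32), nn.Linear(32, 1)   # stand-in encoder and classifier
D = nn.Linear(64, 1)                         # stand-in domain discriminator
opt = torch.optim.Adam(list(g.parameters()) + list(c.parameters()), lr=1e-4)
alpha, warmup_steps = 3.0, 20

for step in range(60):
    x, y = torch.randn(32, 16), torch.randint(0, 2, (32, 1)).float()
    same_dom = torch.randint(0, 2, (16, 1)).float()

    h = g(x)
    loss = F.binary_cross_entropy_with_logits(c(h), y)
    if step >= warmup_steps:
        # Adversarial term only after warm-up: reward embeddings that confuse
        # the domain discriminator. (The discriminator's own update step is
        # omitted in this schematic.)
        pairs = torch.cat([h[:16], h[16:]], dim=-1)
        loss = loss - alpha * F.binary_cross_entropy_with_logits(D(pairs), same_dom)
    opt.zero_grad(); loss.backward(); opt.step()
```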
All models are trained with a learning rate of 0.0001 and L2 regularization of 0.001 for 250 epochs. The discriminators are trained with a learning rate of 0.00001. All experiments are trained and tested on an NVIDIA V100 GPU.
4.4 EXPERIMENT RESULTS
In our proposed method, α is an important hyperparameter because it controls the impact of the discriminator on the model. In this experiment, we select α from 1, 3, and 5 and we compare the results with all of our baseline methods. The results are displayed in Table 1 and Figure 2.
First, we observe that, compared with the vanilla SEAL method, the proposed adversarial method yields better performance on underrepresented domains in most cases. The average improvement is around 1.5%, but the level of improvement varies across datasets. On ogbn-products and PPI we observe much better performance from the adversarial models, while on ogbg-ppa the improvement is not significant. We believe the reason is that the feature distributions in ogbg-ppa are highly similar across domains; in this case, the impact of the lack of training data for one specific domain is very small, so the discriminator cannot provide any benefit. We discuss the details of this reasoning later.
The GAE method did not do well in this experiment. Unlike SEAL, GAE lacks the key node feature that encodes the distance between each node and the target link. In this few-shot setting, the limited number of nodes available for training further restricts GAE's ability to fit the model effectively.
The performances of the heuristic methods are low as well. One possible explanation is that the link prediction task in these three datasets depends heavily on node/edge features. Since all three heuristic methods use only graph topological structure, they lack sufficient information to make good predictions.
Impact of Adversarial Training on Graph Embedding
To understand the real impact of the adversarial training on graph embedding, we use t-SNE (van der Maaten & Hinton (2008)) to reduce the dimension of graph embedding and visualize them in 2-D space. Figure 3 shows the changes of graph embedding for all 24 domains in PPI. From the figure, we can see the dots generated by the adversarial method are more evenly distributed across the 2-D space and this observation further confirms that our proposed method is working as expected.
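For readers who want to reproduce this kind of diagnostic plot, the sketch below projects a matrix of graph embeddings to 2-D with scikit-learn's t-SNE and colors points by domain; the array names, the toy data, and the plot styling are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# embeddings: (num_subgraphs, 32) graph embedding vectors from g;
# domains: integer domain id per subgraph (toy random data here).
rng = np.random.default_rng(0)
domains = rng.integers(0, 24, size=500)
embeddings = rng.normal(size=(500, 32)) + domains[:, None] * 0.1

coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=domains, cmap="tab20", s=8)
plt.title("t-SNE of graph embeddings, colored by domain")
plt.savefig("tsne_domains.png", dpi=150)
```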
In addition, by inspecting the t-SNE plot for the ogbg-ppa dataset (Figure 4), we find that even with the baseline SEAL method, the embedding points of the different domains in ogbg-ppa are already evenly distributed. This indicates that there are few inter-domain differences in this dataset. Although the protein graphs come from different organisms, it could be that all the graphs are assembled in a similar way, so they are topologically similar. It is also possible that the 7-dimensional edge features extracted for this dataset do not reflect domain differences.
In fact, comparing Figure 3 and Figure 4 shows that the t-SNE visualization of the graph embedding vectors is a useful tool for machine learning practitioners to decide when to adopt the proposed approach: if the t-SNE plot shows distinguishable differences across domains, the proposed method will likely yield some improvement.
Training Curves, Run time and Impact of α
We also look into the impact of the adversarial design on the training curves. Figure 5 shows the smoothed training curves of 10 experiments on domain 2 of the ogbn-products dataset; the shaded area represents the standard error. Note that we exclude the performance during the 20-step warm-up stage to zoom in on the most important region. As shown in Figure 5, the baseline performance quickly drops after 50 iterations even with L2 regularization. This is mostly because the evaluation domain is underrepresented in the training data (recall that the training data contains 1 shot for the evaluation domain and 10 shots for every other domain). By using adversarial training to force the graph embeddings from different domains to be similar, the embeddings generated by our method are more likely to work well on the evaluation domain. As a result, all three adversarial experiments reach better peak performance than the baseline SEAL method.
In addition, increasing the value of α increases the impact of the discriminator on the target model. In this case, α = 3 appears to give the best performance. However, if we look at the average best performance, the best α is in fact 5; a closer look at the individual training curves shows that some of the best models were trained with α = 5. The large α value also makes the model less stable, and model collapse during training becomes more likely. We therefore conclude that when domain differences exist, a stronger discriminator helps the model reach better performance but makes it more difficult to train; there is a sweet spot that balances performance and training stability.
Figure 5 also shows that, compared with the baseline SEAL method, our method takes more steps to converge, and increasing α increases the number of steps to converge as well. It also takes more time to train the adversarial model, because the graph embedding needs to be computed twice to collect gradients for both the target model and the discriminator. On average, one training step on the ogbn-products dataset takes 1.134s with baseline SEAL on an NVIDIA V100 graphics card, but 1.406s (a 24.0% increase) with the adversarial method.
5 CONCLUSION
In this work, we propose an adversarial training-based approach to the few-shot graph link prediction problem. Compared with the state-of-the-art method and other baselines, our method achieves better performance on underrepresented domains, especially when the domain variance is non-trivial. We further examine the changes in the graph embeddings using t-SNE and reveal the source of this improvement. At the same time, by showing t-SNE visualizations for cases where the proposed method does not provide benefits, we show that t-SNE is an efficient tool for practitioners to determine the expected benefit of our method. | 1. What is the focus and contribution of the paper on graph representation and link prediction?
2. What are the strengths of the proposed approach, particularly in terms of its application and novelty?
3. Are there any potential weaknesses or limitations of the method, especially when applied to other domains or datasets? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a novel approach to link prediction in graph representations of imbalanced domains using adversarial training; the result is improved domain-agnostic graph embeddings. The approach is similar to few-shot learning, which is popular in computer vision. Discussions of how to define shots for graphs and how to design experiments for addressing imbalanced domains contribute to the novelty of the paper. Results on two Stanford Open Graph Benchmark datasets and the PPI dataset are given.
Review
Strengths: The application of few-shot learning methodology to predicting graph links is novel and will have applications beyond the benchmark datasets used in the paper. Weaknesses: I like this paper and so do not see any obvious weaknesses.
1. What is the focus and contribution of the paper regarding graph link prediction?
2. What are the strengths of the proposed approach, particularly in terms of adversarial training?
3. What are the weaknesses of the paper, especially in comparison with other domain adaptation methods like DANN?
4. Do you have any concerns about the experimental settings and their relevance to real-world scenarios?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a method to address domain-imbalanced datasets in graph link prediction. The proposed method uses adversarial training to generate graph embeddings that are domain agnostic, in order to facilitate transfer learning across domains. The paper uses t-SNE plots of the graph embeddings to gain insight into the best scenarios for applying the proposed method. The paper compares the proposed method with heuristics-based and GNN-based methods in its experiments.
Review
Strengths: The proposed domain-adaptive method using adversarial training is well described. The experimental results support the effectiveness of the proposed method. The t-SNE plots add good insight into the proposed method and can facilitate its applicability.
Weaknesses: The paper has not discussed the key difference between the proposed method and DANN (Domain-Adversarial Neural Network), which makes it harder to assess the novelty of the proposed method. The paper seems like a direct application of DANN to a new problem, link prediction using GNNs.
The paper's experimental settings might not be close to a real-world scenario. The paper selects 10 samples for the other domains and only 1 sample for the imbalanced domain. How likely is this in a real-world scenario? This simple setting makes it hard to justify the universal applicability of the proposed method.
ICLR | Title
Few-shot graph link prediction with domain adaptation
Abstract
Real-world link prediction problems often deal with data from multiple domains, where data may be highly skewed and imbalanced. Similar problems in computer vision are often referred to as Few-Shot Learning (FSL) problems. However, for graph link prediction, this problem has rarely been addressed and explored. In this work, we propose an adversarial training-based framework that aims at improving link prediction for highly skewed and imbalanced graphs. We introduce a domain discriminator on pairs of graph-level embedding. We then use the discriminator to improve the model in an adversarial way, such that the graph embeddings generated by the model are domain agnostic. We test our proposal on three benchmark datasets. Our results demonstrate that when domain differences exist, our method creates better graph embeddings that are more evenly distributed across domains and generate better prediction outcomes.
1 INTRODUCTION
Real-world link prediction problems often deal with graphs with nodes from various domains. For example, we can separate the items from a product purchase network into multiple domains such as books, electronics, luxury items. A financial transaction network can have data coming from various countries. Different domains may have different distributions, but more importantly, domains are often not balanced in most real-world datasets. For example, we may expect to see hundreds of thousands of nodes for books but only a few hundred for luxury items. There might also be billions of financial transactions in developed countries but much fewer in developing countries. We observe a similar phenomenon even in academic graph datasets. For example, in the ogbn-arxiv (Hu et al. (2020)) dataset, which includes pre-prints from 40 computer science subjects, papers from the most popular subject take up to 16.13% of all the nodes. In contrast, the least represented subject only has 0.02% of the data.
Training graph models with imbalanced data without precaution significantly degrades model performance, especially for domains with fewer samples. The training process is dominated by the popular domains that have abundant data; these popular domains act as noise, hampering the model's ability to fit the small domains effectively. Despite the prevalence of this problem, current research on this topic is not adequate (Zhao et al. (2021); Shi et al. (2020)).
To improve model performance for small domains, we propose to train domain agnostic embedding that we can use for downstream tasks. The intuition is that if the latent representation learned by the graph model does not contain any domain-specific information, and therefore is domain-agnostic, it becomes more robust to domains with fewer data and even novel domains. We propose to use adversarial learning and force the graph model to learn domain agnostic embeddings.
Our idea of building domain-agnostic features across domains aligns with the few-shot learning (FSL) problem in computer vision (CV). FSL (Fei-Fei et al. (2006); Fink (2005)) refers to the type of ML algorithm that works with limited labeled data. A common FSL technique is to learn domainagnostic features first (with/without external data) and then let the model converge fast on to domains with limited shots (Long & Wang (2015)). It is relatively easier for CV tasks because stacked convolutional neural networks used in CV usually learn features in a general-to-specific sequence. In contrast, as stacking graph neural networks (GNN) layers only explore larger neighborhoods, the same method couldn’t be applied directly to graph data.
In addition, the CV literature has explored the idea of using adversarial learning to build better domain agnostic features. Domain-Adversarial Neural Network (DANN) (Ganin et al. (2016)) first introduces the idea of using a discriminator to align and separate semantic probability distributions. Further research (Motiian et al. (2017)) shows that it helps reduce the number of samples needed for each domain/class. Essentially, our work follows this path but further improves it with the enclosing subgraph idea to achieve better performances for graph link predictions.
We find that the method of extracting enclosing subgraphs, proposed in the SEAL model Zhang & Chen (2018) suits well for our setting because it not only boosts the number of trainable samples but also gives them different topological features. It is similar to what “cropping” does for images. CV Research has shown that data augmentation method, such as cropping (Qi et al. (2018); Zhang et al. (2018b)), introduces invariances for models to capture and is effective for FSL problems.
We summarize our main contributions as follows:
• We propose the idea of learning domain-agnostic graph embedding to improve the quality of link prediction by using adversarial learning and enclosing subgraphs.
• We define the concepts of “shots” for graph data, making it significantly easier to train, describe, and analyze graphs with multiple imbalanced domains.
• We use t-SNE plots to demonstrate the effectiveness of our method. It also allows practitioners to decide the expected gain from our proposed ideas.
2 RELATED WORKS
2.1 LINK PREDICTION
Link prediction focuses on predicting future or missing links between a pair of nodes. The current state-of-the-art link prediction method is SEAL (Zhang & Chen (2018)). The core idea of SEAL is to extract enclosing subgraphs for a set of sampled positive and negative links; a positive link is a pair of nodes connected by an edge, and a negative link is a pair of nodes that are not connected. An enclosing subgraph is the union of the h-hop neighborhoods of the pair of nodes. This preprocessing step lets SEAL transform the link prediction problem into a (sub)graph classification problem. It uses graph neural networks (GNNs) to generate the graph embedding, which feeds into a classifier and produces the prediction. Other than SEAL, the Graph Auto-Encoder (GAE, Kipf & Welling (2016)) is an alternative approach. GAE works by generating node embeddings for all nodes and then modeling the link likelihood based on the concatenated node embedding vectors of a node pair. However, recent research has shown that this method heavily relies on the assumption that similar nodes have similar connections. In addition to these GNN-based methods, traditional heuristic methods, such as common neighbor (CN) and Adamic-Adar (AA), provide a completely different alternative. Given a pair of nodes, these non-parametric methods describe the topological structure of the surrounding neighborhood with a score. The advantage is that these methods can be used directly without training, but it is challenging to utilize node/edge features with them.
2.2 FEW-SHOT LEARNING AND DOMAIN ADAPTATION
Few-shot learning is a special type of machine learning problem, which aims at obtaining good learning performances when supervised evidence information is limited (Wang et al. (2020)). Common FSL methods include data augmentation (cropping, rotating, generative models, etc) and metalearning (Sun et al. (2019)). Domain Adaptation is a type of Transfer Learning (TL) technique. It focuses on transferring learned knowledge from one domain to another while maintaining good performances. The core idea of domain adaptation is to learn a domain invariant representation, which is usually done by maximizing mutual information (Hjelm et al. (2019)) or training in an adversarial network (Goodfellow et al. (2014)).
Some recent works focus on the problem of few-shot learning with domain adaptation. The work from Motiian et al. (2017) also uses adversarial networks as the solution, but it introduced a 4-class confusion matrix classifier as the discriminator. The work from Zhao et al. (2020) explicitly added source/target per-class separation before doing embedding feature learning with domain adaptation.
2.3 ADVERSARIAL LEARNING
To learn domain agnostic models, we need to ensure that the learned graph embedding does not contain domain-specific information. To achieve this, a commonly used technique is adversarial learning, in which two models are trained against each other and play a minimax game based on game theory. For example, in the popular Generative Adversarial Network (GAN) (Goodfellow et al. (2014)), adversarial learning helps generators build more realistic images by having a discriminator that identifies whether an image is synthetic or real. There have been prior works done on using adversarial training on graph-related tasks. The work from Wang et al. (2018) uses GAN to improve various graph-related tasks, such as link prediction and node property prediction, by generating “fake” graphs. The work from Lei et al. (2019) also uses GAN to solve the challenging temporal link prediction problem for weighted dynamic networks.
3 METHODOLOGY
3.1 PROBLEM DEFINITION
We define the few-shot link prediction problem with imbalanced domains as follows: in a given graph space S, we have access to graphs from m (m ∈ N, m ≥ 2) domains. Different domains may have variations in their marginal distributions. Among these m domains, m − 1 of them each have k (k ∈ N+) small graphs, which we define as graph “shots”. The last domain has only 1 shot. Our objective is to learn a link prediction model that leverages the domain differences to improve performance in the domain with 1 shot of data.
3.2 NOTATIONS
In terms of graph-related notation, each graph is denoted by G = (V, E, d), where V is the set of vertices, E is the set of edges and d is the domain type, which could be a one-hot encoded vector with one extra digit representing any unseen domain. In the adjacency matrix A of G, $A_{u,v} = 1$ if edge $(u, v) \in E$, and $A_{u,v} = 0$ otherwise. For any node u, we use $\mathcal{N}(u)$ to denote the set of neighboring nodes of u. In addition, we use superscripts as a shorthand for the domain class; for example, a graph from domain i is denoted $G^i$.
3.3 DOMAINS
For our setting, we have a total of m domains and each domain has one or more graphs or subgraphs $D_k = \{G^k_1, \dots\}$, $k = 1, \dots, m$, where $D_k$ denotes the k-th domain and graph $G^k_1$ is a realization of the random variable $G^k$. The effectiveness of our method relative to the baselines rests on the assumption that the distribution of this random variable differs across domains: $p(G^k) \neq p(G^{k'})$ if $k \neq k'$.
3.4 LINK PREDICTION
Any link prediction model takes the following general form: given (u, v, G), where u and v are a pair of nodes in graph G, the likelihood of a link between u and v, $y_{u,v}$, is modeled by a function $f : (u, v, G) \rightarrow y_{u,v}$. This function can be trained on a union of positive and negative samples. Positive samples are drawn from existing edges, while negative samples are built from node pairs (u, v′) with $v' \notin \mathcal{N}(u)$. Following the SEAL framework, the first step is to extract an h-hop enclosing subgraph around the pair (u, v). We denote this subgraph as $S_{u,v}$ and the extraction process as $s(u, v, G) = S_{u,v}$, so f can be rewritten as $f : S_{u,v} \rightarrow y_{u,v}$. In general, f is composed of two separate functions g and c, i.e., $f = c \circ g$. Function g generates the graph embedding vector, $g : S_{u,v} \rightarrow h_{S_{u,v}}$, and function c takes in the graph embedding and produces the prediction $\hat{y}_{u,v}$.
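For concreteness, the enclosing-subgraph extraction step $s(u, v, G) = S_{u,v}$ can be sketched as below. This is only an illustrative sketch using NetworkX; the function name and library choice are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of h-hop enclosing-subgraph extraction (illustrative, not the authors' code).
import networkx as nx

def extract_enclosing_subgraph(G: nx.Graph, u, v, h: int = 1) -> nx.Graph:
    """Return the subgraph induced by the union of the h-hop neighborhoods of u and v."""
    nodes = set()
    for root in (u, v):
        # all nodes within h hops of the root node
        nodes.update(nx.single_source_shortest_path_length(G, root, cutoff=h).keys())
    return G.subgraph(nodes).copy()

# Example usage: a positive sample is an existing edge, a negative sample a non-edge.
G = nx.karate_club_graph()
pos_sub = extract_enclosing_subgraph(G, 0, 1, h=1)    # (0, 1) is an edge in this toy graph
neg_sub = extract_enclosing_subgraph(G, 0, 16, h=1)   # (0, 16) is assumed to be a non-edge
```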
The supervised loss function is
$$L_{cls}(g, c) = \mathbb{E}\big[\ell\big(c(g(S_{u,v})),\, y_{u,v}\big)\big] = -\frac{1}{N}\sum_{(u,v)} \Big[ y_{u,v}\log c(g(S_{u,v})) + (1 - y_{u,v})\log\big(1 - c(g(S_{u,v}))\big) \Big],$$
where the expectation is approximated by the empirical average over the N sampled node pairs and $\ell$ is the standard binary cross-entropy loss.
The input of the discriminator D is a pair of graph embeddings $h^i$ and $h^j$ generated by g. As stated above, we use superscripts to denote the domain information. The objective is to determine whether these two subgraphs are from the same domain. Since we know from which graph G these subgraphs are generated, and we know the domain information of each G, we can easily obtain the ground truth $d^{i,j}$. The loss function here can be binary cross-entropy as well. An alternative design is to let the discriminator predict the exact domain that the graph embedding comes from. We select the former because it is easier to apply to novel domains.
$$L_{dis}(D) = \mathbb{E}\big[\ell\big(D(g(S^i_{u,v}),\, g(S^j_{u',v'})),\, d^{i,j}\big)\big]$$
Given the two loss functions above, we could get the total loss for the input of a pair of subgraphs from domain i and j. Since our objective is to make sure the graph embedding does not contain domain-specific information, we would like to minimize the supervised training loss while maximizing the discriminator’s loss.
$$L_{total}(g, c) = \mathbb{E}\Big[\sum_{i=1}^{m} \ell\big(c(g(S^i_{u,v})),\, y^i_{u,v}\big) \;-\; \alpha \sum_{i=1}^{m}\sum_{j=1}^{m} \ell\big(D(g(S^i_{u,v}),\, g(S^j_{u',v'})),\, d^{i,j}\big)\Big],$$
where α is a hyperparameter that adjusts the weight of the discrimination term relative to the supervised loss.
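A minimal sketch of how the combined objective above can be optimized is given below; the module interfaces (an encoder g, a classifier c and a discriminator D returning logits) and the alternating update order are our own assumptions, not the authors' exact training loop.

```python
# Hypothetical adversarial training step (illustrative; module and variable names are assumptions).
import torch
import torch.nn.functional as F

def training_step(encoder, classifier, discriminator, batch_i, batch_j, alpha,
                  opt_model, opt_disc):
    # batch_i / batch_j: (subgraphs, link labels, domain ids) sampled from two (possibly equal) domains
    sub_i, y_i, dom_i = batch_i
    sub_j, y_j, dom_j = batch_j

    h_i, h_j = encoder(sub_i), encoder(sub_j)            # graph embeddings h^i, h^j
    same_domain = (dom_i == dom_j).float()               # ground truth d^{i,j}

    # 1) update the discriminator on detached embeddings
    d_logit = discriminator(torch.cat([h_i.detach(), h_j.detach()], dim=-1)).squeeze(-1)
    loss_dis = F.binary_cross_entropy_with_logits(d_logit, same_domain)
    opt_disc.zero_grad(); loss_dis.backward(); opt_disc.step()

    # 2) update encoder + classifier: supervised loss minus alpha * discriminator loss
    loss_cls = F.binary_cross_entropy_with_logits(classifier(h_i).squeeze(-1), y_i) \
             + F.binary_cross_entropy_with_logits(classifier(h_j).squeeze(-1), y_j)
    d_logit = discriminator(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)
    loss_adv = F.binary_cross_entropy_with_logits(d_logit, same_domain)
    loss_total = loss_cls - alpha * loss_adv             # L_total in the equation above
    opt_model.zero_grad(); loss_total.backward(); opt_model.step()
    return loss_total.item(), loss_dis.item()
```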
4 EXPERIMENTS
4.1 DATASETS
In this study, we compare model performance on three different datasets: ogbn-products, ogbg-ppa and the protein-protein interaction (PPI) dataset.
Both the ogbn-products and ogbg-ppa datasets come from the Stanford Open Graph Benchmark (Hu et al. (2020)). OGB does provide a dedicated category of benchmark datasets for link prediction, but we could not use them because they do not contain domain information. Therefore, we select ogbn-products and ogbg-ppa from the node/graph property prediction tasks and convert them into link prediction tasks. The protein-protein interaction (PPI) dataset comes from the GNN benchmark collection (Dwivedi et al. (2020)) and was originally published together with the GraphSage paper (Hamilton et al. (2018)). Its original purpose is also node property prediction, and we repurpose it as a link prediction problem.
ogbn-products The ogbn-products dataset is a single graph with 2,449,029 nodes and 61,859,140 edges, representing the Amazon product co-purchasing network in 47 categories. Each node feature is a 100-dimensional word embedding vector extracted from the product description. During data preprocessing, we select 23 domains with more than 10,000 nodes and split the nodes into 23 subgraphs, each of which represents a single domain. Then, within each domain, we randomly select 40% of the nodes as test data and use the remaining 60% to generate training “shots”. To ensure the robustness of our results, we run the random sampling 10 times to generate 10 sets of independent experiment data. In each experiment, within each domain, we start with a random node and run breadth-first search (BFS) until the cumulative number of visited nodes exceeds 5% of the nodes in the training graph (in other words, 3% of the original domain subgraph). All the nodes visited by BFS are considered one “shot”. We then start from another random node in the remaining graph and repeat the BFS process until a total of 10 shots are generated for each domain. To build the few-shot setting with an imbalanced domain, for 22 out of 23 domains we place all 10 shots in the training set, and for the last domain we place only 1 shot in the training set. The training set is then shuffled and fed into the model. The evaluation is done for the imbalanced domain only.
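The BFS-based shot generation described above can be sketched as follows; the function names and the use of NetworkX are illustrative assumptions, while the 5% stopping rule is the one stated in the text.

```python
# Hypothetical sketch of BFS-based "shot" sampling (illustrative, not the authors' code).
import random
import networkx as nx

def sample_shots(G_train: nx.Graph, n_shots: int = 10, frac: float = 0.05):
    """Generate n_shots node sets, each grown by BFS until it exceeds frac of the training graph."""
    target = frac * G_train.number_of_nodes()       # 5% of all nodes in the training graph
    remaining = G_train.copy()
    shots = []
    for _ in range(n_shots):
        start = random.choice(list(remaining.nodes))
        visited = {start}
        for _, node in nx.bfs_edges(remaining, start):
            visited.add(node)
            if len(visited) > target:
                break
        shots.append(visited)
        remaining.remove_nodes_from(visited)        # the next shot starts from the rest of the graph
    return shots
```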
ogbg-ppa The ogbg-ppa dataset includes 158,100 graphs representing protein-protein association networks of 1,581 species that cover 37 broad taxonomic groups. The average number of nodes is 243.4 and the average number of edges is 2,266.1. Edges in this dataset are associated with a 7-dimensional feature vector. We use the broad taxonomic groups as the criterion for separating domains. We select 50 graphs from each domain as our test set. For our training set, we select 10 graphs from each domain to build the data for one experiment, and we create 10 independent experiments to keep our results robust. Similar to what we do for ogbn-products, we put 1 shot of data for the evaluation domain into the training set and 10 shots for all the other domains. Note that the graphs in ogbg-ppa do not have node features; only edge features are available.
PPI The PPI dataset is another protein-related dataset. It includes 24 graphs representing the human protein-protein interaction network in 24 types of human tissues. The average number of nodes is 2372.7 and the average number of edges is 68508.7. Each node has 50 features. We follow the same procedure as we use for the ogbn-products dataset to split test and train, generate shots and produce experiment data.
4.2 BASELINE METHODS
We compared the performance of our proposed method with several commonly used link prediction methods, including SEAL, the Graph Auto-Encoder (GAE) and heuristic methods such as Common Neighbors (CN), Adamic-Adar (AA) and Personalized PageRank (PPR).
SEAL SEAL is the current state-of-the-art method for link prediction and provides much of the inspiration for this work. The core idea is to create enclosing subgraphs around a selection of positive and negative node pairs. The distance between the target pair of nodes and all other nodes is calculated within each subgraph. Then, SEAL uses a GNN to compute a graph-level embedding for each subgraph and uses this embedding to predict the probability of a link between each node pair.
GAE GAE employs a standard GCN to generate updated node embedding. For each positive and negative node pair, the node embedding vectors for the pair of nodes are concatenated and then used to predict the probability of having a link.
CN As a heuristic method, CN requires no training. The common neighbors method counts the number of common neighbors between a pair of nodes and uses this count as the score for predicting the probability of a link. It is usually written as
$$CN(x, y) = |\mathcal{N}(x) \cap \mathcal{N}(y)|$$
AA Adamic-Adar is another heuristic method. Unlike CN, which only looks at the first-order neighbors of a node pair, AA also pays attention to the second-order neighborhood of the pair. Its formula is
$$AA(x, y) = \sum_{z \in \mathcal{N}(x) \cap \mathcal{N}(y)} \frac{1}{\log|\mathcal{N}(z)|}$$
PPR Personalized PageRank computes the stationary distribution of a random walker starting at x and ending at y with restarts. Since we focus on undirected graphs, we compute the scores both from x to y and from y to x. The score is given by the following equation, where $q_{xy}$ denotes the stationary probability of y for a random walk with restarts at x.
$$PPR(x, y) = q_{xy} + q_{yx}$$
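For reference, the three heuristic scores can be computed directly from the adjacency structure as below; the restart parameter of personalized PageRank is an assumed value, and this sketch relies on NetworkX rather than the authors' code.

```python
# Hypothetical implementations of the CN, AA and PPR heuristics (illustrative only).
import math
import networkx as nx

def cn_score(G, x, y):
    return len(set(G[x]) & set(G[y]))

def aa_score(G, x, y):
    common = set(G[x]) & set(G[y])
    return sum(1.0 / math.log(G.degree(z)) for z in common if G.degree(z) > 1)

def ppr_score(G, x, y, alpha=0.85):
    # q_xy: stationary probability of y for a random walk with restarts at x (and vice versa)
    q_x = nx.pagerank(G, alpha=alpha, personalization={x: 1.0})
    q_y = nx.pagerank(G, alpha=alpha, personalization={y: 1.0})
    return q_x[y] + q_y[x]
```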
4.3 EXPERIMENT CONFIGURATION
All the heuristic methods (CN, AA & PPR) can be used directly without training. To ensure the evaluation is fair, we tested all six methods on the same set of node pairs for evaluation.
All three NN-based methods (our method, SEAL & GAE) require training on a selection of positive and negative node pairs. To ensure a fair comparison, we trained all three methods on the same sets of node pairs. As mentioned before, we generated 10 sets of experiment data for training. The final results for these NN-based methods are the average performances over all 10 experiments. In addition, to reduce the size of the training data, we sample a proportion of positive node pairs and then sample the same number of negative node pairs for each dataset. The exact proportion for each dataset is consistent across all three NN-based methods and is in fact determined by the number of edges and the size of the features.
In all three GNN-based methods, we use a three-layer GCN with 32 neurons per layer. Classic GCN models take only node features, which is what we use for ogbn-products and PPI. However, as mentioned earlier, the ogbg-ppa dataset only has edge features. To utilize edge features, we modify the message-passing rule such that the message is the concatenation of the node feature, which in this case is the calculated distance value between each node and the target node pair, and the edge feature. In our method and SEAL, where graph-level pooling is needed, we use the pooling method proposed in DGCNN (Zhang et al. (2018a)), which consists of a sort-pooling layer followed by a two-layer convolution network. In this experiment, we use the sort-pooling layer to extract the sorted features from the top 10 nodes (sorted by the largest value of each node).
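A minimal, plain-PyTorch sketch of the modified message-passing rule described above (the message concatenates the neighbor's node feature, here the distance label, with the edge feature) is given below; the layer name, mean aggregation and ReLU are our own illustrative choices rather than the authors' exact implementation.

```python
# Hypothetical message-passing layer that concatenates node and edge features (illustrative).
import torch
import torch.nn as nn

class EdgeFeatureConv(nn.Module):
    def __init__(self, node_dim: int, edge_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(node_dim + edge_dim, out_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: [N, node_dim] node features, edge_index: [2, E] (source, target), edge_attr: [E, edge_dim]
        src, dst = edge_index
        msg = self.lin(torch.cat([x[src], edge_attr], dim=-1))      # per-edge message
        out = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        out.index_add_(0, dst, msg)                                  # sum incoming messages
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1).unsqueeze(-1)
        return torch.relu(out / deg)                                 # mean aggregation + nonlinearity
```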
For our method and SEAL, the prediction of link probability starts from a 32-dimensional graph embedding vector. After the pooling layer, we send the graph embedding to a dense layer with 32 neurons and then another dense layer that generates the output. For GAE, the prediction of link probability requires the node-level features of the two target nodes. In our implementation, after the GNN step, we simply look up the node embedding vectors of the two target nodes and concatenate them. The merged embedding is then sent to the same MLP as described earlier to generate the prediction.
In our method, we use a three-layer MLP with 32 neurons per layer as the discriminator for the graph embedding. The input of the discriminator is the concatenation of two randomly paired graph embeddings, and its target is to predict whether these two embeddings come from the same domain. In the actual implementation, because we train the model with a mini-batch size of 32, this can be done by splitting one batch of 32 embedding vectors into two groups of 16 vectors and comparing between the two groups.
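The batch-splitting trick mentioned above amounts to a few lines of tensor slicing; the sketch below uses our own variable names and is only an illustration.

```python
# Hypothetical construction of discriminator inputs from one mini-batch (illustrative).
import torch

def make_discriminator_batch(embeddings: torch.Tensor, domains: torch.Tensor):
    # embeddings: [32, emb_dim] graph embeddings, domains: [32] integer domain ids
    h1, h2 = embeddings[:16], embeddings[16:]       # split one batch of 32 into two groups of 16
    d1, d2 = domains[:16], domains[16:]
    pairs = torch.cat([h1, h2], dim=-1)             # concatenated embedding pairs fed to the MLP
    same_domain = (d1 == d2).float()                # target: do the two graphs share a domain?
    return pairs, same_domain
```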
In addition, we implement a 20-iteration warm-up stage for both our method and SEAL. In the warm-up stage, the models are trained without the discriminator. The purpose of the warm-up stage is to make the adversarial model easier to train. As reported in the literature, adversarial models are harder to train: if we train the model and the discriminator together from the very beginning, a bad initialization may make it too difficult for the model to confuse the discriminator, which can lead to model collapse. A 20-step warm-up stage may not always bring the model to the best spot, but it offers a good starting point in most cases.
All the models are trained with a learning rate of 0.0001 and L2 regularization of 0.001 for 250 epochs. The discriminators are trained with a learning rate of 0.00001. All the experiments are trained and tested on an NVIDIA V100 card.
4.4 EXPERIMENT RESULTS
In our proposed method, α is an important hyperparameter because it controls the impact of the discriminator on the model. In this experiment, we select α from 1, 3, and 5 and we compare the results with all of our baseline methods. The results are displayed in Table 1 and Figure 2.
First, we observe that, compared with the vanilla SEAL method, the proposed adversarial method yields better performance on underrepresented domains in most cases. The average improvement is around 1.5%, but the level of improvement varies across datasets. On the ogbn-products and PPI datasets, we observe much better performance from the adversarial models, but on the ogbg-ppa dataset the improvement is not significant. We think the reason is that the feature distributions in the ogbg-ppa dataset are highly similar across domains. In this case, the impact of the lack of training data for one specific domain is very small, so the discriminator cannot provide much benefit. We discuss the details of our reasoning later.
The GAE method did not do well in this experiment. Unlike SEAL, GAE lacks the key node feature describing the distance from each node to the target link. Under this few-shot learning setting, the limited number of nodes for training further restricts GAE's ability to learn an effective model.
The performance of the heuristic methods is low as well. One possible explanation is that the link prediction task in these three experimental datasets heavily depends on node/edge features. Since all three heuristic methods only use graph topological structure, they lack enough information to make good predictions.
Impact of Adversarial Training on Graph Embedding
To understand the real impact of adversarial training on the graph embedding, we use t-SNE (van der Maaten & Hinton (2008)) to reduce the dimension of the graph embeddings and visualize them in 2-D space. Figure 3 shows the changes of the graph embeddings for all 24 domains in PPI. From the figure, we can see the dots generated by the adversarial method are more evenly distributed across the 2-D space, and this observation further confirms that our proposed method works as expected.
In addition, by inspecting the t-SNE plot (Figure 4) for the ogbg-ppa dataset, we find that with the baseline SEAL method, the embedding points of the different domains in ogbg-ppa are already evenly distributed. This indicates that there are few inter-domain differences in this dataset. Although the protein graphs come from different organisms, it could be the case that all the graphs are assembled in a similar way, so they are topologically similar. It is also possible that the 7-dimensional edge features extracted for this dataset do not reflect domain differences.
In fact, comparing Figure 3 and Figure 4 shows that the t-SNE visualization of the graph embedding vectors is a good tool for machine learning practitioners to decide when to adopt the proposed approach. If there are distinguishable differences across domains in the t-SNE plot, the proposed method will likely yield some improvement.
Training Curves, Run time and Impact of α
We also look into the impact of the adversarial design on the training curves. Figure 5 shows the smoothed training curves of 10 experiments with the ogbn-products dataset on domain 2. The shaded area represents the standard error. Note that we exclude the performance during the 20-step warm-up stage to zoom in on the most important region. As shown in Figure 5, the baseline performance quickly drops after 50 iterations, even with L2 regularization. This is mostly due to the fact that the evaluation domain is underrepresented in the training data (recall that in the training data, we have 1 shot of data in the evaluation domain and 10 shots of data for every other domain). By enforcing, through adversarial training, that the graph embeddings from different domains are similar, the embedding generated by our method is more likely to work well with the evaluation domain. Therefore, all three adversarial experiments reach better peak performance than the baseline SEAL method.
In addition, by increasing the value of α, we increase the impact of the discriminator on the target model. In this case, α = 3 seems to provide the best performance. However, if we look at the average best performances, the best α is in fact 5. A closer look into the individual training curves tells us that some of the best models were trained with α = 5. However, a large α value also makes the model less stable, and it is more likely that the model collapses during training. Therefore, we conclude that when domain differences exist, a stronger discriminator helps the model reach better performance but makes it more difficult to train. Eventually, there is a sweet spot that balances performance and model stability.
Figure 5 also tells us that, compared with the baseline SEAL method, our method takes more steps to converge, and increasing the value of α increases the number of steps to converge as well. At the same time, the adversarial model also takes more time to train because the graph embedding needs to be computed twice, to collect gradients for the target model and for the discriminator. On average, it takes 1.134s to finish one training step for the ogbn-products dataset using baseline SEAL on an NVIDIA V100 graphics card, but 1.406s (a 24.0% increase) using the adversarial method.
5 CONCLUSION
In this work, we propose an adversarial training-based approach to solve the few-shot graph link prediction problem. Compared with the state-of-the-art method and other baselines, our method achieves better performance on underrepresented domains, especially when the domain variance is non-trivial. We further examine the changes of the graph embeddings using t-SNE and reveal the root of this improvement. At the same time, by showing the t-SNE visualizations for cases where the proposed method does not provide benefits, we show that t-SNE is an efficient tool for practitioners to determine whether our method will be beneficial. | 1. What is the main contribution of the paper in the field of few-shot graph link prediction?
2. What are the strengths of the proposed approach, particularly in terms of domain-invariance?
3. What are the weaknesses of the paper regarding its motivation, definition of "domain", and application of adversarial learning?
4. How does the reviewer assess the novelty and effectiveness of the proposed method compared to baselines? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method for few-shot graph link prediction by using a domain discriminator. The motivation is to learn domain-invariant graph-level representations. The authors also introduce the concept of “shot” in the graph. The proposed method outperforms baselines on three different datasets.
Review
Strength:
This paper introduces adversarial learning (which is widely used in UDA methods) to few-shot graph link prediction problems to learn domain-invariant representations.
Weaknesses:
1. The motivation of this work should be further justified. In few-shot learning, we usually consider how to leverage a few instances to learn a generalizable model. This paper defines and creates a few-shot situation for graph link prediction, but the proposed method does not consider how to effectively use “few-shot” and how to guarantee the trained model can be generalized well to new tasks with 0/few training steps.
2. The definition of “domain” in this paper is unclear. For instance, why select multiple domains from the same single graph in ogbn-products? Should we consider the selected domains as “different domains”?
3. The application of adversarial learning in few-shot learning is confusing. Adversarial learning in domain adaptation aims to learn domain-invariant representations, but why do we need such kind of representation in few-shot learning?
ICLR | Title
On the Dynamics of Training Attention Models
Abstract
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great success established empirically, the working mechanism of attention has not been investigated at a sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for the discriminative words that the model should attend to, a persisting identity exists relating its embedding and the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed, which validate our theoretical analysis and provide further insights.
1 INTRODUCTION
Attention-based neural networks have been broadly adopted in many natural language models for machine translation (Bahdanau et al., 2014; Luong et al., 2015), sentiment classification (Wang et al., 2016), image caption generation (Xu et al., 2015), and the unsupervised representation learning (Devlin et al., 2019), etc. Particularly in the powerful transformers (Vaswani et al., 2017), attention is its key ingredient.
Despite its great successes established empirically, the working mechanism of attention has not been well understood (see Section 2). This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism. We study the model’s training trajectory to understand why attention can attend to the discriminative words (referred to as the topic words). More specifically, in this task, each sentence is treated as a bag of words, and its class label, or topic, is indicated by a topic word. The model we consider involves a basic attention mechanism, which creates weighting factors to combine the word embedding vectors into a “context vector”; the context vector is then passed to a classifier.
In this setting, we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query, referred to as the “score”, during gradient-descent training. It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration. This relationship suggests the existence of a “synergy” in the amplification of the topic word score and its word embedding; that is, the growths of the two quantities promote each other. This, in turn, allows the topic word embedding to stand out rapidly in the context vector during training. Moreover, when the model takes a fixed linear classifier, this relationship allows rigorous proofs of this “mutual promotion” phenomenon and the convergence of training to the topic words.
Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets. Additional insights are also obtained from these experiments. For example, low-capacity classifiers tend to give stronger training signals to the attention module. The “mutual promotion” effect implied by the discovered relationship can also exhibit itself as “mutual suppression” in the early training phase. Furthermore, in the real-world datasets, where a perfect delimitation of topic and non-topic words does not exist, interesting training dynamics are observed. Due to length constraints, all proofs are presented in the Appendix.
2 RELATED WORKS
Since 2019, a series of works have been published to understand the working and behaviour of attention. One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations (Michel et al., 2019; Voita et al., 2019; Jain & Wallace, 2019; Wiegreffe & Pinter, 2019; Serrano & Smith, 2020; Vashishth et al., 2020). Most of these works are empirical in nature, for example, analyzing the behaviours of a well-trained attention-based model (Clark et al., 2019), observing the impact of altering the output weights of the attention module or pruning a few heads (Michel et al., 2019; Voita et al., 2019), or a combination of them (Jain & Wallace, 2019; Vashishth et al., 2020). Apart from acquiring insights from experiments, Brunner et al. (2019) and Hahn (2020) show theoretically that the self-attention block lacks identifiability, where multiple weight configurations may give equally good end predictions. The non-uniqueness of the attention weights therefore makes the architecture lack interpretability.
As a fully connected neural network with infinite width can be seen as a Gaussian process (Lee et al., 2018), a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers (Yang, 2019; Hron et al., 2020). In this paper, we restrict our study to the more realistic non-asymptotic regime.
3 PROBLEM SETUP
Learning Task To obtain insights into the training dynamics of attention models, we set up a simple topic classification task. Each input sentence contains m non-topic words and one topic word indicating its topic. Note that a topic may have multiple topic words, but a sentence is assumed to include only one of them. Assume that there are J topics that correspond to the mutually exclusive topic word sets $T_1, T_2, \dots, T_J$. Let $T = \bigcup_{j=1}^{J} T_j$ be the set of all topic words. The non-topic words are drawn from a dictionary Θ, which is assumed not to contain any topic word.
The training set Ψ consists of sentence-topic pairs, where each pair (χ, y) is generated by (1) randomly pick a topic y ∈ {1, 2, · · · , J} (2) pick a topic word from set Ty and combine it with m words drawn uniformly at random from Θ to generate the sentence (or the bag of words) χ. In this task, one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way.
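The data-generating process above can be sketched in a few lines; the dictionary sizes used in the example match those later reported in Section 5.1, while the variable names are our own.

```python
# Hypothetical generator for the synthetic topic-classification task (illustrative).
import random

def generate_sample(topic_words, non_topic_dict, m):
    """topic_words: list of J disjoint topic-word lists. Returns (bag of words, topic label)."""
    y = random.randrange(len(topic_words))               # (1) pick a topic uniformly at random
    sentence = [random.choice(topic_words[y])]           # (2) one topic word from T_y ...
    sentence += random.choices(non_topic_dict, k=m)      # ... plus m words drawn from Theta
    random.shuffle(sentence)
    return sentence, y

# Example configuration: 4 topics x 2 topic words, |Theta| = 5000, m = 20 non-topic words
topic_words = [[f"t{j}_{i}" for i in range(2)] for j in range(4)]
non_topic_dict = [f"w{i}" for i in range(5000)]
train_set = [generate_sample(topic_words, non_topic_dict, m=20) for _ in range(800)]
```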
We will consider the case that |Θ| >> |T|, which implies that a topic word appears much more frequently in the sentences than a non-topic word.
Attention Model For this task, we consider a simple attention mechanism similar to the one proposed by Wang et al. (2016). Each word w is associated with two parameters: an embedding $\nu_w \in \mathbb{R}^d$ and a key $\kappa_w \in \mathbb{R}^{d'}$. Based on a global query $q \in \mathbb{R}^{d'}$, the context vector of sentence χ is computed by
$$\bar\nu(\chi) = \sum_{w \in \chi} \nu_w \frac{\exp(q^T \kappa_w)}{Z(\chi)}, \quad \text{where } Z(\chi) = \sum_{w' \in \chi} \exp(q^T \kappa_{w'}).$$
Then $\bar\nu(\chi)$ is fed into a classifier that predicts the sentence's topic in terms of a distribution over all topics.¹
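The context-vector computation above is a softmax-weighted average of the word embeddings; a minimal PyTorch sketch (tensor names are our own) is:

```python
# Hypothetical computation of the context vector (illustrative).
import torch

def context_vector(embeddings: torch.Tensor, keys: torch.Tensor, query: torch.Tensor):
    # embeddings: [n_words, d], keys: [n_words, d'], query: [d']
    scores = keys @ query                      # s_w = q^T kappa_w for every word in the sentence
    weights = torch.softmax(scores, dim=0)     # exp(s_w) / Z(chi)
    return weights @ embeddings                # weighted sum of word embeddings

emb, keys, q = torch.randn(21, 15), torch.randn(21, 8), torch.randn(8)
ctx = context_vector(emb, keys, q)             # shape [15], fed to the classifier
```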
Denote the loss function by l(χ, y). Our upcoming analysis implies this attention model, although simple, may capture plenty of insight in understanding the training of more general attention models.
Problem Statement Our objective is to investigate the training dynamics, under gradient descent, of this attention model. In particular, we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training. Moreover, we wish to investigate, beyond this setup, how the model is optimized when there is no clear delimitation between topic and non-topic words, as in real-world data.
1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 but this condition is not required for most results presented in Sections 4 and 5. More discussions are given in Appendix A in this regard.
4 THEORETICAL ANALYSIS
It is common to fix some parameters when we train a model with limited resources. Moreover, fixing the query is harmless in our setting:
Lemma 1. Assume $q \neq 0$ when initialized. Fixing it does not affect the attention block's capacity.
Thus, our upcoming discussion focuses on the case in which the query is fixed. Doing so also allows us to establish a closed-form expression connecting the word’s embedding and the inner product of its key and the query. In Appendix B, extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present.
For a topic word t, let Ψt denote the training samples involving it. Then, by gradient descent,
$$\Delta\nu_t = \frac{\tau}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} \nabla_{\bar\nu(\chi)} l(\chi, y)\, \frac{\exp(q^T \kappa_t)}{Z(\chi)} \qquad (1)$$
$$\Delta\kappa_t = \frac{\tau}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} q\, (\nu_t - \bar\nu(\chi))^T \nabla_{\bar\nu(\chi)} l(\chi, y)\, \frac{\exp(q^T \kappa_t)}{Z(\chi)}, \qquad (2)$$
where τ denotes the learning rate. As it will turn out, an important quantity in this setting is the inner product $s_w = q^T \kappa_w$ of the query q and the key $\kappa_w$, which we refer to as the score of the word w.
Denoting $v_w = \|q\|_2 \nu_w$, $\eta = \tau \|q\|_2$, $\bar v(\chi) = \sum_{w \in \chi} \frac{\exp(s_w)}{Z(\chi)} v_w$, and $h(\bar v(\chi); y) = \nabla_{\bar\nu(\chi)} l(\chi, y)$, for a topic word t, the dynamics simplifies to
$$\Delta v_t = \frac{\eta}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (3)$$
$$\Delta s_t = \frac{\eta}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} (v_t - \bar v(\chi))^T h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (4)$$
In the rest of the paper, whenever we refer to the embedding of word t, we actually mean vt not νt.
Our analysis assumes the word embeddings are sampled i.i.d. from a distribution with mean zero and variance $\frac{\sigma^2}{d}$, where $\sigma^2$ is assumed close to zero. The word keys and the query are also sampled from zero-mean distributions with a possibly different variance. We assume that this variance is so small that the initial word scores are approximately zero. This assumption on the initial configuration corresponds to the attention model starting as a word-averaging model, and allows us to investigate how the model deviates from this initial setting with training. We also assume the derivative $h(\bar v(\chi); y)$ of $l$ is Lipschitz continuous in $\bar v(\chi)$ throughout training. Further, the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that, with a sufficient number of training samples, the occurrence rate of a topic word is significantly higher than that of the non-topic ones. This justifies the following assumption, which we use throughout our analysis. Assumption 1. The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words.
Hence, our upcoming analysis will treat the scores and embeddings of the non-topic words as constants. Assumption 1 will be validated by experimental results presented in Section 5.
By selecting a sufficiently small η, we can take the gradient-descent updates in Eq (3) and Eq (4) to their continuous-time limit and get²
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (5)$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi, y) \in \Psi_t} (v_t - \bar v(\chi))^T h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (6)$$
²Reversely, Eq (3) is a discretized approximation of Eq (5): $v_t(t+1) - v_t(t) = \int_t^{t+1} \frac{dv_t(t')}{dt'}\, dt' \approx 1 \cdot \frac{dv_t(t)}{dt} = \Delta v_t(t)$. The approximation becomes accurate if $v_t(t+1)$ is close to $v_t(t)$, which can be achieved by choosing a sufficiently small η. Likewise, Eq (4) is a discretized approximation of Eq (6).
We can then characterize the update of the score and the embedding of a topic word as a continuous-time dynamical system, stated in Lemma 2. The same technique has been used to analyze the training of neural networks in other contexts (Saxe et al., 2014; Greydanus et al., 2019).
Lemma 2. For sufficiently small η and $\sigma^2$, the score $s_t$ and embedding $v_t$ of topic word t satisfy
$$\frac{dv_t}{dt} = \frac{\eta |\Psi_t|}{|\Psi|} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \qquad (7)$$
$$\frac{ds_t}{dt} = \left[ \left( v_t - \langle \bar v(\chi \setminus t) \rangle_{\Psi_t} \right)^T \frac{dv_t}{dt} \right] \left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t}^{-1}, \qquad (8)$$
where $Z(\chi \setminus t) = \sum_{w \in \chi \setminus \{t\}} \exp(s_w)$, $\bar v(\chi \setminus t) = \sum_{w \in \chi \setminus \{t\}} v_w \frac{\exp(s_w)}{Z(\chi \setminus t)}$, and $\langle \cdot \rangle_{\Psi_t}$ denotes taking the sample mean over the set $\Psi_t$.
Eq (7) implies the speed of moving $v_t$ along the direction of $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ is controlled by the attention weight $\frac{\exp(s_t)}{Z(\chi)}$. Eq (8) shows that $s_t$ increases if and only if $v_t$ has a greater projection on $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ than the weighted average of the non-topic word counterparts. Consider a simplified case where $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ is fixed. Since the change of $v_t$ is much faster than the non-topic word counterparts, $v_t$ will have a larger projection on $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ after a few epochs of training. Then $s_t$ increases as well as its attention weight, which in turn speeds up the extension of the embedding $v_t$. This observation reveals a mutual enhancement effect between the score increment and the embedding elongation. In fact such an effect exists in general, as stated in the theorem below, irrespective of whether $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ is fixed.
Theorem 1. In the setting of Lemma 2, from epoch $t_0$ to $t_1$, the topic word score $s_t$ and its embedding $v_t$ satisfy
$$\left[ s_t(t) + \exp(s_t(t)) \left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \right]_{t_0}^{t_1} = \left[ \frac{1}{2} \left\| v_t(t) - \langle \bar v(\chi \setminus t) \rangle_{\Psi_t} \right\|_2^2 \right]_{t_0}^{t_1}. \qquad (9)$$
Following from Lemma 2, this theorem implies a positive relationship between the topic word score and the distance between $v_t$ and the non-topic word embedding average $\langle \bar v(\chi \setminus t) \rangle_{\Psi_t}$. Remarkably, this result makes no reference to $\langle h(\bar v(\chi); y) \rangle_{\Psi_t}$ and is hence independent of it. This implies the identity in Eq (9) holds irrespective of the choice and setting of the classifier. Theorem 1 further implies a score and embedding norm (“SEN” in short) relationship for the topic words:
Corollary 1. In the context of Theorem 1, by setting $t_0 = 0$ and $t_1 = t$, Eq (9) is reduced to
$$\|v_t(t)\|_2 = \sqrt{2 \left( s_t(t) + \frac{\exp(s_t(t))}{m} - \frac{1}{m} \right)}, \qquad (10)$$
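As a quick numerical illustration of Eq (10) (our own sanity check, not part of the paper's experiments), the predicted embedding norm can be tabulated for a few score values:

```python
# Illustrative evaluation of the theoretical SEN curve in Eq (10).
import math

def sen_norm(s_t: float, m: int) -> float:
    # predicted ||v_t||_2 for a given score s_t and m non-topic words per sentence
    return math.sqrt(2 * (s_t + math.exp(s_t) / m - 1.0 / m))

for s in [0.0, 0.5, 1.0, 2.0]:
    print(s, round(sen_norm(s, m=20), 4))      # the norm is 0 at s_t = 0 and grows with s_t
```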
The corollary indicates that ||vt(t)||2 is monotonically increasing with st(t). So, st increases if and only if the point vt departs from its initial location. That is, if the norm of the topic word embedding increases, it will be attended to. This result is independent of the configuration of all other network layers. Thus, if 〈h(v̄(χ); y)〉Ψt has a gradient field that pushes vt away from its original location, the topic word is expected to be attended to. This statement can be made precise, as in Theorem 2, when the model uses a linear classifier.
Theorem 2. Assume the model has a fixed classifier in the form c(v̄(χ)) = softmax(UT v̄(χ)), where the columns of U are linearly independent, and the model is trained using gradient descent with the cross-entropy loss. As training proceeds, the model will attend to the topic word in every input sentence and have its training loss approach zero.
It is notable that the theorem holds broadly for any arbitrary fixed linear classifier (subject to the mild linear independence constraint on its parameter U). Additionally, we anticipate that this result holds for a much wider family of classifiers, including trainable and even nonlinear ones. Rigorous proof appears difficult to obtain in such settings, however, and we instead corroborate this claim in an experimental study in Section 5.
To sum up, in this section, we have shown two main results: (a) there is a closed-form positive relationship, the SEN relationship, between the topic word score and its embedding norm, which is independent of the configuration of the classifier. (b) the model, equipped with a fixed linear classifier stated in Theorem 2, can be trained to have all topic words attended to.
5 EXPERIMENTAL STUDY
In this section, we first test our model on an artificial dataset generated through the procedure introduced in Section 3. The test corroborates our theoretical results and validates their assumptions. Our test results suggest that the attention mechanism introduces a synergy between the embedding and the score of topic words.
Another experiment is performed on the real datasets SST2 and SST5 (Socher et al., 2013). The experiment results suggest that the SEN relationship of topic words holds at least in initial training stages. As training proceeds, some words appear to deviate from the theoretical trajectories. Further analysis of this behaviour provides additional insights into the attention model’s training dynamics on real-world datasets, often possessing a much more complex structure as well as rich noise. We performed all experiments using PyTorch (Paszke et al., 2017).
We performed our experiments on three models, Attn-FC, Attn-TC and Attn-TL, having the same attention block but different classifiers. The first two have a classifier of the form $c(\bar v(\chi)) = \mathrm{softmax}(U^T \bar v(\chi))$ and the last of the form $c(\bar v(\chi)) = \mathrm{softmax}(U_2^T\, \mathrm{ReLU}(U_1^T \bar v(\chi) + b_1) + b_2)$. Except for U in Attn-FC, which is fixed, the other parameters of the three models are trainable and optimized using the cross-entropy loss.
Since a real-world dataset does not have a topic word as the sentence topic indicator, we introduce a word “topic purity” measurement to facilitate our discussion of the experiments performed on SST2. Let $\delta^+(w)$ and $\delta^-(w)$ respectively denote the portions of positive and negative sentences among all training samples containing word w. Then the topic purity of w is $\delta(w) = |\delta^+(w) - \delta^-(w)|$. If $\delta(w) = 1$, w is either a purely positive or purely negative topic word. If $\delta(w) = 0$, then $\delta^+(w) = \delta^-(w) = 0.5$, which implies w has a completely random topic correspondence.
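The topic-purity measure can be computed directly from the training labels; the sketch below assumes the training set is a list of (word list, binary label) pairs, which is our own illustrative data layout.

```python
# Hypothetical computation of the topic purity delta(w) on a binary dataset (illustrative).
from collections import defaultdict

def topic_purity(train_set):
    """train_set: iterable of (list_of_words, label) with label 1 = positive, 0 = negative."""
    pos, total = defaultdict(int), defaultdict(int)
    for words, label in train_set:
        for w in set(words):                       # count each word once per sentence
            total[w] += 1
            pos[w] += label
    purity = {}
    for w in total:
        delta_pos = pos[w] / total[w]              # delta^+(w)
        purity[w] = abs(2 * delta_pos - 1)         # |delta^+(w) - delta^-(w)|, since delta^- = 1 - delta^+
    return purity
```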
5.1 EXPERIMENTS ON SYNTHETIC DATASETS
The artificial dataset, consisting of 800 training and 200 test samples, is generated through the procedure introduced in Section 3. The dataset has four topics, and each contains two topic words. There are 20 non-topic words per sentence and the non-topic word dictionary size M = 5, 000.
Our experiments use the same embedding dimension 15 for all three models. Regarding the classifiers, Attn-FC and Attn-TC adopt $U \in \mathbb{R}^{15 \times 4}$, while Attn-TL takes $U_1 \in \mathbb{R}^{15 \times 10}$, $b_1 \in \mathbb{R}^{10}$, $U_2 \in \mathbb{R}^{10 \times 4}$ and $b_2 \in \mathbb{R}^4$. For the validation of Theorem 2, the columns of U in Attn-FC are set to be orthonormal and thus linearly independent. Unless otherwise stated, the scores are set to zero and the embeddings are initialized from a normal distribution with mean zero and variance $\frac{\sigma^2}{d} = 10^{-6}$. We trained the models using gradient descent with learning rate η = 0.1 for 5K epochs before measuring their prediction accuracy on the test samples. When training is completed, all three models achieve a training loss close to zero and 100.0% test accuracy, which implies the trained models perfectly explain the variations in the training set and generalize well to the test set.
Verification of Assumption 1 and validation of Corollary 1 and Theorem 2. We repeated the experiments for five runs and plotted the empirical score distributions of the non-topic and topic words of the three well-trained models with their 95% confidence intervals in the first two graphs of Fig 1. Compared to the topic words, the scores of the non-topic words are nearly unchanged throughout the entire training process. Likewise, the next two plots show the embedding norms of the non-topic words are nearly constant, too. This implies Assumption 1 indeed holds. Fig 2 plots the empirical and the theoretical SEN curves of a randomly picked topic word for the three models, where the theoretical curve has the expression stated in Eq (10). The coincidence of the empirical and the theoretical curves in all three models validates the SEN relationship stated in Eq (10) and
its independence of the later layers.³ Moreover, Fig 1 (left two) shows the scores of the topic words exceed the non-topic word counterparts by two orders of magnitude, which implies the topic words are attended to in a well-trained model. Since, as reported earlier, Attn-FC reaches a training loss of roughly zero when training is completed, Theorem 2 is confirmed.
Lower-capacity classifiers result in stronger attention effects. The comparison, among the SEN distributions of the topic words in the three models, implies that a lower-capacity classifier leads to greater topic word SEN, which means a more drastic attention decision. This happens because the classifier of a larger capacity can explain more variations of the sample distribution and has more freedom to accommodate and absorb the correcting gradient signals. As a result, the attention layer receives a weaker gradient on average, which makes the embeddings of the topic word extend less from the original point. Thus, as implied by Eq (10), the magnitudes of the scores of the topic words are dampened, and therefore a weaker attention effect will be expected. This observation hints that if we know attending to the right words can explain most of the variations in sample distributions, we should consider a low capacity classifier (or later layers in general). Alternatively, we may also freeze the classifier in the initial stage of a training process, forcing the attention layer to explain more variations. Remarkably, all these modifications do not affect the relationship stated in Eq (10).
Synergy between score growth and embedding elongation in topic words. Eq (10) implies a positive SEN relationship for the topic word. That is, a larger topic word embedding norm results in a larger score (and thus a larger attention weight), which in turn makes the embedding extend faster. To corroborate this claim, we performed an ablation study considering two variants of Attn-TC. The first has the scores frozen (referred to as Attn-TC-KF) and the second has the embeddings fixed (referred to as Attn-TC-EF). In this experiment, the embeddings of the models are initialized from a normal distribution with mean zero and variance $\frac{\sigma^2}{d} = 0.1$. We trained all three models by gradient descent for 60K epochs with learning rate η = 0.1, which is sufficient for all three models to fully converge. All three trained models reached 100% test accuracy.
3In Appendix B, the experiments with trainable queries are also implemented. The results indicate that the trainability of queries do not affect the positive SEN relationship. Besides, the query fixed model has very similar training dynamics to the one with a trainable query and a large initial norm.
The first three graphs of Fig 3 describe the evolution of the three models in the first 3K training epochs. For a randomly picked topic word, the first plot shows its score in Attn-TC grows faster than the one in Attn-TC-EF. Note that the score in Attn-TC-EF finally surpasses the one in Attn-TC because Attn-TC has converged at around 1K epochs. Likewise, the word embedding norm in Attn-TC increases more rapidly than the one in Attn-TC-KF before Attn-TC converges. The observations imply the attention introduces a mutual enhancement effect on training the topic word’s score and its embedding, which makes Attn-TC enjoy the fastest training loss drop as shown in the third plot.
The mutual enhancement could become the mutual diminution in the early training stage if the initial embedding of the topic word has a negative projection on the direction that it will move along. This effect can be precisely characterized by Eq (8). Assume the embedding of a topic word is initialized to have a smaller projection on the gradient, passed from the classifier, than the average of the non-topic words. The reversed order of the projections makes the score of the topic word decrease as its embedding has a “negative effect” on the training loss compared to the average of the non-topic word embeddings. This will, in turn, impede the elongation of the embedding vector or even make it shrink (see the last two plots of Fig 3). The “negative effect” cannot last long because the topic word embedding moves along the gradient much faster than the non-topic words due to its high occurrence rate in the training samples. By Eq (8) again, dstdt will finally become positive. That is, the score of the topic word starts to increase and its attention weight will surpass the one of the word-averaging model (see the second last plot of Fig 3). Then, we start to observe the mutual enhancement effect, which is indicated in the last plot: the increase speed of the Attn-TC’s embedding norm exceeds the Attn-TC-KF’s since around the 370-th epoch.
5.2 EXPERIMENTS ON SST2 AND SST5
The second part of the experiment is performed on the datasets SST2 and SST5, which contain movie comments and ratings (positive or negative in SST2 and one to five stars in SST5). For simplicity, we limit our discussion to Attn-FC and Attn-TC, using the same configurations as in our previous experiments except that the embedding dimension is set to 200. Note that our goal is not to find a state-of-the-art algorithm but to verify our theoretical results and further investigate how an attention-based network works.
For both SST2 and SST5, we trained the two models by gradient descent with learning rate η = 0.1 combined with the early stopping technique (Prechelt, 2012) of patience 100. As PyTorch requires equal length sentences in a batch, we pad all the sentences to the same length and set the score of the padding symbol to the negative infinity. Under this configuration, the trained Attn-FC and Attn-TC reached 76.68% and 79.59% test accuracy on SST2 and 38.49% and 40.53% on SST5.
Validation of Corollary 1. As the true topic words are unknown in a real dataset, we checked the fifty words with the largest scores after training is completed. We observed that most of these words have SEN curves close to our theoretical prediction. We picked two words for each model-dataset combination and plotted their curves with the theoretical counterparts in Fig 4.
The competition of two topic word candidates with different occurrence rates and topic purities. To better understand how the attention block works, we investigated the cases where the empirical and the theoretical SEN curves disagree. We limit our discussion to the models trained on SST2.
For both Attn-FC and Attn-TC, we noticed that there are mainly two types of deviations, as shown in the first two columns of Fig 5. In the first one, the theoretical curve overestimates the score growth of powerful in terms of the embedding norm, while best, shown in the second column, experiences a score drop combined with a failed theoretical prediction in the final training stage. These two types of disagreement in fact occur in pairs, caused by a pair of words where one appears frequently in the training samples but has a low topic purity, while the other has the opposite properties.
Regarding the (powerful, best) pair, best appears in 128 training samples, 103 of which are positive. In comparison, powerful appears in 36 training samples, all of which are positive. The large difference in the number of samples makes the embedding norm of best extend much faster than that of powerful in the initial training process (Fig 5, third column). As the positive SEN relationship implies, a quicker embedding-norm increase of best results in faster score growth. Therefore, best will be more attended to (Fig 5, last column), which further accelerates its SEN increase (this is the synergy between the embedding and the score demonstrated in the ablation study). This process does not stop until the gradient from the classifier diminishes because of best's low topic purity (similar to the case when label smoothing (Szegedy et al., 2016) is applied to alleviate overfitting). In contrast, powerful is less trained initially due to its low occurrence rate, but its high topic purity makes the direction of the gradient stable, along which its embedding steadily elongates. The elongation will, at last, let the embedding have a greater projection on the gradient vector than the average of the other words appearing in the same sentence. Thus, the score of powerful starts to increase, as shown by Eq (8) and plotted in the second last column of Fig 5. Conversely, as the gradient magnitude drops, the embedding of best extends at a decreasing speed, and its projection on the gradient passed from the classifier is finally surpassed by words that co-occur with it but have a higher topic purity (like powerful). Thus, its score eventually starts to drop.
The dynamics of topic purity of attended words The analysis of the inconsistency between the empirical and theoretical SEN curves hints that the disagreement is strongly related to a word's topic purity and its occurrence rate. To better characterize their dynamics, for a word w, let $A_w(t)$
be the list, at epoch t, that records the attended word (i.e., the word with the largest score) in each sentence containing w. If multiple words in a sentence have the same score, we randomly pick one of them. Note that $A_w(t)$ may contain repeated words, as a word could be attended to in multiple sentences. We selected w to be the words having the largest five scores in the well-trained Attn-FC and Attn-TC, respectively. At various epochs t, Fig 6 plots how the average topic purity $\delta(w')$ evolves for $w' \in A_w(t)$, as well as the average number of occurrences in the training samples. Both models initially attend to words that mostly have low topic purity and a high occurrence rate. As the training proceeds, the average topic purity of the attended words increases while the average occurrence rate drops. At the end of training, almost all the attended words have a close-to-one topic purity. Fig 7 shows the evolution of the average topic purity and the average occurrence rate of the attended words over the entire training set. While a similar changing pattern can be observed, the average topic purity is lower than the one presented in Fig 6 when the training is completed. We argue that this happens because some sentences do not have any high-topic-purity words, or their high-topic-purity words have too low an occurrence rate to be sufficiently trained.
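The attended-word statistic $A_w(t)$ and its average topic purity can be computed as below; the data structures (a word-to-score dictionary and a list of tokenized sentences) are assumptions made for this sketch.

```python
# Hypothetical computation of the attended-word list A_w(t) and its average purity (illustrative).
import random

def attended_words(sentences, scores):
    """For each sentence (list of words), return the word with the largest score; ties broken randomly."""
    attended = []
    for sent in sentences:
        best = max(scores[w] for w in sent)
        attended.append(random.choice([w for w in sent if scores[w] == best]))
    return attended

def avg_purity_of_attended(w, sentences, scores, purity):
    # average topic purity of the attended words over the sentences containing w
    A_w = attended_words([s for s in sentences if w in s], scores)
    return sum(purity[x] for x in A_w) / max(len(A_w), 1)
```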
6 CONCLUSION
This paper investigated the dynamics of training a series of attention-based bag-of-words classifiers on a simple artificial topic classification task. We have shown a persisting closed-form positive SEN relationship for the words to which the model should attend. This result is independent of the configurations of the later layers. Through this result, we have proved that the model must converge to attending to the topic word, with the training loss close to zero, if the output of the attention layer is fed into a fixed linear classifier. A series of experiments confirm these results.
The experimental results indicate that the attention block tends to make a more drastic decision if its later layers have a lower capacity. This hints that a classifier's limited capacity may help if “selecting” the right word explains most of the variations in the training samples. An ablation study shows a synergy between the topic word score's growth and its embedding elongation, leading to a faster training loss drop than the fixed-score and fixed-embedding variants. Besides, we have shown that this “mutual promotion” effect can also exhibit itself as “mutual suppression” in the initial training stage.
We investigated the competition of two topic word candidates with large differences in the topic purity and the occurrence rate in the training samples. The words of a higher occurrence rate but possibly low topic purity are more likely to be attended to initially. However, as the training proceeds, the attended words are gradually replaced by those of higher topic purity.
A THE RESULTS OF THE MODEL THAT THE ATTENTION BLOCK ATTENDS TO A CNN LAYER
The main text focuses on a model in which the attention block directly attends to the word embeddings. Such a design simplifies our analysis but should not be considered a precondition for our results to hold. In particular, the positive SEN relationship generally holds regardless of the other components of the network, which can be justified as follows. There are two correlated sources of gradient signals: one back-propagates from the score to update the key and the query, the other back-propagates from the classifier loss to update the word embedding. The correlation governs the positive relationship between the score and the embedding norm. Although the integration of other modules makes the analysis harder, the relationship should persist. Hence, all the results depending on this relationship remain valid.
We empirically verify our claims by implementing an experiment similar to the one discussed in the main text. We modified Attn-TC (naming the result Attn-TC-CNN) by adding two parallel CNN layers that respectively process the word embeddings and the keys before feeding them into the attention block. Then, we construct a data generation process analogous to the one introduced in Section 3. Finally, we empirically show that Assumption 1 and the positive SEN relationship still hold.
The Attn-TC-CNN has the same configurations as Attn-TC (introduced in Section 5.1) except that we added two parallel CNN layers to preprocess the word embeddings and the keys. The CNN layer processing the word embeddings has a kernel of size d × 2 and stride 1. We used d kernels to keep the word embedding dimension unchanged. So, for two consecutive words in a sentence, the CNN mixes their embeddings and produces a new one. Given that the sentence has m + 1 words, the input embedding matrix has shape d × (m + 1) and the output has shape d × m. Likewise, the CNN layer processing the keys has kernel size d′ × 2 and stride 1, with d′ kernels in total.
Consider a classification problem containing two topics A and B. The sentences of the two topics are generated by two Markov chains, respectively, which are constructed as follows:
1. Let Li (i = A,B) be two mutually exclusive sets of ordered word pairs. The word pairs do not contain repeating words.
2. For i = A,B:
(a) Initialize Markov Chain (MCi) of words in the dictionary such that from a word, there is a uniform probability of moving to any words (including itself) in one step.
(b) Group the word pairs in Li according to their leading words. For each group, let s denote the shared leading word and $e_i$ ($i = 1, 2, \dots, n$) the second words. Set the probability of moving from s to $e_i$ in one step to $\frac{1}{n}$ and the probability of moving to any other word to zero.
We call the word pairs in Li (i = A, B) the topic word pairs. For each pair of words, a new embedding and a new key are generated by feeding their original embeddings and keys into the CNNs. We refer to the new embedding as the topic word-pair embedding and the new key as the topic word-pair key. Likewise, for any other word pair, the newly generated embeddings and keys are referred to as the non-topic word-pair embeddings and keys.
Assume the training dataset contains sentences of length m + 1. We generate it by repeating the following procedure:
1. uniformly sample a topic i ∈ {A,B}. 2. uniformly sample a word pair from Li and let the first word be the starting word.
3. sample another m words by running the Markov process for m times.
In our experiments, the word dictionary has 200 words, and so there are 40,000 word pairs. The sentence length is set to 10. For i = A, B, each Li contains two randomly picked word pairs as the topic word pairs. Note that we have ensured that LA and LB are mutually exclusive. We used 4,000 samples for training and 1,000 for testing. The model was trained for 3,000 epochs and achieved 99.89% test accuracy.
We repeated the experiments for ten runs and plotted the distributions, along with the 95% confidence intervals, of the scores and the embedding norms of the topic and the non-topic word pairs in Fig 8. Note that the word-pair embeddings and keys are generated from the CNN layers, so the initial embeddings and scores are not very close to the origin. To facilitate the verification of Assumption 1, we centered them by subtracting their initial values before plotting their distributions. From Fig 8, we observe that the non-topic word-pair scores and embeddings are nearly unchanged in comparison to their counterparts for the topic ones. Therefore, we have shown that Assumption 1 largely holds even when we add CNN layers to process the word embeddings and the keys before feeding them into the attention block.
Randomly picking a topic word pair, we plotted its empirical and theoretical SEN curves over ten runs in Fig 9. The figure shows that the positive SEN relationship holds even if the attention layer attends to other layers instead of the word embeddings directly. Therefore, all the results that rely on this relationship remain valid.
B DISCUSSION AND EXPERIMENTAL RESULTS OF THE MODELS WITH A TRAINABLE QUERY
In Section 4, we assumed a fixed and non-trainable query, which allows the derivation of a clean closed-form "SEN" relation. But it is worth emphasizing that the positive relationship between the score and the embedding norm in fact exists regardless of the trainability of the query. As we have mentioned in Appendix A, there are two correlated sources of gradient signals: one back-propagates from the score to update the key and query, the other back-propagates from the classifier loss to update the word embedding. This correlation governs the positive relationship between the score and the embedding norm. Whether the query is trainable does not alter the existence of this correlation, although a trainable query makes the analysis more difficult. In particular, when the query norm is large, the update of the query is relatively negligible; hence, the training behaviour is similar to having a fixed query.
To verify our claims, we reimplemented the experiments introduced in Section 5.1 with the same configurations except that the query is trainable. In Fig 10, we plot the empirical and the theoretical SEN curves of a randomly picked topic word obtained by training Attn-FC, Attn-TC and Attn-TL, each with a trainable query. We initialize the entries of the query vector by a normal distribution N(0, σ²). From left to right, σ² increases and so does the initial query norm. We observe that, regardless of how the query is initialized, the positive SEN relationship is always preserved. Moreover, as the initial norm increases, the empirical curve approaches the theoretical curve given by Eq (10). As we have discussed, this asymptotic approach happens because an increasing initial norm of the query makes its change during training negligible compared to its already large norm.
C PROOFS OF THE RESULTS IN SECTION 4
Proof of Lemma 1. Assume there are in total N words in the dictionary (including both the topic and the non-topic words). Sample the keys of the N words arbitrarily to generate K ∈ R^{d′×N} with the key vectors as its columns. Randomly pick q ∈ R^{d′} as a query. To prove the lemma, it is sufficient to show that for any non-zero q̃ ∈ R^{d′}, there exists K̃ ∈ R^{d′×N} such that q^T K = q̃^T K̃. Since q̃ ≠ 0, without loss of generality assume its first entry q̃_1 is non-zero. Let S = [s_1, s_2, · · ·, s_N] = q^T K. For i = 1, 2, · · ·, N, let the i-th column of K̃ be [s_i/q̃_1, 0, · · ·, 0]^T. Then we can easily check that q^T K = q̃^T K̃.
Proof of Lemma 2. Picking a sufficiently small η, a continuous time limit can be taken to obtain the dynamics
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}, \tag{11}$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}, \tag{12}$$
which are equivalent to
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \tag{13}$$
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}. \tag{14}$$
As h(v̄(χ); y) is assumed Lipschitz continuous, we have⁴
$$\left| \langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} - \left\langle (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \right| < \frac{L\sqrt{d}\,\sigma^2}{4}, \tag{15}$$
where L is the Lipschitz constant. Choosing a small enough σ² so that L√d σ² is close to zero, we have
$$\langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \approx \left\langle (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}.$$
Then, combining it with Eq (14) yields
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \approx \langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T\, \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} = \langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T\, \frac{dv_t}{dt}. \tag{16}$$
⁴The derivation is given in Appendix D.
Remarkably, the relation stated in Eq (16) is independent of h(v̄(χ); y). So the relationship does not depend on the architecture of the classifier (or later layers in general). Expanding Eq (16),
$$\frac{ds_t}{dt} = \langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T \frac{dv_t}{dt} = \left\langle v_t - \frac{\exp(s_t)}{Z(\chi)}v_t - \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)}v_w \right\rangle_{\Psi_t}^T \frac{dv_t}{dt} = \left\langle \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)(v_t - v_w)}{Z(\chi)} \right\rangle_{\Psi_t}^T \frac{dv_t}{dt}$$
$$= \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi)} \right\rangle_{\Psi_t} \langle v_t - v_w\rangle_{\Psi_t}^T \frac{dv_t}{dt} \approx \sum_{w\in\chi\setminus\{t\}} \frac{\langle \exp(s_w)/Z(\chi\setminus t)\rangle_{\Psi_t}}{\langle Z(\chi)/Z(\chi\setminus t)\rangle_{\Psi_t}}\, \langle v_t - v_w\rangle_{\Psi_t}^T \frac{dv_t}{dt}.$$
The second last step is due to the independence between the score and embedding initialization, while the approximation in the last step is made as we assume all the scores of the non-topic words remain the same during the entire training process. Rearranging the equation yields
$$\frac{ds_t}{dt} = \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \langle v_t - v_w\rangle_{\Psi_t}^T \frac{dv_t}{dt} \left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}^{-1} = \left(v_t - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right)^T \frac{dv_t}{dt} \left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}^{-1},$$
where $\bar{v}(\chi\setminus t) = \sum_{w\in\chi\setminus\{t\}} v_w\,\frac{\exp(s_w)}{Z(\chi\setminus t)}$.

Proof of Theorem 1. By Lemma 2, we have
$$\left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \frac{ds_t}{dt} = \left(v_t - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right)^T \frac{dv_t}{dt},$$
which is
$$\left(1 + \exp(s_t) \left\langle \frac{1}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}\right) \frac{ds_t}{dt} = \left(v_t - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right)^T \frac{dv_t}{dt}. \tag{17}$$
By the fundamental theorem of calculus, integrating both sides from t = t_0 to t_1 yields
$$\left[s_t + \exp(s_t) \left\langle \frac{1}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}\right]_{t_0}^{t_1} = \left[\frac{1}{2}\left\|v_t - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right\|_2^2\right]_{t_0}^{t_1}. \tag{18}$$
Proof of Corollary 1. Since the scores and the embeddings of the non-topic words are considered constant, we have $\langle \frac{1}{Z(\chi\setminus t)}\rangle_{\Psi_t} = \frac{1}{m}$ and $\langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t} = 0$. As $v_t$ is initialized with mean zero and a very small variance, $\|v_t(0)\|_2^2 \approx 0$. Then, Eq (9) can be written as
$$\|v_t(t)\|_2 = \sqrt{2\left(s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m}\right)}.$$
Proof of Theorem 2 (sketch). Without loss of generality, pick a topic word t and assume it corresponds to the ϕ-th topic. We prove the theorem by showing that, as the number of epochs increases, for any sentence χ in Ψ_t, s_t → ∞ and softmax(U^T v̄(χ)) → e_ϕ, where e_ϕ is the one-hot vector whose ϕ-th entry equals one and whose other entries are zero. Let x = U^T v̄(χ). Notice that the loss function ⟨− log(softmax_ϕ(x))⟩_{Ψ_t} is convex in x. As U is fixed, the loss function is also convex in v̄(χ). This implies that, if the model is optimized by gradient descent, the gradient will lead v̄(χ) to its optimal solution v̄*(χ). In our case, as the columns of U are linearly independent, there exists a vector n that is orthogonal to all the columns of U except U_ϕ. Without loss of generality, assume U_ϕ · n > 0 (otherwise, choose its inverse). Then a potential optimal solution is v̄*(χ) = λn with λ going to infinity, since v̄*(χ) · U_i = 0 for i ≠ ϕ and v̄*(χ) · U_ϕ → ∞, which implies softmax(U^T v̄*(χ)) = e_ϕ. Combined with the fact that there cannot be an optimal solution v**(χ) of finite norm such that softmax(U^T v̄**(χ)) = e_ϕ, the gradient must lead v̄(χ) to an optimal solution that is arbitrarily far away from the origin. This also applies to v_t, as it receives the same gradient up to a multiple, according to Eq (7). As ||v_t||_2^2 increases unboundedly, by Theorem 1 the score s_t → ∞ as well. So we have softmax(U^T v̄(χ)) → softmax(U^T v_t) → e_ϕ and thus the cross-entropy loss drops to zero.
D THE DERIVATION OF EQ (15)
For a vector u, let u^{(ϕ)} denote its ϕ-th entry. As we ignore the changes of the scores and the embeddings of non-topic words, their distributions remain the same as the initial ones. In particular, the scores of the non-topic words are always zero. So, for ϕ = 1, 2, · · ·, d,
$$\mathrm{var}\!\left(\bar{v}^{(\phi)}(\chi)\right) = \mathrm{var}\!\left(\frac{\exp(s_t)}{Z(\chi)}\,v_t^{(\phi)} + \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)}\,v_w^{(\phi)}\right) = \sum_{w\in\chi\setminus\{t\}} \left(\frac{\exp(s_w)}{Z(\chi)}\right)^2 \frac{\sigma^2}{d} = \frac{m\sigma^2}{(Z(\chi))^2 d}.$$
Since h(v̄(χ); y) is assumed Lipschitz continuous in v̄(χ), there exists L ∈ R such that for ϕ = 1, 2, · · ·, d,
$$\left|h(\bar{v}(\chi_1); y)^{(\phi)} - h(\bar{v}(\chi_2); y)^{(\phi)}\right| \le L\,\|\bar{v}(\chi_1) - \bar{v}(\chi_2)\|_1,$$
where v̄(χ₁), v̄(χ₂) ∈ R^d and || · ||₁ denotes the l1-distance obtained by taking the sum of the absolute values of the entry differences on each dimension. So we also have
$$\left|\frac{1}{L} h(\bar{v}(\chi_1); y)^{(\phi)} - \frac{1}{L} h(\bar{v}(\chi_2); y)^{(\phi)}\right| \le \|\bar{v}(\chi_1) - \bar{v}(\chi_2)\|_1.$$
According to the work of Bobkov & Houdré (1996), for ϕ = 1, 2, · · ·, d,
$$\mathrm{var}\!\left(L^{-1} h(\bar{v}(\chi); y)^{(\phi)}\right) \le \sum_{\phi=1}^{d} \mathrm{var}\!\left(\bar{v}^{(\phi)}(\chi)\right) = \frac{m\sigma^2}{(Z(\chi))^2},$$
which is
$$\mathrm{var}\!\left(h(\bar{v}(\chi); y)^{(\phi)}\right) \le \frac{m\sigma^2 L^2}{(Z(\chi))^2}.$$
Then, the Cauchy–Schwarz inequality implies, for ϕ = 1, 2, · · ·, d,
$$\left|\mathrm{cov}\!\left(h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)},\; v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi)\right)\right| = \left|\mathrm{cov}\!\left(h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)},\; \bar{v}^{(\phi)}(\chi)\right)\right|$$
$$\le \left[\mathrm{var}\!\left(h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)}\right)\right]^{1/2} \left[\mathrm{var}\!\left(\bar{v}^{(\phi)}(\chi)\right)\right]^{1/2} < \frac{m\sigma^2 L \exp(s_t)}{(Z(\chi))^3 \sqrt{d}}. \tag{19}$$
By the triangle inequality,
$$\left| \langle v_t - \bar{v}(\chi)\rangle_{\Psi_t}^T \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} - \left\langle (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \right|$$
$$= \left| \sum_{\phi=1}^{d} \left[ \left\langle v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \right\rangle_{\Psi_t} \left\langle h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} - \left\langle (v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi))\, h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \right] \right|$$
$$\le \sum_{\phi=1}^{d} \left| \left\langle v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \right\rangle_{\Psi_t} \left\langle h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} - \left\langle (v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi))\, h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \right|$$
$$= \sum_{\phi=1}^{d} \left|\mathrm{cov}\!\left(h(\bar{v}(\chi); y)^{(\phi)}\,\frac{\exp(s_t)}{Z(\chi)},\; v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi)\right)\right| < \frac{m\sigma^2 L \sqrt{d}\,\exp(s_t)}{(Z(\chi))^3} \quad \text{(by Eq (19))}$$
$$= \frac{m}{Z(\chi)} \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\,\sigma^2}{Z(\chi)} = \left(1 - \frac{\exp(s_t)}{Z(\chi)}\right) \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\,\sigma^2}{Z(\chi)} \le \frac{L\sqrt{d}\,\sigma^2}{4 Z(\chi)} \le \frac{L\sqrt{d}\,\sigma^2}{4}.$$
The last line is due to $Z(\chi) > \sum_{w\in\chi\setminus\{t\}} \exp(s_w) = m \ge 1$.

1. What is the focus of the paper regarding attention dynamics in topic modeling?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and experimental results?
3. How does the reviewer assess the applicability and novelty of the paper's contributions?
4. Are there any concerns regarding the paper's organization, notation, and clarity?
5. What is the suitability of the paper for different venues, such as ICLR or workshops on topic modeling or attention-based models?

Review
This paper studies the dynamics of attention, over the course of training, for a specific model in a simplified topic-modeling task: the context vector is the sum, over the words in a sentence, of their embeddings weighted by the normalized exponential of the dot product of their key embedding with a global query vector. Due to the simplification of the topic-modeling problem (two disjoint sets of words: topic vs. non-topic), the embeddings of the non-topic words are considered fixed over the course of training for the theoretical analysis. The applicability of the theoretical result is close to zero, and it is a somewhat known property (e.g. in word2vec, Mikolov et al. 2013). The experimental results include two parts. One is on a tiny synthetic dataset that matches the simplified topic-modeling problem and serves as illustration. The other is on SST2 and SST5 (movie comments and ratings, sentiment analysis), where the results are poor (obviously, as the model is simple), e.g. yielding 79.59% on SST2 while the SOTA is 97.4% and BERT base is at 91.2%. The analysis is interesting, but does not lead to new insights.
For a simple analysis, the paper is at times hard to follow, and could benefit from more structure (presenting "what" before "how") and better notation (e.g. ν, v, and v̄ are all attached to (forms of) the context vector).
Overall, the contribution does not seem sufficient for inclusion at ICLR. The paper could be a good fit for a workshop on topic modeling or attention-based models.
ICLR | Title
On the Dynamics of Training Attention Models
Abstract
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great success established empirically, the working mechanism of attention has not been investigated at a sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for the discriminative words that the model should attend to, a persisting identity exists relating its embedding and the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed, which validate our theoretical analysis and provide further insights.
1 INTRODUCTION
Attention-based neural networks have been broadly adopted in many natural language models for machine translation (Bahdanau et al., 2014; Luong et al., 2015), sentiment classification (Wang et al., 2016), image caption generation (Xu et al., 2015), and unsupervised representation learning (Devlin et al., 2019), etc. In particular, attention is the key ingredient of the powerful transformers (Vaswani et al., 2017).
Despite its great successes established empirically, the working mechanism of attention has not been well understood (see Section 2). This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism. We study the model’s training trajectory to understand why attention can attend to the discriminative words (referred to as the topic words). More specifically, in this task, each sentence is treated as a bag of words, and its class label, or topic, is indicated by a topic word. The model we consider involves a basic attention mechanism, which creates weighting factors to combine the word embedding vectors into a “context vector”; the context vector is then passed to a classifier.
In this setting, we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query, referred to as the “score”, during gradient-descent training. It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration. This relationship suggests the existence of a “synergy” in the amplification of the topic word score and its word embedding; that is, the growths of the two quantities promote each other. This, in turn, allows the topic word embedding to stand out rapidly in the context vector during training. Moreover, when the model takes a fixed linear classifier, this relationship allows rigorous proofs of this “mutual promotion” phenomenon and the convergence of training to the topic words.
Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets. Additional insights are also obtained from these experiments. For example, low-capacity classifiers tend to give stronger training signals to the attention module. The “mutual promotion” effect implied by the discovered relationship can also exhibit itself as “mutual suppression” in the early training phase. Furthermore, in the real-world datasets, where perfect
delimitation of topic and non-topic words does not exist, interesting training dynamics is observed. Due to length constraints, all proofs are presented in Appendix.
2 RELATED WORKS
Since 2019, a series of works have been published to understand the working and behaviour of attention. One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations (Michel et al., 2019; Voita et al., 2019; Jain & Wallace, 2019; Wiegreffe & Pinter, 2019; Serrano & Smith, 2020; Vashishth et al., 2020). Most of these works are empirical in nature, for example, analyzing the behaviours of a well-trained attention-based model (Clark et al., 2019), observing the impact of altering the output weights of the attention module or pruning a few heads (Michel et al., 2019; Voita et al., 2019), or a combination of them (Jain & Wallace, 2019; Vashishth et al., 2020). Apart from acquiring insights from experiments, Brunner et al. (2019) and Hahn (2020) show theoretically that self-attention blocks lack identifiability, where multiple weight configurations may give equally good end predictions. The non-uniqueness of the attention weights therefore makes the architecture lack interpretability.
As a fully connected neural network with infinite width can be seen as a Gaussian process (Lee et al., 2018), a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers (Yang, 2019; Hron et al., 2020). In this paper, we restrict our study to the more realistic non-asymptotic regime.
3 PROBLEM SETUP
Learning Task To obtain insights into the training dynamics of attention models, we set up a simple topic classification task. Each input sentence contains m non-topic words and one topic word indicating its topic. Note that a topic may have multiple topic words, but a sentence is assumed to include only one of them. Assume that there are J topics that correspond to the mutually exclusive topic word sets T1, T2, · · · , TJ. Let $T = \bigcup_{j=1}^{J} T_j$ be the set of all topic words. The non-topic words are drawn from a dictionary Θ, which is assumed not to contain any topic word.
The training set Ψ consists of sentence-topic pairs, where each pair (χ, y) is generated by (1) randomly pick a topic y ∈ {1, 2, · · · , J} (2) pick a topic word from set Ty and combine it with m words drawn uniformly at random from Θ to generate the sentence (or the bag of words) χ. In this task, one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way.
We will consider the case that |Θ| >> |T|. Because each sentence contains exactly one topic word drawn from the small set T, while its m non-topic words are drawn from the much larger dictionary Θ, each individual topic word then appears much more frequently in the sentences than any individual non-topic word.
Attention Model For this task, we consider a simple attention mechanism similar to the one proposed by Wang et al. (2016). Each word w is associated with two parameters: an embedding νw ∈ Rd and a key κw ∈ Rd ′ . Based on a global query q ∈ Rd′ , the context vector of sentence χ is computed by
$$\bar{\nu}(\chi) = \sum_{w\in\chi} \nu_w\,\frac{\exp(q^T\kappa_w)}{Z(\chi)}, \qquad \text{where } Z(\chi) = \sum_{w'\in\chi} \exp(q^T\kappa_{w'}).$$
Then ν̄(χ) is fed into a classifier that predicts the sentence's topic in terms of a distribution over all topics.1
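For concreteness, a minimal PyTorch sketch of this model (word embeddings and keys, a fixed global query, and a linear classifier on the context vector) might look as follows. The class name, sizes, and initialization are our own assumptions; in particular, the paper's Attn-FC variant additionally keeps the classifier matrix U fixed with orthonormal columns.

```python
import torch
import torch.nn as nn

class SimpleAttnClassifier(nn.Module):
    def __init__(self, vocab_size, d=15, d_key=15, num_topics=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)        # word embeddings nu_w
        self.key = nn.Embedding(vocab_size, d_key)      # word keys kappa_w
        self.register_buffer("query", torch.randn(d_key))  # fixed global query q
        self.classifier = nn.Linear(d, num_topics, bias=False)  # U^T v (softmax in the loss)

    def forward(self, sentences):
        # sentences: (batch, m+1) word indices
        emb = self.embed(sentences)                     # (batch, m+1, d)
        scores = self.key(sentences) @ self.query       # (batch, m+1), s_w = q^T kappa_w
        weights = torch.softmax(scores, dim=-1)         # exp(s_w) / Z(chi)
        context = (weights.unsqueeze(-1) * emb).sum(dim=1)  # context vector
        return self.classifier(context)                 # logits over topics

model = SimpleAttnClassifier(vocab_size=5008)
logits = model(torch.randint(0, 5008, (2, 21)))         # 2 sentences, 21 words each
print(logits.shape)  # torch.Size([2, 4])
```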
Denote the loss function by l(χ, y). Our upcoming analysis implies this attention model, although simple, may capture plenty of insight in understanding the training of more general attention models.
Problem Statement Our objective is to investigate the training dynamics, under gradient descent, of this attention model. In particular, we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training. Moreover, we wish to investigate, beyond this setup, how the model is optimized when there is no clear delimitation between topic and non-topic words, as in real-world data.
1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 but this condition is not required for most results presented in Sections 4 and 5. More discussions are given in Appendix A in this regard.
4 THEORETICAL ANALYSIS
It is common to fix some parameters when we train a model with limited resources. Moreover, we have the following.
Lemma 1. Assume q ≠ 0 when initialized. Fixing it does not affect the attention block's capacity.
Thus, our upcoming discussion focuses on the case in which the query is fixed. Doing so also allows us to establish a closed-form expression connecting the word’s embedding and the inner product of its key and the query. In Appendix B, extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present.
For a topic word t, let Ψt denote the training samples involving it. Then, by gradient descent,
$$\Delta\nu_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} \nabla_{\bar{\nu}(\chi)} l(\chi, y)\,\frac{\exp(q^T\kappa_t)}{Z(\chi)}, \tag{1}$$
$$\Delta\kappa_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} q\,(\nu_t - \bar{\nu}(\chi))^T\, \nabla_{\bar{\nu}(\chi)} l(\chi, y)\,\frac{\exp(q^T\kappa_t)}{Z(\chi)}, \tag{2}$$
where τ denotes the learning rate. As it will turn out, an important quantity in this setting is the inner product q^T κ_w of the query q and the key κ_w, which we denote by s_w and refer to as the score of the word w.
Denoting $v_w = \|q\|_2\,\nu_w$, $\eta = \tau\|q\|_2$, $\bar{v}(\chi) = \sum_{w\in\chi} \frac{\exp(s_w)}{Z(\chi)}\,v_w$, and $h(\bar{v}(\chi); y) = \nabla_{\bar{\nu}(\chi)} l(\chi, y)$, for a topic word t the dynamics simplifies to
$$\Delta v_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}, \tag{3}$$
$$\Delta s_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}. \tag{4}$$
In the rest of the paper, whenever we refer to the embedding of word t, we actually mean vt not νt.
Our analysis assumes the word embeddings are sampled i.i.d. from a distribution with mean zero and variance σ²/d, where σ² is assumed close to zero. The word keys and the query are also sampled from zero-mean distributions with a possibly different variance. We assume that this variance is so small that the initial word scores are approximately zero. This assumption of the initial configurations corresponds to the attention model starting as a word-averaging model, and allows us to investigate how the model deviates from this initial setting with training. We also assume the derivative h(v̄(χ); y) of the loss l is Lipschitz continuous in v̄(χ) throughout training. Further, the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that, with a sufficient number of training samples, the occurrence rate of a topic word is significantly higher than that of the non-topic ones. This then justifies the following assumption, which we will use throughout our analysis.
Assumption 1. The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words.
Hence, our upcoming analysis will treat the scores and embeddings of the non-topic words as constants. Assumption 1 will be validated by experimental results presented in Section 5.
By selecting a sufficiently small η, we can take the gradient-descent updates in Eq (3) and Eq (4) to their continuous-time limit and get²
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}, \tag{5}$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^T h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)}. \tag{6}$$
²Reversely, Eq (3) is a discretized approximation of Eq (5): $v_t(t+1) - v_t(t) = \int_t^{t+1} \frac{dv_t(t')}{dt'}\,dt' \approx 1 \cdot \frac{dv_t(t)}{dt} = \Delta v_t(t)$. The approximation becomes accurate if $v_t(t+1)$ is close to $v_t(t)$, which can be achieved by choosing a sufficiently small η. Likewise, Eq (4) is a discretized approximation of Eq (6).
We can then characterize the update of the score and the embedding of a topic word as a continuous-time dynamical system, as stated in Lemma 2. The same technique has been used to analyze the training of neural networks in other contexts (Saxe et al., 2014; Greydanus et al., 2019).
Lemma 2. For sufficiently small η and σ², the score s_t and embedding v_t of topic word t satisfy
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\,\frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \tag{7}$$
$$\frac{ds_t}{dt} = \left[\left(v_t - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right)^T \frac{dv_t}{dt}\right] \left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}^{-1}, \tag{8}$$
where $Z(\chi\setminus t) = \sum_{w\in\chi\setminus\{t\}} \exp(s_w)$, $\bar{v}(\chi\setminus t) = \sum_{w\in\chi\setminus\{t\}} v_w\,\frac{\exp(s_w)}{Z(\chi\setminus t)}$, and ⟨ · ⟩_{Ψt} denotes taking the sample mean over the set Ψt.
Eq (7) implies the speed of moving v_t along the direction of ⟨h(v̄(χ); y)⟩_{Ψt} is controlled by the attention weight exp(s_t)/Z(χ). Eq (8) shows that s_t increases if and only if v_t has a greater projection on ⟨h(v̄(χ); y)⟩_{Ψt} than the weighted average of the non-topic word counterparts. Consider a simplified case where ⟨h(v̄(χ); y)⟩_{Ψt} is fixed. Since the change of v_t is much faster than the non-topic word counterparts, v_t will have a larger projection on ⟨h(v̄(χ); y)⟩_{Ψt} after a few epochs of training. Then s_t increases as well as its attention weight, which in turn speeds up the extension of the embedding v_t. This observation reveals a mutual enhancement effect between the score increment and the embedding elongation. In fact such an effect exists in general, as stated in the theorem below, irrespective of whether ⟨h(v̄(χ); y)⟩_{Ψt} is fixed.
Theorem 1. In the setting of Lemma 2, from epoch t_0 to t_1, the topic word score s_t and its embedding v_t satisfy
$$\left[s_t(t) + \exp(s_t(t)) \left\langle \frac{1}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}\right]_{t_0}^{t_1} = \left[\frac{1}{2}\left\|v_t(t) - \langle\bar{v}(\chi\setminus t)\rangle_{\Psi_t}\right\|_2^2\right]_{t_0}^{t_1}. \tag{9}$$
Following from Lemma 2, this theorem implies a positive relationship between the topic word score and the distance between vt and the non-topic word embedding average 〈v̄(χ \ t)〉Ψt . Remarkably this result makes no reference to 〈h(v̄(χ); y)〉Ψt , hence independent of it. This implies the identity in Eq (9) holds irrespective of the choice and setting of the classifier. Theorem 1 further implies a score and embedding norm (“SEN” in short) relationship for the topic words:
Corollary 1. In the context of Theorem 1, by setting t_0 = 0 and t_1 = t, Eq (9) is reduced to
$$\|v_t(t)\|_2 = \sqrt{2\left(s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m}\right)}. \tag{10}$$
The corollary indicates that ||vt(t)||2 is monotonically increasing with st(t). So, st increases if and only if the point vt departs from its initial location. That is, if the norm of the topic word embedding increases, it will be attended to. This result is independent of the configuration of all other network layers. Thus, if 〈h(v̄(χ); y)〉Ψt has a gradient field that pushes vt away from its original location, the topic word is expected to be attended to. This statement can be made precise, as in Theorem 2, when the model uses a linear classifier.
Theorem 2. Assume the model has a fixed classifier in the form c(v̄(χ)) = softmax(U^T v̄(χ)), where the columns of U are linearly independent, and the model is trained using gradient descent with the cross-entropy loss. As training proceeds, the model will attend to the topic word in every input sentence and have its training loss approach zero.
It is notable that the theorem holds broadly for any arbitrary fixed linear classifier (subject to the mild linear independence constraint on its parameter U). Additionally, we anticipate that this result holds for a much wider family of classifiers, including trainable and even nonlinear ones. But a rigorous proof appears difficult to obtain in such settings, and we will corroborate this claim in an experimental study in Section 5.
To sum up, in this section, we have shown two main results: (a) there is a closed-form positive relationship, the SEN relationship, between the topic word score and its embedding norm, which is independent of the configuration of the classifier. (b) the model, equipped with a fixed linear classifier stated in Theorem 2, can be trained to have all topic words attended to.
5 EXPERIMENTAL STUDY
In this section, we first test our model on an artificial dataset generated through the procedure introduced in Section 3. The test corroborates our theoretical results and validates their assumptions. Our test results suggest that the attention mechanism introduces a synergy between the embedding and the score of topic words.
Another experiment is performed on the real datasets SST2 and SST5 (Socher et al., 2013). The experiment results suggest that the SEN relationship of topic words holds at least in initial training stages. As training proceeds, some words appear to deviate from the theoretical trajectories. Further analysis of this behaviour provides additional insights into the attention model’s training dynamics on real-world datasets, often possessing a much more complex structure as well as rich noise. We performed all experiments using PyTorch (Paszke et al., 2017).
We performed our experiments on three models, Attn-FC, Attn-TC and Attn-TL, having the same attention block but different classifiers. The first two have the classifier in the form c(v̄(χ)) = softmax(U^T v̄(χ)) and the last in the form c(v̄(χ)) = softmax(U_2^T ReLU(U_1^T v̄(χ) + b_1) + b_2). Except that the U in Attn-FC is fixed, the other parameters of the three models are trainable and optimized using the cross-entropy loss.
Since a real-world dataset does not have a topic word as the sentence topic indicator, we introduce a word "topic purity" measurement to facilitate our discussion on the experiments performed on SST2. Let δ⁺(w) and δ⁻(w) respectively denote the portions of the positive and negative sentences among all training samples containing word w. Then the topic purity of w is δ(w) = |δ⁺(w) − δ⁻(w)|. If δ(w) = 1, w is either a purely positive or a purely negative topic word. If δ(w) = 0, then δ⁺(w) = δ⁻(w) = 0.5, which implies w has a completely random topic correspondence.
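As a concrete reference, topic purity could be computed as in the following sketch; the function and variable names are ours.

```python
from collections import defaultdict

def topic_purity(samples):
    """samples: iterable of (list_of_words, label) with label in {+1, -1}.
    Returns delta(w) = |delta_plus(w) - delta_minus(w)| for every word seen."""
    pos, total = defaultdict(int), defaultdict(int)
    for words, label in samples:
        for w in set(words):          # count each containing sample once
            total[w] += 1
            if label == +1:
                pos[w] += 1
    # |delta_plus - delta_minus| = |2 * pos/total - 1|
    return {w: abs(2.0 * pos[w] / total[w] - 1.0) for w in total}

train = [(["a", "great", "movie"], +1), (["a", "dull", "movie"], -1)]
purity = topic_purity(train)
print(purity["great"], purity["movie"])  # 1.0 0.0
```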
5.1 EXPERIMENTS ON SYNTHETIC DATASETS
The artificial dataset, consisting of 800 training and 200 test samples, is generated through the procedure introduced in Section 3. The dataset has four topics, and each contains two topic words. There are 20 non-topic words per sentence and the non-topic word dictionary size is M = 5,000.
Our experiments use the same embedding dimension 15 for all three models. Regarding the classifiers, Attn-FC and Attn-TC adopt U ∈ R^{15×4}, while Attn-TL takes U_1 ∈ R^{15×10}, b_1 ∈ R^{10}, U_2 ∈ R^{10×4} and b_2 ∈ R^4. For the validation of Theorem 2, the columns of U in Attn-FC are set to be orthonormal and thus linearly independent. Unless otherwise stated, the scores are set to zero and the embeddings are initialized by a normal distribution with mean zero and variance σ²/d = 10⁻⁶. We trained the models using gradient descent with learning rate η = 0.1 for 5K epochs before measuring their prediction accuracy on the test samples. When training is completed, all three models achieve a training loss close to zero and 100.0% test accuracy, which implies the trained models perfectly explain the training set's variations and generalize well to the test set.
Verification of Assumption 1 and validation of Corollary 1 and Theorem 2. We repeated the experiments for five runs and plotted the empirical score distributions of the non-topic and topic words of the three well-trained models with their 95% confidence intervals in the first two graphs of Fig 1. Compared to the topic words, the scores of the non-topic words are nearly unchanged throughout the entire training process. Likewise, the next two plots show the embedding norms of the non-topic words are nearly constant, too. This implies Assumption 1 indeed holds. Fig 2 plots the empirical and the theoretical SEN curves of a randomly picked topic word for the three models, where the theoretical curve has the expression stated in Eq (10). The coincidence of the empirical and the theoretical curves in all three models validates the SEN relationship stated in Eq (10) and
its independence of the later layers.³ Moreover, Fig 1 (left two) shows the scores of the topic words exceed the non-topic word counterparts by two orders of magnitude, which implies the topic words are attended to in a well-trained model. As we have reported earlier, Attn-FC has a training loss of roughly zero when training is completed, so Theorem 2 is confirmed.
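For reference, the theoretical curve of Eq (10) used in this comparison can be evaluated with a small helper such as the one below; the function name is ours, and m = 20 matches the number of non-topic words per sentence in this synthetic setup.

```python
import numpy as np

def theoretical_sen_norm(s, m=20):
    """Embedding norm predicted by Eq (10) as a function of the topic word score s,
    with m non-topic words per sentence."""
    return np.sqrt(2.0 * (s + np.exp(s) / m - 1.0 / m))

scores = np.linspace(0.0, 6.0, 7)
print(np.round(theoretical_sen_norm(scores), 2))  # 0.0 at s = 0, growing with s
```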
Lower-capacity classifiers result in stronger attention effects. The comparison, among the SEN distributions of the topic words in the three models, implies that a lower-capacity classifier leads to greater topic word SEN, which means a more drastic attention decision. This happens because the classifier of a larger capacity can explain more variations of the sample distribution and has more freedom to accommodate and absorb the correcting gradient signals. As a result, the attention layer receives a weaker gradient on average, which makes the embeddings of the topic word extend less from the original point. Thus, as implied by Eq (10), the magnitudes of the scores of the topic words are dampened, and therefore a weaker attention effect will be expected. This observation hints that if we know attending to the right words can explain most of the variations in sample distributions, we should consider a low capacity classifier (or later layers in general). Alternatively, we may also freeze the classifier in the initial stage of a training process, forcing the attention layer to explain more variations. Remarkably, all these modifications do not affect the relationship stated in Eq (10).
Synergy between score growth and embedding elongation in topic words. Eq (10) implies a positive SEN relationship of the topic word. That is, a larger topic word embedding norm results in a larger score (and thus a larger attention weight), which in turn makes the embedding extend faster. To corroborate this claim, we performed an ablation study by considering two variants of Attn-TC. The first has the scores frozen (referred to as Attn-TC-KF) and the second has the embeddings fixed (referred to as Attn-TC-EF). In this experiment, the embeddings of the models are initialized by a normal distribution of mean zero and variance σ²/d = 0.1. We trained all three models by gradient descent for 60K epochs with learning rate η = 0.1, which is sufficient for all three models to fully converge. All three trained models reached 100% test accuracy.
³In Appendix B, the experiments with trainable queries are also implemented. The results indicate that the trainability of the query does not affect the positive SEN relationship. Besides, the query-fixed model has very similar training dynamics to the one with a trainable query and a large initial norm.
The first three graphs of Fig 3 describe the evolution of the three models in the first 3K training epochs. For a randomly picked topic word, the first plot shows its score in Attn-TC grows faster than the one in Attn-TC-EF. Note that the score in Attn-TC-EF finally surpasses the one in Attn-TC because Attn-TC has converged at around 1K epochs. Likewise, the word embedding norm in Attn-TC increases more rapidly than the one in Attn-TC-KF before Attn-TC converges. The observations imply the attention introduces a mutual enhancement effect on training the topic word’s score and its embedding, which makes Attn-TC enjoy the fastest training loss drop as shown in the third plot.
The mutual enhancement could become the mutual diminution in the early training stage if the initial embedding of the topic word has a negative projection on the direction that it will move along. This effect can be precisely characterized by Eq (8). Assume the embedding of a topic word is initialized to have a smaller projection on the gradient, passed from the classifier, than the average of the non-topic words. The reversed order of the projections makes the score of the topic word decrease as its embedding has a "negative effect" on the training loss compared to the average of the non-topic word embeddings. This will, in turn, impede the elongation of the embedding vector or even make it shrink (see the last two plots of Fig 3). The "negative effect" cannot last long because the topic word embedding moves along the gradient much faster than the non-topic words due to its high occurrence rate in the training samples. By Eq (8) again, ds_t/dt will finally become positive. That is, the score of the topic word starts to increase and its attention weight will surpass the one of the word-averaging model (see the second last plot of Fig 3). Then, we start to observe the mutual enhancement effect, which is indicated in the last plot: the increase speed of the Attn-TC's embedding norm exceeds the Attn-TC-KF's since around the 370-th epoch.
5.2 EXPERIMENTS ON SST2 AND SST5
The second part of the experiment is performed on the datasets SST2 and SST5, which contain movie comments and ratings (positive or negative in SST2 and one to five stars in SST5). For simplicity, we limit our discussion to Attn-FC and Attn-TC using the same configurations as in our previous experiments, except that the embedding dimension is set to 200. Note that our goal is not to find a state-of-the-art algorithm but to verify our theoretical results and further investigate how an attention-based network works.
For both SST2 and SST5, we trained the two models by gradient descent with learning rate η = 0.1, combined with the early stopping technique (Prechelt, 2012) with patience 100. As PyTorch requires equal-length sentences in a batch, we pad all the sentences to the same length and set the score of the padding symbol to negative infinity. Under this configuration, the trained Attn-FC and Attn-TC reached 76.68% and 79.59% test accuracy on SST2 and 38.49% and 40.53% on SST5.
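Setting the padding score to negative infinity is equivalent to masking before the softmax. A minimal sketch of this step is given below; the PAD index and function name are assumptions for illustration.

```python
import torch

PAD = 0  # assumed index of the padding symbol

def masked_attention_weights(scores, tokens):
    """scores: (batch, seq_len) raw scores s_w; tokens: (batch, seq_len) word ids.
    Padding positions get -inf so their softmax weight is exactly zero."""
    scores = scores.masked_fill(tokens == PAD, float("-inf"))
    return torch.softmax(scores, dim=-1)

tokens = torch.tensor([[5, 7, PAD, PAD]])
scores = torch.zeros(1, 4)
print(masked_attention_weights(scores, tokens))  # weights: 0.5, 0.5, 0, 0
```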
Validation of Corollary 1. As the true topic words are unknown in a real dataset, we checked the words of the largest fifty scores after the training is completed. We observed that most of the words have their SEN curves close to our theoretical prediction. We picked two words for each model-dataset combination and plotted the curves with their theoretical counterparts in Fig 4.
The competition of two topic word candidates of various occurrence rates and topic purity. To better understand how the attention block works, we investigated the case when the empirical and the theoretical SEN curves disagree. We limit our discussion to the model trained on SST2.
For both Attn-FC and Attn-TC, we noticed that there are mainly two types of deviations, as shown in the first two columns of Fig 5. In the first one, the theoretical curves overestimate the score growth of powerful in terms of the embedding norm, while best, shown in the second column, experiences a score drop combined with a failed theoretical prediction in the final training stage. These two types of disagreement in fact occur in pairs, caused by a pair of words in which one frequently appears in the training samples but has a low topic purity, while the other has the opposite.
Regarding the (powerful, best) pair, best appears in 128 training samples while 103 of them are positive. In comparison, powerful appears in the 36 training samples, and all of them are positive. The large difference in the number of samples makes the embedding norm of best extend much faster than powerful in the initial training process (Fig 5, third column). As the positive SEN relationship implies, a quicker embedding norm increase of best results in its faster score growth. Therefore, best will be more attended to ( Fig 5, last column), which thus further accelerates its SEN increase (this is the synergy relationship between the embedding and score that has been demonstrated in the ‘ablation’ study). This process does not stop until the gradient from the classifier diminishes because of its low topic purity (which is similar to the case when the label smoothing (Szegedy et al., 2016) is applied for alleviating the overfitting problem). In contrast, powerful is less trained initially due to its low occurrence rate. But its high topic purity makes the direction of the gradient stable, in which its embedding will steadily elongate. The elongation will, at last, let the embedding have a greater projection on the gradient vector than the average of the other words appearing in the same sentence. Thus, the score of powerful starts to increase as shown by Eq (8) and plotted in the second last column of Fig 5. In contrast, as the gradient magnitude drops, the embedding of best will extend in a decreasing speed; and its projection on the gradient, passed from the classifier, will finally be surpassed by the words co-occurring with it but having a higher topic purity (like powerful). Thus, its score starts to drop eventually.
The dynamics of topic purity of attended words The analysis of the inconsistency between the empirical and theoretical SEN curves hints that the disagreements are strongly related to the word's topic purity and its occurrence rate. To better characterize their dynamics, for a word w, let Aw(t)
be the list, at epoch t, that records the attended word (or of the largest score) in the sentences containing w. If multiple words in a sentence have the same score, randomly pick one of them. Note that Aw(t) may contain repeating words as a word could be attended in multiple sentences. We selected w to be the words having the largest five scores in the well-trained Attn-FC and Attn-TC, respectively. At various epoch t, Fig 6 plots how the average of topic purity δ(w′) evolves for w′ ∈ Aw(t) as well as the average number of occurrence in the training samples. For both models, they initially attend to the words that mostly have low topic purity with a high occurrence rate. As the training proceeds, the average topic purity of the attended words increases while the average occurrence rate drops. At the end of the training, almost all the attended words have a close-to-one topic purity. Fig 7 shows the evolution of the average topic purity and the average occurrence rate of the attended words over the entire training set. While a similar changing pattern can be observed, the average topic purity is lower than the one presented in Fig 6 when the training is completed. We argue that this happens because some sentences do not have any high topic purity words or their high topic purity words have a too low occurrence rate to be sufficiently trained.
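A sketch of how A_w(t) and these averages could be computed at a fixed epoch is given below. All names are illustrative, and ties are broken by the first occurrence here rather than randomly, as a simplification of the procedure described above.

```python
def attended_word_stats(w, sentences, score, purity, occurrence):
    """Average topic purity and occurrence count of the attended (largest-score)
    word over all training sentences that contain the word w."""
    attended = []
    for words in sentences:
        if w in words:
            attended.append(max(words, key=score))  # ties: first occurrence wins
    if not attended:
        return None
    avg_purity = sum(purity[a] for a in attended) / len(attended)
    avg_occurrence = sum(occurrence[a] for a in attended) / len(attended)
    return avg_purity, avg_occurrence

sentences = [["good", "fun", "film"], ["good", "dull"]]
score = {"good": 2.0, "fun": 0.1, "dull": 1.5, "film": 0.0}.get
purity = {"good": 0.8, "fun": 1.0, "dull": 1.0, "film": 0.0}
occurrence = {"good": 2, "fun": 1, "dull": 1, "film": 1}
print(attended_word_stats("good", sentences, score, purity, occurrence))  # (0.8, 2.0)
```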
6 CONCLUSION
This paper investigated the dynamics of training a series of attention-based bag-of-word classifiers on a simple artificial topic classification task. We have shown a persisting closed-form positive SEN relationship for the word to which the model should attend. This result is independent of the configurations of the later layers. Through this result, we have proved that the model must converge to attending to the topic word, with the training loss close to zero, if the output of the attention layer is fed into a fixed linear classifier. A list of experiments has confirmed these results.
The experimental results indicate that the attention block intends to make a more drastic decision if its later layers have a lower capacity. This hints the classifier’s limited capacity may help if “selecting” the right word explains most of the variations in the training samples. An ablation study shows a synergy between the topic word score’s growth and its embedding elongation, leading to a faster training loss drop than the fixed score and fixed embedding variants. Besides, we have shown that this “mutual promotion” effect can also exhibit itself as “mutual suppression” in the initial training stage.
We investigated the competition of two topic word candidates with large differences in the topic purity and the occurrence rate in the training samples. The words of a higher occurrence rate but possibly low topic purity are more likely to be attended to initially. However, as the training proceeds, the attended words are gradually replaced by those of higher topic purity.
1. What is the focus of the paper regarding attention mechanisms in topic classification tasks?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and contributions to the field?
3. What are the weaknesses of the paper, especially regarding notation overloading and writing quality?
4. Do you have any concerns or questions about the paper's assumptions and conclusions?
5. Are there any suggestions for improving the paper's readability and clarity? | Review | Review
(Summary)
The paper investigates the dynamics of the attention mechanism by configuring a controlled experiment on a simple topic classification task and training via gradient descent. Each random sentence in the training data is synthesized to include only one topic word among many. The authors then try to find an intrinsic mechanism that triggers the attention model to discover the topic word and accelerates training via mutual promotion. They further study the evolution of the models during optimization when no clear distinction between topic and non-topic words exists, as in real data.
(Originality and Contribution)
The paper proposes an artificial topic classification task and shows a positive score-and-embedding-norm relationship for the topic words to which the model must attend. The authors also show that the attention mechanism is highly helpful when the classifier has only limited capacity. They also demonstrate a mutual promotion effect that leads to a faster drop of the training loss than with fixed scores and fixed embeddings. This discovery appears to be an original and relevant contribution to the field.
(Strength and Weakness)
- Strength: Design and run novel controlled experiment. Extensive analysis.
- Weakness: Too much notational overloading. Writing quality.
(Concerns, Questions, and Suggestions)
It is unclear why M >> N implies that a topic word appears more frequently in the sentences than a non-topic word. Section 3 describes that each sentence consists of only one topic word, which is then combined with m non-topic words drawn uniformly at random. When the total number of topics N is much smaller than the size of the non-topic word dictionary M, what increases the frequency of a topic word?
Overall, simplifying some notation and avoiding notational overloading would greatly increase the readability of the paper.
To reduce confusion, it would be great to change the iterator of the summation for the partition function Z into ∑_{w′ ∈ S_k} rather than using the same w.
In Lemma 1, assume q ≠ 0 -> q ≠ \vec{0}. |
ICLR | Title
On the Dynamics of Training Attention Models
Abstract
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great success established empirically, the working mechanism of attention has not been investigated at a sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for the discriminative words that the model should attend to, a persisting identity exists relating its embedding and the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed, which validate our theoretical analysis and provide further insights.
1 INTRODUCTION
Attention-based neural networks have been broadly adopted in many natural language models for machine translation (Bahdanau et al., 2014; Luong et al., 2015), sentiment classification (Wang et al., 2016), image caption generation (Xu et al., 2015), and the unsupervised representation learning (Devlin et al., 2019), etc. Particularly in the powerful transformers (Vaswani et al., 2017), attention is its key ingredient.
Despite its great successes established empirically, the working mechanism of attention has not been well understood (see Section 2). This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism. We study the model’s training trajectory to understand why attention can attend to the discriminative words (referred to as the topic words). More specifically, in this task, each sentence is treated as a bag of words, and its class label, or topic, is indicated by a topic word. The model we consider involves a basic attention mechanism, which creates weighting factors to combine the word embedding vectors into a “context vector”; the context vector is then passed to a classifier.
In this setting, we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query, referred to as the “score”, during gradient-descent training. It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration. This relationship suggests the existence of a “synergy” in the amplification of the topic word score and its word embedding; that is, the growths of the two quantities promote each other. This, in turn, allows the topic word embedding to stand out rapidly in the context vector during training. Moreover, when the model takes a fixed linear classifier, this relationship allows rigorous proofs of this “mutual promotion” phenomenon and the convergence of training to the topic words.
Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets. Additional insights are also obtained from these experiments. For example, low-capacity classifiers tend to give stronger training signals to the attention module. The “mutual promotion” effect implied by the discovered relationship can also exhibit itself as “mutual suppression” in the early training phase. Furthermore, in the real-world datasets, where perfect
delimitation of topic and non-topic words does not exist, interesting training dynamics is observed. Due to length constraints, all proofs are presented in Appendix.
2 RELATED WORKS
Since 2019, a series of works have been published to understand the working and behaviour of attention. One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations (Michel et al., 2019; Voita et al., 2019; Jain & Wallace, 2019; Wiegreffe & Pinter, 2019; Serrano & Smith, 2020; Vashishth et al., 2020). Most of these works are empirical in nature, for example, by analyzing the behaviours of a well-trained attention-based model (Clark et al., 2019), or observing the impact of altering the output weights of the attention module or pruning a few heads (Michel et al., 2019; Voita et al., 2019), or a combination of them (Jain & Wallace, 2019; Vashishth et al., 2020). Apart from acquiring insights from experiments, Brunner et al. (2019) and Hahn (2020) show theoretically that the self-attention blocks lacks identifiability, where multiple weight configurations may give equally good end predictions. The non-uniqueness of the attention weights therefore makes the architecture lack interpretability.
As a fully connected neural network with infinite width can be seen as a Gaussian process (Lee et al., 2018), a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers (Yang, 2019; Hron et al., 2020). In this paper, we restrict our study to the more realistic non-asymptotic regime.
3 PROBLEM SETUP
Learning Task To obtain insights into the training dynamics of attention models, we set up a simple topic classification task. Each input sentence contains m non-topic words and one topic word indicating its topic. Note that a topic may have multiple topic words, but a sentence is assumed to include only one of them. Assume that there are J topics that correspond to the mutually exclusive topic word sets T1, T2, · · · , TJ. Let T = ∪_{j=1}^{J} Tj be the set of all topic words. The non-topic words are drawn from a dictionary Θ, which is assumed not to contain any topic word.
The training set Ψ consists of sentence-topic pairs, where each pair (χ, y) is generated by (1) randomly picking a topic y ∈ {1, 2, · · · , J}, and (2) picking a topic word from set Ty and combining it with m words drawn uniformly at random from Θ to generate the sentence (or the bag of words) χ. In this task, one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way.
We will consider the case that |Θ| >> |T|, which implies that a topic word appears much more frequently in the sentences than a non-topic word.
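To make this generation procedure concrete, a minimal Python sketch is given below; its defaults mirror the synthetic setup used later in Section 5.1 (J = 4 topics, two topic words per topic, m = 20, M = 5,000), while drawing the non-topic words with replacement is an assumption since the paper does not specify it.

```python
import random

def make_dataset(num_samples, J=4, words_per_topic=2, M=5000, m=20, seed=0):
    """Generate (sentence, topic) pairs: one topic word from T_y plus m
    non-topic words drawn uniformly from the dictionary Theta."""
    rng = random.Random(seed)
    topic_words = [[f"t{j}_{k}" for k in range(words_per_topic)] for j in range(J)]
    theta = [f"w{i}" for i in range(M)]          # non-topic dictionary
    data = []
    for _ in range(num_samples):
        y = rng.randrange(J)                     # (1) pick a topic at random
        t = rng.choice(topic_words[y])           # (2) pick one of its topic words
        sentence = [t] + [rng.choice(theta) for _ in range(m)]
        data.append((sentence, y))
    return data

train, test = make_dataset(800), make_dataset(200, seed=1)
```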
Attention Model For this task, we consider a simple attention mechanism similar to the one proposed by Wang et al. (2016). Each word w is associated with two parameters: an embedding νw ∈ Rd and a key κw ∈ Rd ′ . Based on a global query q ∈ Rd′ , the context vector of sentence χ is computed by
$$\bar\nu(\chi) = \sum_{w\in\chi} \nu_w\, \frac{\exp(q^{T}\kappa_w)}{Z(\chi)}, \qquad \text{where } Z(\chi) = \sum_{w'\in\chi} \exp(q^{T}\kappa_{w'}).$$
Then ν̄(χ) is fed into a classifier that predicts the sentence’s topic in terms of a distribution over all topics.1
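A minimal PyTorch sketch of this attention block with a linear classifier head follows; the layer sizes and the options for freezing parameters are illustrative choices meant to mirror the models used later (Attn-FC/Attn-TC), not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BagOfWordsAttention(nn.Module):
    """Context vector: attention-weighted average of word embeddings,
    with weights softmax(q^T kappa_w) over the words of the sentence."""
    def __init__(self, vocab_size, d=15, d_key=15, num_topics=4, fix_query=True):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)        # nu_w
        self.key = nn.Embedding(vocab_size, d_key)    # kappa_w
        self.query = nn.Parameter(torch.randn(d_key), requires_grad=not fix_query)
        self.classifier = nn.Linear(d, num_topics, bias=False)  # softmax(U^T v) head

    def forward(self, word_ids):                      # word_ids: (batch, sentence_len)
        scores = self.key(word_ids) @ self.query      # s_w = q^T kappa_w
        weights = torch.softmax(scores, dim=-1)       # exp(s_w) / Z(chi)
        context = (weights.unsqueeze(-1) * self.emb(word_ids)).sum(dim=1)
        return self.classifier(context)               # topic logits
```

For an Attn-FC-style variant, the classifier weight would additionally be frozen; padded positions can be masked by setting their scores to negative infinity before the softmax, as done in Section 5.2.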
Denote the loss function by l(χ, y). Our upcoming analysis implies this attention model, although simple, may capture plenty of insight in understanding the training of more general attention models.
Problem Statement Our objective is to investigate the training dynamics, under gradient descent, of this attention model. In particular, we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training. Moreover, we wish to investigate, beyond this setup, how the model is optimized when there is no clear delimitation between topic and non-topic words, as in real-world data.
1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 but this condition is not required for most results presented in Sections 4 and 5. More discussions are given in Appendix A in this regard.
4 THEORETICAL ANALYSIS
It is common to fix some parameters when we train a model with limited resources. Also, we have the following:
Lemma 1. Assume q ≠ 0 when initialized. Fixing it does not affect the attention block’s capacity.
Thus, our upcoming discussion focuses on the case in which the query is fixed. Doing so also allows us to establish a closed-form expression connecting the word’s embedding and the inner product of its key and the query. In Appendix B, extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present.
For a topic word t, let Ψt denote the training samples involving it. Then, by gradient descent,
$$\Delta \nu_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} \nabla_{\bar\nu(\chi)} l(\chi, y)\, \frac{\exp(q^{T}\kappa_t)}{Z(\chi)} \qquad (1)$$
$$\Delta \kappa_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} q\, (\nu_t - \bar\nu(\chi))^{T}\, \nabla_{\bar\nu(\chi)} l(\chi, y)\, \frac{\exp(q^{T}\kappa_t)}{Z(\chi)}, \qquad (2)$$
where τ denotes the learning rate. As it will turn out, an important quantity in this setting is the inner product qTκw of the query q and the key κw, which we denote by sw and refer to as the score of the word w.
Denoting vw = ||q||2 νw, η = τ ||q||2, v̄(χ) = ∑_{w∈χ} (exp(sw)/Z(χ)) vw, and h(v̄(χ); y) = ∇ν̄(χ)l(χ, y), for a topic word t, the dynamics simplifies to
$$\Delta v_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (3)$$
$$\Delta s_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (4)$$
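For concreteness, when the classifier is linear with weight U and the loss is cross-entropy, the gradient h(v̄(χ); y) has the familiar closed form U(softmax(UT v̄(χ)) − e_y); a small NumPy sketch of this particular h is shown below. This is just one possible instantiation, since the analysis keeps h generic.

```python
import numpy as np

def h_cross_entropy(v_bar, y, U):
    """Gradient of -log softmax_y(U^T v_bar) with respect to the context vector."""
    logits = U.T @ v_bar
    p = np.exp(logits - logits.max())
    p /= p.sum()
    e_y = np.zeros_like(p)
    e_y[y] = 1.0
    return U @ (p - e_y)
```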
In the rest of the paper, whenever we refer to the embedding of word t, we actually mean vt not νt.
Our analysis assumes the word embeddings are sampled i.i.d. from a distribution with mean zero and variance σ²/d, where σ² is assumed close to zero. The word keys and the query are also sampled from zero-mean distributions with a possibly different variance. We assume that this variance is so small that the initial word scores are approximately zero. This assumption of the initial configurations corresponds to the attention model starting as a word-averaging model, and allows us to investigate how the model deviates from this initial setting with training. We also assume the derivative h(v̄(χ); y) of the loss l is Lipschitz continuous in v̄(χ) throughout training. Further, the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that with a sufficient number of training samples, the occurrence rate of a topic word is significantly higher than the non-topic ones. This then justifies the following assumption we will use throughout our analysis.
Assumption 1. The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words.
Hence, our upcoming analysis will treat the scores and embeddings of the non-topic words as constants. Assumption 1 will be validated by experimental results presented in Section 5.
By selecting a sufficiently small η, we can take the gradient-descent updates in Eq (3) and Eq (4) to their continuous-time limit and get2
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (5)$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (6)$$
2 Conversely, Eq (3) is a discretized approximation of Eq (5): $v_t(t+1) - v_t(t) = \int_{t}^{t+1} \frac{dv_t(t')}{dt'}\, dt' \approx 1 \cdot \frac{dv_t(t)}{dt} = \Delta v_t(t)$. The approximation becomes accurate if $v_t(t+1)$ is close to $v_t(t)$, which can be achieved by choosing a sufficiently small η. Likewise, Eq (4) is a discretized approximation of Eq (6).
We can then characterize the update of the score and the embedding of a topic word as a continuoustime dynamical system stated in Lemma 2. The same technique has been used to analyze the training of neural networks in other contexts (Saxe et al., 2014; Greydanus et al., 2019).
Lemma 2. For sufficiently small η and σ², the score st and embedding vt of topic word t satisfy
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \qquad (7)$$
$$\frac{ds_t}{dt} = \left[ \big( v_t - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big)^{T} \frac{dv_t}{dt} \right] \left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t}^{-1}, \qquad (8)$$
where $Z(\chi \setminus t) = \sum_{w\in\chi\setminus\{t\}} \exp(s_w)$, $\bar v(\chi \setminus t) = \sum_{w\in\chi\setminus\{t\}} v_w\, \frac{\exp(s_w)}{Z(\chi \setminus t)}$, and 〈 · 〉Ψt denotes taking the sample mean over the set Ψt.
Eq (7) implies the speed of moving vt along the direction of 〈h(v̄(χ); y)〉Ψt is controlled by the attention weight exp(st)/Z(χ). Eq (8) shows that st increases if and only if vt has a greater projection on 〈h(v̄(χ); y)〉Ψt than the weighted average of the non-topic word counterparts. Consider a simplified case where 〈h(v̄(χ); y)〉Ψt is fixed. Since the change of vt is much faster than the non-topic word counterparts, vt will have a larger projection on 〈h(v̄(χ); y)〉Ψt after a few epochs of training. Then st increases as well as its attention weight, which in turn speeds up the extension of the embedding vt. This observation reveals a mutual enhancement effect between the score increment and the embedding elongation. In fact such an effect exists in general, as stated in the theorem below, irrespective of whether 〈h(v̄(χ); y)〉Ψt is fixed.
Theorem 1. In the setting of Lemma 2, from epoch t0 to t1, the topic word score st and its embedding vt satisfy
$$\left[ s_t(t) + \exp(s_t(t)) \left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \right]_{t_0}^{t_1} = \left[ \frac{1}{2}\, \big\| v_t(t) - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big\|_2^2 \right]_{t_0}^{t_1}. \qquad (9)$$
Following from Lemma 2, this theorem implies a positive relationship between the topic word score and the distance between vt and the non-topic word embedding average 〈v̄(χ \ t)〉Ψt . Remarkably this result makes no reference to 〈h(v̄(χ); y)〉Ψt , hence independent of it. This implies the identity in Eq (9) holds irrespective of the choice and setting of the classifier. Theorem 1 further implies a score and embedding norm (“SEN” in short) relationship for the topic words:
Corollary 1. In the context of Theorem 1, by setting t0 = 0 and t1 = t, Eq (9) is reduced to
$$\|v_t(t)\|_2 = \sqrt{2\left( s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m} \right)}. \qquad (10)$$
The corollary indicates that ||vt(t)||2 is monotonically increasing with st(t). So, st increases if and only if the point vt departs from its initial location. That is, if the norm of the topic word embedding increases, it will be attended to. This result is independent of the configuration of all other network layers. Thus, if 〈h(v̄(χ); y)〉Ψt has a gradient field that pushes vt away from its original location, the topic word is expected to be attended to. This statement can be made precise, as in Theorem 2, when the model uses a linear classifier.
Theorem 2. Assume the model has a fixed classifier in the form c(v̄(χ)) = softmax(UT v̄(χ)), where the columns of U are linearly independent, and the model is trained using gradient descent with the cross-entropy loss. As training proceeds, the model will attend to the topic word in every input sentence and have its training loss approach zero.
It is notable that the theorem holds broadly for any arbitrary fixed linear classifier (subject to the mild linear independence constraint on its parameter U). Additionally, we anticipate that this result holds for a much wider family of classifiers including trainable and even nonlinear ones. But rigorous proof appears difficult to obtain in such settings, and we will corroborate this claim in an experimental study in Section 5.
To sum up, in this section, we have shown two main results: (a) there is a closed-form positive relationship, the SEN relationship, between the topic word score and its embedding norm, which is independent of the configuration of the classifier. (b) the model, equipped with a fixed linear classifier stated in Theorem 2, can be trained to have all topic words attended to.
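As a quick sanity check of result (a), the simplified dynamics in Eqs (3)-(4) can be integrated numerically for a single topic word under Assumption 1 and a fixed averaged gradient direction; the sketch below (with illustrative values of d, m, η and a hypothetical unit direction h) compares the resulting embedding norm against the prediction of Eq (10).

```python
import numpy as np

d, m, eta, epochs = 15, 20, 1e-3, 20000
h = np.ones(d) / np.sqrt(d)           # assumed fixed averaged gradient direction
v_t, s_t = np.zeros(d), 0.0           # topic word embedding and score at initialization

for _ in range(epochs):
    Z = np.exp(s_t) + m               # non-topic scores stay at zero, exp(0) = 1
    v_bar = np.exp(s_t) / Z * v_t     # non-topic embeddings treated as (near) zero
    dv = eta * h * np.exp(s_t) / Z                      # Eq (3)
    ds = eta * (v_t - v_bar) @ h * np.exp(s_t) / Z      # Eq (4)
    v_t, s_t = v_t + dv, s_t + ds

empirical = np.linalg.norm(v_t)
predicted = np.sqrt(2 * (s_t + np.exp(s_t) / m - 1 / m))   # Eq (10)
print(empirical, predicted)           # the two values should nearly coincide
```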
5 EXPERIMENTAL STUDY
In this section, we first test our model on an artificial dataset generated through the procedure introduced in Section 3. The test corroborates our theoretical results and validates their assumptions. Our test results suggest that the attention mechanism introduces a synergy between the embedding and the score of topic words.
Another experiment is performed on the real datasets SST2 and SST5 (Socher et al., 2013). The experiment results suggest that the SEN relationship of topic words holds at least in initial training stages. As training proceeds, some words appear to deviate from the theoretical trajectories. Further analysis of this behaviour provides additional insights into the attention model’s training dynamics on real-world datasets, often possessing a much more complex structure as well as rich noise. We performed all experiments using PyTorch (Paszke et al., 2017).
We performed our experiments on three models, Attn-FC, Attn-TC and Attn-TL, having the same attention block but different classifiers. The first two have the classifier in the form c(v̄(χ)) = softmax(UT v̄(χ)) and the last in the form $c(\bar v(\chi)) = \mathrm{softmax}(U_2^T\, \mathrm{ReLU}(U_1^T \bar v(\chi) + b_1) + b_2)$. Except that the U in Attn-FC is fixed, other parameters of the three models are trainable and optimized using the cross-entropy loss.
Since a real-world dataset does not have a topic word as the sentence topic indicator, we introduce a word “topic purity” measurement to facilitate our discussion on the experiments performed on SST2. Let δ+(w) and δ−(w) respectively denote the portions of the positive and negative sentences among all training samples containing word w. Then the topic purity of w is δ(w) = |δ+(w) − δ−(w)|. If δ(w) = 1, w is either a pure positive or negative topic word. If δ(w) = 0, δ+(w) = δ−(w) = 0.5, which implies w has a completely random topic correspondence.
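A small helper that computes δ(w) from labelled sentences might look as follows; treating multiple occurrences of a word in one sentence as a single containment event is an assumption, since the paper does not spell this detail out.

```python
from collections import defaultdict

def topic_purity(samples):
    """samples: iterable of (words, label) pairs, label in {0, 1} (negative/positive).
    Returns delta(w) = |delta_plus(w) - delta_minus(w)| for every word w."""
    pos, total = defaultdict(int), defaultdict(int)
    for words, label in samples:
        for w in set(words):              # count each containing sentence once
            total[w] += 1
            pos[w] += int(label == 1)
    return {w: abs(2 * pos[w] / total[w] - 1) for w in total}
```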
5.1 EXPERIMENTS ON SYNTHETIC DATASETS
The artificial dataset, consisting of 800 training and 200 test samples, is generated through the procedure introduced in Section 3. The dataset has four topics, and each contains two topic words. There are 20 non-topic words per sentence and the non-topic word dictionary size M = 5, 000.
Our experiments use the same embedding dimension 15 for all three models. Regarding the classifiers, Attn-FC and Attn-TC adopt U ∈ R^{15×4}, while Attn-TL takes U1 ∈ R^{15×10}, b1 ∈ R^{10}, U2 ∈ R^{10×4} and b2 ∈ R^{4}. For the validation of Theorem 2, the columns of U in Attn-FC are set to be orthonormal and thus linearly independent. Unless otherwise stated, the scores are set to zero and the embeddings are initialized by a normal distribution with mean zero and variance σ²/d = 10⁻⁶. We trained the models using gradient descent with learning rate η = 0.1 for 5K epochs before measuring their prediction accuracy on the test samples. When training is completed, all three models achieve a training loss close to zero and 100.0% test accuracy, which implies the trained models perfectly explain the training set’s variations and generalize well to the test set.
Verification of Assumption 1 and validation of Corollary 1 and Theorem 2. We repeated the experiments for five runs and plotted the empirical score distributions of the non-topic and topic words of the three well-trained models with their 95% confidence intervals in the first two graphs of Fig 1. Compared to the topic words, the scores of the non-topic words are nearly unchanged throughout the entire training process. Likewise, the next two plots show the embedding norms of the non-topic words are nearly constant, too. This implies Assumption 1 indeed holds. Fig 2 plots the empirical and the theoretical SEN curves of a randomly picked topic word for the three models, where the theoretical curve has the expression stated in Eq (10). The coincidence of the empirical and the theoretical curves in all three models validates the SEN relationship stated in Eq (10) and
its independence of the later layers.3 Moreover, Fig 1 (left two) shows the scores of the topic words exceed the non-topic word counterparts by two orders of magnitude, which implies the topic words are attended to in a well-trained model. As we have reported earlier, Attn-FC has the training loss roughly zero when the training is completed, Theorem 2 is confirmed.
Lower-capacity classifiers result in stronger attention effects. The comparison, among the SEN distributions of the topic words in the three models, implies that a lower-capacity classifier leads to greater topic word SEN, which means a more drastic attention decision. This happens because the classifier of a larger capacity can explain more variations of the sample distribution and has more freedom to accommodate and absorb the correcting gradient signals. As a result, the attention layer receives a weaker gradient on average, which makes the embeddings of the topic word extend less from the original point. Thus, as implied by Eq (10), the magnitudes of the scores of the topic words are dampened, and therefore a weaker attention effect will be expected. This observation hints that if we know attending to the right words can explain most of the variations in sample distributions, we should consider a low capacity classifier (or later layers in general). Alternatively, we may also freeze the classifier in the initial stage of a training process, forcing the attention layer to explain more variations. Remarkably, all these modifications do not affect the relationship stated in Eq (10).
Synergy between score growth and embedding elongation in topic words. Eq (10) implies a positive SEN relationship of the topic word. That is, a larger topic word embedding norm results in a larger score (and thus a larger attention weight), which in turn makes the embedding extend faster. To corroborate this claim, we performed an ablation study by considering two variants of Attn-TC. The first has the scores frozen (referred to as Attn-TC-KF) and the second has the embeddings fixed (referred to as Attn-TC-EF). In this experiment, the embeddings of the models are initialized by a normal distribution of mean zero and variance σ²/d = 0.1. We trained all three models by gradient descent for 60K epochs with learning rate η = 0.1, which is sufficient for all three models to fully converge. All three trained models reached 100% test accuracy.
3In Appendix B, the experiments with trainable queries are also implemented. The results indicate that the trainability of queries do not affect the positive SEN relationship. Besides, the query fixed model has very similar training dynamics to the one with a trainable query and a large initial norm.
The first three graphs of Fig 3 describe the evolution of the three models in the first 3K training epochs. For a randomly picked topic word, the first plot shows its score in Attn-TC grows faster than the one in Attn-TC-EF. Note that the score in Attn-TC-EF finally surpasses the one in Attn-TC because Attn-TC has converged at around 1K epochs. Likewise, the word embedding norm in Attn-TC increases more rapidly than the one in Attn-TC-KF before Attn-TC converges. The observations imply the attention introduces a mutual enhancement effect on training the topic word’s score and its embedding, which makes Attn-TC enjoy the fastest training loss drop as shown in the third plot.
The mutual enhancement could become mutual diminution in the early training stage if the initial embedding of the topic word has a negative projection on the direction that it will move along. This effect can be precisely characterized by Eq (8). Assume the embedding of a topic word is initialized to have a smaller projection on the gradient, passed from the classifier, than the average of the non-topic words. The reversed order of the projections makes the score of the topic word decrease as its embedding has a “negative effect” on the training loss compared to the average of the non-topic word embeddings. This will, in turn, impede the elongation of the embedding vector or even make it shrink (see the last two plots of Fig 3). The “negative effect” cannot last long because the topic word embedding moves along the gradient much faster than the non-topic words due to its high occurrence rate in the training samples. By Eq (8) again, ds_t/dt will finally become positive. That is, the score of the topic word starts to increase and its attention weight will surpass the one of the word-averaging model (see the second last plot of Fig 3). Then, we start to observe the mutual enhancement effect, which is indicated in the last plot: the increase speed of the Attn-TC’s embedding norm exceeds the Attn-TC-KF’s from around the 370-th epoch.
5.2 EXPERIMENTS ON SST2 AND SST5
The second part of the experiment is performed on datasets SST2 and SST5, which contain movie comments and ratings (positive or negative in SST2 and one to five stars in SST5). For simplicity, we limit our discussion on Attn-FC and Attn-TC using the same configurations of our previous experiments except that the embedding dimension is set to 200. Remark that our goal is not to find a state-of-the-art algorithm but to verify our theoretical results and further investigate how an attention-based network works.
For both SST2 and SST5, we trained the two models by gradient descent with learning rate η = 0.1 combined with the early stopping technique (Prechelt, 2012) of patience 100. As PyTorch requires equal length sentences in a batch, we pad all the sentences to the same length and set the score of the padding symbol to the negative infinity. Under this configuration, the trained Attn-FC and Attn-TC reached 76.68% and 79.59% test accuracy on SST2 and 38.49% and 40.53% on SST5.
Validation of Corollary 1. As the true topic words are unknown in a real dataset, we checked the words of the largest fifty scores after the training is completed. We observed that most of the words have their SEN curves close to our theoretical prediction. We picked two words for each model-dataset combination and plotted the curves with their theoretical counterparts in Fig 4.
The competition of two topic word candidates of various occurrence rates and topic purity. To better understand how the attention block works, we investigated the case when the empirical and the theoretical SEN curves disagree. We limit our discussion on the model trained on SST2.
For both Attn-FC and Attn-TC, we noticed that there are mainly two types of deviations, as shown in the first two columns of Fig 5. In the first one, the theoretical curves overestimate the score growth of powerful in terms of the embedding norm, while best shown in the second column experiences a score drop combined with a failed theoretical prediction in the final training stage. These two types of the disagreement in fact occur in pairs, which is caused by a pair of words that one of them frequently appears in the training samples but has a low topic purity, while the other has the opposite.
Regarding the (powerful, best) pair, best appears in 128 training samples while 103 of them are positive. In comparison, powerful appears in the 36 training samples, and all of them are positive. The large difference in the number of samples makes the embedding norm of best extend much faster than powerful in the initial training process (Fig 5, third column). As the positive SEN relationship implies, a quicker embedding norm increase of best results in its faster score growth. Therefore, best will be more attended to ( Fig 5, last column), which thus further accelerates its SEN increase (this is the synergy relationship between the embedding and score that has been demonstrated in the ‘ablation’ study). This process does not stop until the gradient from the classifier diminishes because of its low topic purity (which is similar to the case when the label smoothing (Szegedy et al., 2016) is applied for alleviating the overfitting problem). In contrast, powerful is less trained initially due to its low occurrence rate. But its high topic purity makes the direction of the gradient stable, in which its embedding will steadily elongate. The elongation will, at last, let the embedding have a greater projection on the gradient vector than the average of the other words appearing in the same sentence. Thus, the score of powerful starts to increase as shown by Eq (8) and plotted in the second last column of Fig 5. In contrast, as the gradient magnitude drops, the embedding of best will extend in a decreasing speed; and its projection on the gradient, passed from the classifier, will finally be surpassed by the words co-occurring with it but having a higher topic purity (like powerful). Thus, its score starts to drop eventually.
The dynamics of topic purity of attended words The analysis of the inconsistency between the empirical and theoretical SEN curves hints that the disagreement are strongly related to the word’s topic purity and its occurrence rate. To better characterize their dynamics, for a word w, let Aw(t)
be the list, at epoch t, that records the attended word (or of the largest score) in the sentences containing w. If multiple words in a sentence have the same score, randomly pick one of them. Note that Aw(t) may contain repeating words as a word could be attended in multiple sentences. We selected w to be the words having the largest five scores in the well-trained Attn-FC and Attn-TC, respectively. At various epoch t, Fig 6 plots how the average of topic purity δ(w′) evolves for w′ ∈ Aw(t) as well as the average number of occurrence in the training samples. For both models, they initially attend to the words that mostly have low topic purity with a high occurrence rate. As the training proceeds, the average topic purity of the attended words increases while the average occurrence rate drops. At the end of the training, almost all the attended words have a close-to-one topic purity. Fig 7 shows the evolution of the average topic purity and the average occurrence rate of the attended words over the entire training set. While a similar changing pattern can be observed, the average topic purity is lower than the one presented in Fig 6 when the training is completed. We argue that this happens because some sentences do not have any high topic purity words or their high topic purity words have a too low occurrence rate to be sufficiently trained.
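One way to compute A_w(t) and the reported averages, given access to the per-word scores of a model snapshot at epoch t, is sketched below; the `score_fn` interface and the purity/occurrence dictionaries are illustrative assumptions about how the bookkeeping is organized, not part of the paper.

```python
import random

def attended_word_stats(sentences, score_fn, target_word, purity, counts):
    """A_w(t): for each training sentence containing target_word, record the word
    with the largest score (ties broken at random), then average the topic purity
    and occurrence count of the recorded words."""
    attended = []
    for words in sentences:
        if target_word not in words:
            continue
        attended.append(max(words, key=lambda w: (score_fn(w), random.random())))
    avg_purity = sum(purity[w] for w in attended) / len(attended)
    avg_count = sum(counts[w] for w in attended) / len(attended)
    return attended, avg_purity, avg_count
```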
6 CONCLUSION
This paper investigated the dynamic of training a series of attention-based bag-of-word classifiers on a simple artificial topic classification task. We have shown a persisting closed-form positive SEN relationship for the word to which the model should attend to. This result is independent of the configurations of the later layers. Through the result, we have proved that the model must converge in attending to the topic word with the training loss close to zero if the output of the attention layer is fed into a fixed linear classifier. A list of experiments has confirmed these results.
The experimental results indicate that the attention block intends to make a more drastic decision if its later layers have a lower capacity. This hints the classifier’s limited capacity may help if “selecting” the right word explains most of the variations in the training samples. An ablation study shows a synergy between the topic word score’s growth and its embedding elongation, leading to a faster training loss drop than the fixed score and fixed embedding variants. Besides, we have shown that this “mutual promotion” effect can also exhibit itself as “mutual suppression” in the initial training stage.
We investigated the competition of two topic word candidates with large differences in the topic purity and the occurrence rate in the training samples. The words of a higher occurrence rate but possibly low topic purity are more likely to be attended to initially. However, as the training proceeds, the attended words are gradually replaced by those of higher topic purity.
A THE RESULTS OF THE MODEL THAT THE ATTENTION BLOCK ATTENDS TO A CNN LAYER
The main text focuses on a model that has the attention block directly attends to word embeddings. Such design simplifies our analysis but should not be considered as a pre-condition to keep our results valid. In particular, the positive SEN relationship generally holds regardless of other components of the network, which can be justified as follows. There are two correlated sources of gradient signals, one back-propagates from the score to update key and query, the other back-propagates from the classifier loss to update word embedding. The correlation governs the positive relationship between score and embedding norm. Although the integration of other modules makes the analysis harder, the relationship should persist. Hence, all the results depending on this relationship keep valid.
We empirically verify our claims by implementing an experiment similar to the one discussed in the main text. We modified Attn-TC (named Attn-TC-CNN) by adding two parallel CNN layers to respectively process the word embeddings and the keys before feeding them into the attention block. Then, we construct an analogous data generation process introduced in Section 3. Finally, we empirically show that Assumption 1 and the positive SEN relationship still hold.
The Attn-TC-CNN has the same configurations as the Attn-TC (introduced in Section 5.1) except that we added two parallel CNN layers to preprocess the word embeddings and the keys. Regarding the CNN layer processing the word embeddings, it has the kernel of size d × 2 and stride 1. We used d kernels to keep the word embedding dimension unchanged. So, for two consecutive words in a sentence, the CNN mixes their embeddings and produce a new one. Given that the sentence has m+ 1 words, the input embedding matrix has shape d× (m+ 1) and the output has shape d×m. Likewise, regarding the keys, the CNN layer processing them has the kernel size d′ × 2 and stride 1. And there are d′ kernels in total.
Consider a classification problem containing two topics A and B. The sentences of the two topics are generated by two Markov chains, respectively, which are constructed as follows:
1. Let Li (i = A,B) be two mutually exclusive sets of ordered word pairs. The word pairs do not contain repeating words.
2. For i = A,B:
(a) Initialize Markov Chain (MCi) of words in the dictionary such that from a word, there is a uniform probability of moving to any words (including itself) in one step.
(b) Group the word pairs in Li according to the leading words. For each group, let s denote the shared leading word and ei (i = 1, 2, · · · , n) the second. Set the probability of moving from s to ei in one step to be 1/n and those to the rest zero.
We call the word pairs in Li (i = A,B) the topic word pairs. For each pair of words, a new embedding and a new key are generated by feeding their original embeddings and keys into the CNNs. We refer to the new embedding as the topic word-pair embedding and the new key as the topic word-pair keys.
Likewise, for any other word pairs, the new generated embeddings and keys are referred to as the non-topic word-pair embeddings and keys.
Assume the training dataset contains sentences of length m + 1. We generate it by repeating the following procedure:
1. uniformly sample a topic i ∈ {A,B}. 2. uniformly sample a word pair from Li and let the first word be the starting word.
3. sample another m words by running the Markov process for m times.
In our experiments, the word dictionary has 200 words, and so there are 40, 000 word pairs. The sentence length is set to 10. For i = A,B, each Li contains two randomly picked word pairs as the topic word pairs. Note that we have ensured that LA and LB are mutually exclusive. We used 4, 000 samples for training and 1, 000 for testing. The model was trained for 3, 000 epochs and achieved 99.89% test accuracy.
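A possible Python sketch of this two-topic Markov-chain generator is given below; the concrete word naming and the example topic pairs are placeholders, while the vocabulary size and sentence length follow the numbers above.

```python
import random

def build_chain(vocab, topic_pairs):
    """Uniform one-step transitions everywhere, except that the leading word of
    each topic pair may only move to the corresponding second word(s)."""
    next_words = {w: list(vocab) for w in vocab}
    restricted = {}
    for lead, second in topic_pairs:
        restricted.setdefault(lead, []).append(second)
    next_words.update(restricted)
    return next_words

def sample_sentence(chain, topic_pairs, length=10, rng=random):
    lead, _ = rng.choice(topic_pairs)     # pick a topic word pair, start from its lead
    sentence = [lead]
    for _ in range(length - 1):           # run the Markov process for the rest
        sentence.append(rng.choice(chain[sentence[-1]]))
    return sentence

vocab = [f"w{i}" for i in range(200)]
pairs_A = [("w0", "w1"), ("w2", "w3")]    # hypothetical topic word pairs for topic A
print(sample_sentence(build_chain(vocab, pairs_A), pairs_A))
```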
We repeated the experiments for ten runs and plotted the distributions along with the 95% confidence interval of the scores and the embedding norms of the topic and the non-topic word pairs in Fig 8. Note that, the word pair embeddings and the keys are generated from the CNN layers. So the initial embeddings and the scores are not very closed to the origin. To facilitate the verification of Assumption 1, we centered them by subtracting their initial values before we plot their distributions. From Fig 8, we observe that the non-topic word pair scores and the embeddings are nearly unchanged in comparison to their counterparts of the topic ones. Therefore, we have shown that Assumption 1 is largely held even we have added CNN layers to process the word embeddings and the keys before feeding them into the attention block.
Randomly picking a topic word pair, we plotted its empirical and theoretical SEN curves in ten runs in Fig 9. The figure shows that the positive SEN relationship holds even if the attention layer attends to other layers instead of the word embeddings directly. Therefore, all the results due to the relationship keep valid.
B DISCUSSION AND EXPERIMENTAL RESULTS OF THE MODELS WITH A TRAINABLE QUERY
In Section 4, we assumed a fixed and non-trainable query which allows the derivation of a clean closed-form "SEN" relation. But it is worth emphasizing that the positive relationship between the score and the embedding norm in fact exists regardless of the trainability of the query. As we have mentioned in Appendix A, there are two correlated sources of gradient signals, one back-propagates from the score to update key and query, the other back-propagates from the classifier loss to update word embedding. This correlation governs the positive relationship between score and embedding norm. Whether the query is trainable does not alter the existence of this correlation, although a trainable query makes the analysis more difficult. In particular, when the query norm is large, the update of query is relatively negligible; thence, the training behaviour is similar to having a fixed query.
To verify our claims, we reimplemented the experiments introduced in Section 5.1 with the same configurations except that the query is trainable. In Fig 10, we plot the empirical and the theoretical SEN curves of a randomly picked topic word by training Attn-FC, Attn-TC and Attn-FC with a trainable query. We initialize the entries of the query vector by a normal distribution N(0, σ2). From left to right, σ2 increases as well as the initial query norm. We observe that regardless of how the query is initialized, the positive SEN relationship always preserves. Moreover, as the initial norm increases, the empirical curve approaches the theoretical curve having the expression in Eq (10). As we have discussed, this asymptotic approach happens since an increasing initial norm of the query makes its change negligible during the training process compared to its already big enough norm.
C PROOFS OF THE RESULTS IN SECTION 4
Proof of Lemma 1. Assume there are in total N words in the dictionary (including both the topic and the non-topic words). Sample the keys of the N words arbitrarily to generate K ∈ Rd′×N with the key vectors as its columns. Randomly pick q ∈ Rd′ as a query. To prove the lemma, it is sufficient to show that for any non-zero q̃ ∈ Rd′, there exists K̃ ∈ Rd′×N such that qTK = q̃T K̃. Since q̃ ≠ 0, without loss of generality, assume its first entry q̃1 is non-zero. Let S = [s1, s2, · · · , sN ] = qTK. For i = 1, 2, · · · , N, let the i-th column of K̃ be [si/q̃1, 0, · · · , 0]T. Then we can easily check that qTK = q̃T K̃.
Proof of Lemma 2. Picking a sufficiently small η, a continuous time limit can be taken to obtain the dynamics,
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}, \qquad (11)$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}, \qquad (12)$$
which are equivalent to
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \qquad (13)$$
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}. \qquad (14)$$
As h(v̄(χ); y) is assumed Lipschitz continuous, we have4
$$\left| \langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} - \left\langle (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} \right| < \frac{L\sqrt{d}\,\sigma^2}{4}, \qquad (15)$$
where L is the Lipschitz constant. Choosing a small enough σ² so that L√d σ² is close to zero, we have
$$\langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} \approx \left\langle (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t}.$$
Then, combining it with Eq (14) yields
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \approx \langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T}\, \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} = \langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}. \qquad (16)$$
4The derivation is given in Appendix D
Remarkably, the relation stated in Eq (16) is independent of h(v̄(χ); y). So the relationship does not depend on the architecture of the classifier (or later layers in general). Expand Eq (16),
$$\frac{ds_t}{dt} = \langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}
= \left\langle v_t - \frac{\exp(s_t)}{Z(\chi)} v_t - \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)} v_w \right\rangle_{\Psi_t}^{T} \frac{dv_t}{dt}$$
$$= \left\langle \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)\,(v_t - v_w)}{Z(\chi)} \right\rangle_{\Psi_t}^{T} \frac{dv_t}{dt}
= \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi)} \right\rangle_{\Psi_t} \langle v_t - v_w\rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}$$
$$\approx \sum_{w\in\chi\setminus\{t\}} \frac{\langle \exp(s_w)/Z(\chi \setminus t)\rangle_{\Psi_t}}{\langle Z(\chi)/Z(\chi \setminus t)\rangle_{\Psi_t}}\, \langle v_t - v_w\rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}.$$
The second last step is due to the independence between the score and embedding initialization, while the approximation in the last step is made as we assume all the scores of the non-topic words remain the same during the entire training process. Rearranging the equation yields
$$\frac{ds_t}{dt} = \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \langle v_t - v_w\rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}\, \left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t}^{-1}
= \big( v_t - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt}\, \left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t}^{-1},$$
where $\bar v(\chi \setminus t) = \sum_{w\in\chi\setminus\{t\}} v_w\, \frac{\exp(s_w)}{Z(\chi \setminus t)}$.
Proof of Theorem 1. By Lemma 2, we have
$$\left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \frac{ds_t}{dt} = \big( v_t - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt},$$
which is
$$\left( 1 + \exp(s_t) \left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \right) \frac{ds_t}{dt} = \big( v_t - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt}. \qquad (17)$$
By the fundamental theorem of calculus, integrating both sides from t = t0 to t1 yields
$$\left[ s_t + \exp(s_t) \left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \right]_{t_0}^{t_1} = \left[ \frac{1}{2}\, \big\| v_t - \langle \bar v(\chi \setminus t)\rangle_{\Psi_t} \big\|_2^2 \right]_{t_0}^{t_1}. \qquad (18)$$
Proof of Corollary 1. Since the scores and the embeddings of the non-topic words are considered constant, we have $\left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} = \frac{1}{m}$ and $\langle \bar v(\chi \setminus t)\rangle_{\Psi_t} = 0$. As $v_t$ is initialized with mean zero and a very small variance, $\|v_t(0)\|_2^2 \approx 0$. Then, Eq (9) can be written as
$$\|v_t(t)\|_2 = \sqrt{2\left( s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m} \right)}.$$
Proof of Theorem 2 (sketch). Without loss of generality, pick a topic word t and assume it corresponds to the ϕ-th topic. We prove the theorem by showing that, as the number of epochs increases, for any sentence χ in Ψt, st → ∞ and softmax(UT v̄(χ)) → eϕ, where eϕ is the one-hot vector whose ϕ-th entry equals one and whose remaining entries are zero. Let x = UT v̄(χ). Notice that the loss function 〈− log(softmaxϕ(x))〉Ψt is convex in x. As U is fixed, the loss function is also convex in v̄(χ). This implies that, if the model is optimized by gradient descent, the gradient will lead v̄(χ) to its optimal solution v̄∗(χ). In our case, as the columns of U are linearly independent, there exists a vector n that is orthogonal to all the columns of U except Uϕ. Without loss of generality, assume Uϕ · n > 0 (otherwise, choose its inverse). Then a potential optimal solution is v̄∗(χ) = λn with λ going to infinity, since v̄∗(χ) · Ui = 0 for i ≠ ϕ and v̄∗(χ) · Uϕ → ∞, which implies softmax(UT v̄∗(χ)) = eϕ. Combined with the fact that there cannot be an optimal solution v̄∗∗(χ) of finite norm such that softmax(UT v̄∗∗(χ)) = eϕ, the gradient must lead v̄(χ) to an optimal solution that is arbitrarily far from the origin; the same applies to vt, as it receives the same gradient up to a multiple according to Eq (7). As the norm ||vt||2 increases unboundedly, by Theorem 1 the score st → ∞ as well. So we have softmax(UT v̄(χ)) → softmax(UT vt) → eϕ, and thus the cross-entropy loss drops to zero.
D THE DERIVATION OF EQ (15)
For a vector u, let u(ϕ) denote its ϕ-th entry. As we ignore the changes of the scores and the embeddings of non-topic words, their distributions remain the same as the initial ones. In particular, the scores of the non-topic words are always zero. So, for ϕ = 1, 2, · · · , d,
$$\mathrm{var}\big(\bar v^{(\varphi)}(\chi)\big) = \mathrm{var}\left(\frac{\exp(s_t)}{Z(\chi)}\, v_t^{(\varphi)} + \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)}\, v_w^{(\varphi)}\right) = \sum_{w\in\chi\setminus\{t\}} \left(\frac{\exp(s_w)}{Z(\chi)}\right)^{2} \frac{\sigma^2}{d} = \frac{m\sigma^2}{(Z(\chi))^2 d}.$$
Since h(v̄(χ); y) is assumed Lipschitz continuous in v̄(χ), there exists L ∈ R such that for ϕ = 1, 2, · · · d,
|h(v̄(χ1); y)(ϕ) − h(v̄(χ2); y)(ϕ)| ≤ L||v̄(χ1)− v̄(χ2)||1,
where v̄(χ1), v̄(χ2) ∈ Rd and || · ||1 denotes the l1-distance, i.e., the sum of the absolute values of the entry differences on each dimension. So we also have
$$\left| \frac{1}{L}\, h(\bar v(\chi_1); y)^{(\varphi)} - \frac{1}{L}\, h(\bar v(\chi_2); y)^{(\varphi)} \right| \le \|\bar v(\chi_1) - \bar v(\chi_2)\|_1.$$
According to the work of Bobkov & Houdré (1996), for ϕ = 1, 2, · · · , d,
$$\mathrm{var}\big(L^{-1} h(\bar v(\chi); y)^{(\varphi)}\big) \le \sum_{\varphi=1}^{d} \mathrm{var}\big(\bar v^{(\varphi)}(\chi)\big) = \frac{m\sigma^2}{(Z(\chi))^2},$$
which is
$$\mathrm{var}\big(h(\bar v(\chi); y)^{(\varphi)}\big) \le \frac{m\sigma^2 L^2}{(Z(\chi))^2}.$$
Then, the Cauchy-Schwarz inequality implies, for ϕ = 1, 2, · · · , d,
$$\left|\mathrm{cov}\!\left(h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)},\; v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi)\right)\right| = \left|\mathrm{cov}\!\left(h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)},\; \bar v^{(\varphi)}(\chi)\right)\right|$$
$$\le \left[\mathrm{var}\!\left(h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)}\right)\right]^{1/2} \left[\mathrm{var}\big(\bar v^{(\varphi)}(\chi)\big)\right]^{1/2} < \frac{m\sigma^2 L \exp(s_t)}{(Z(\chi))^3 \sqrt{d}}. \qquad (19)$$
By the triangle inequality,
$$\left| \langle v_t - \bar v(\chi)\rangle_{\Psi_t}^{T} \left\langle h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} - \left\langle (v_t - \bar v(\chi))^{T} h(\bar v(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} \right|$$
$$= \left| \sum_{\varphi=1}^{d} \left[ \left\langle v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi)\right\rangle_{\Psi_t} \left\langle h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} - \left\langle \big(v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi)\big)\, h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} \right] \right|$$
$$\le \sum_{\varphi=1}^{d} \left| \left\langle v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi)\right\rangle_{\Psi_t} \left\langle h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} - \left\langle \big(v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi)\big)\, h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)}\right\rangle_{\Psi_t} \right|$$
$$= \sum_{\varphi=1}^{d} \left| \mathrm{cov}\!\left( h(\bar v(\chi); y)^{(\varphi)} \frac{\exp(s_t)}{Z(\chi)},\; v_t^{(\varphi)} - \bar v^{(\varphi)}(\chi) \right)\right| < \frac{m\sigma^2 L \exp(s_t)\sqrt{d}}{(Z(\chi))^3} \quad \text{by Eq (19)}$$
$$= \frac{m}{Z(\chi)} \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\,\sigma^2}{Z(\chi)} = \left(1 - \frac{\exp(s_t)}{Z(\chi)}\right) \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\,\sigma^2}{Z(\chi)} \le \frac{L\sqrt{d}\,\sigma^2}{4 Z(\chi)} \le \frac{L\sqrt{d}\,\sigma^2}{4}.$$
The last line is due to Z(χ) > ∑_{w∈χ\{t}} exp(sw) = m ≥ 1. | 1. What is the focus of the paper regarding theoretical insight into attention models?
2. What are the strengths of the proposed approach, particularly in its novel closed-form relationship?
3. What are the weaknesses of the paper, especially regarding its assumptions and limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are some specific questions the reviewer has regarding the paper, such as notation confusion and typos? | Review | Review
This paper provides theoretical insight into the mechanisms by which a simplified attention model trained with gradient descent learns to allocate more mass to relevant words in the input.
In a limited toy setting, the authors derive a closed form relationship between word-score and word embedding norm in a simplified, one layer attention model. The theoretical findings are verified empirically both on the toy task and on a more realistic sentiment classification benchmark. Due to the extreme simplicity of the setting considered, as well as the number of assumptions made, it is unclear to me what to make of these results. In particular, it seems that the setting considered (fixed query attention over bag of word embeddings) is very different from real use cases of attention.
Pros
The closed-form relationship between attention score and embedding norm during SGD training is novel as far as I know
The theoretical results are well justified in experiments: in particular the predicted "SEN" relationship seems to match the prediction.
Cons
Large number of assumptions, the validity of which is unclear in practice: in particular
The assumption that the query vector is (1) a parameter and not a function of the inputs (as in self attention or cross-sentence attention) and (2) is fixed. I don't know of many "real world" attention networks that work this way, after all one of the main appeals of attention is its "content-based" nature
Assumption 1 that the score and embeddings of non-topic words don't change during training. First, this seems like something that could be proven from the earlier assumption that the topic words are updated more frequently. And second it is unclear if it holds for a real task (and a different model where eg. the attention layer attends to higher layers rather than just the embeddings)
Confusing notation makes the paper hard to follow (see remarks for examples)
Unclear takeaway: what does this paper tell us about attention as it is used in practice?
Remarks
5.1: "The “negative effect” cannot last long because the topic word embedding moves along the gradient much faster than the non-topic words due to its high occurrence rate": This is true in the toy example in the paper, but is this the case in practice? For instance in sentiment classifications there are many words to describe sentiment that are infrequent (cue Zipf's law). Moreover, in realistic settings there will be non-topic words which appear very frequently (stop words such as "the", "a" in English).
Lemma 1: while it is true that fixing q doesn't change the capacity of the model, it will definitely change its training dynamics (which is very much the theme of the paper as per the title). How important is it to fix q from this perspective?
A lot of the math would be easier to read if the dependence of some variables (\hat v, Z,...) on a specific sentence was marked explicitly (eg. Z_S instead of Z)
The notation in Lemma 2 was extremely confusing to me, due to the sudden introduction of the bracket notation and the awkward spacing with both equations on the same line. I would recommend at least putting both on separate line, and also reorganizing so that the LHS of the second equation is only ds_i/dt (move the mean to the other side)
In 3. I think using "\mathbf R" for the dictionary is unfortunate (too similar to \mathbb R). Overall I found the separation between topic and non-topic words dictionaries confusing. Why not have a global Vocabulary V, a set of topic words T and refer to the remaining words as V\T?
In 2. "[Hahn and Brunner] show theoretically that the self-attention blocks are severely limited in their computational capabilities when the input sequence is long, which implies that self-attention may not provide the interpretability that one expects.": can you clarify this sentence? Limitation in computational capabilities does not seem to entail limited interpretability in general (see linear models for instance).
Typo in citations in 2.: "Hahn (Hahn, 2020) and Brunner (Brunner et al., 2019)" -> "Hahn (2020) and Brunner et al. (2019)"
Typo in 3. "The training set [...] are" -> "The training set [...] is"
Post Rebuttal
In my review, the main concerns were (1) validity of assumptions, (2) confusing writing/notation and (3) unclear takeaway. The rebuttal appropriately addressed (1), although I am looking forward to the revision to see how this is discussed in the paper itself. I cannot really say anything about any improvements on the writing (2) without seeing the revision, but I am confident that the authors can address most of the issues pointed out by myself and other reviewers. Regarding (3), unclear takeaway, after reading the authors' response as well as the other reviews, my concerns are somewhat assuaged (partly because the assumptions were addressed better), although I am still unsure how or if the results in this paper could be expanded to realistic attention models.
There are additional issues I raised during the discussion (general lack of citations in particular), however this can be fixed fairly easily for the camera ready so I am willing to give the benefit of doubt and raise my score to 6 (borderline accept) |
ICLR | Title
On the Dynamics of Training Attention Models
Abstract
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great success established empirically, the working mechanism of attention has not been investigated at a sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for the discriminative words that the model should attend to, a persisting identity exists relating its embedding and the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed, which validate our theoretical analysis and provide further insights.
1 INTRODUCTION
Attention-based neural networks have been broadly adopted in many natural language models for machine translation (Bahdanau et al., 2014; Luong et al., 2015), sentiment classification (Wang et al., 2016), image caption generation (Xu et al., 2015), and the unsupervised representation learning (Devlin et al., 2019), etc. Particularly in the powerful transformers (Vaswani et al., 2017), attention is its key ingredient.
Despite its great successes established empirically, the working mechanism of attention has not been well understood (see Section 2). This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism. We study the model’s training trajectory to understand why attention can attend to the discriminative words (referred to as the topic words). More specifically, in this task, each sentence is treated as a bag of words, and its class label, or topic, is indicated by a topic word. The model we consider involves a basic attention mechanism, which creates weighting factors to combine the word embedding vectors into a “context vector”; the context vector is then passed to a classifier.
In this setting, we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query, referred to as the “score”, during gradient-descent training. It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration. This relationship suggests the existence of a “synergy” in the amplification of the topic word score and its word embedding; that is, the growths of the two quantities promote each other. This, in turn, allows the topic word embedding to stand out rapidly in the context vector during training. Moreover, when the model takes a fixed linear classifier, this relationship allows rigorous proofs of this “mutual promotion” phenomenon and the convergence of training to the topic words.
Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets. Additional insights are also obtained from these experiments. For example, low-capacity classifiers tend to give stronger training signals to the attention module. The “mutual promotion” effect implied by the discovered relationship can also exhibit itself as “mutual suppression” in the early training phase. Furthermore, in the real-world datasets, where perfect
delimitation of topic and non-topic words does not exist, interesting training dynamics is observed. Due to length constraints, all proofs are presented in Appendix.
2 RELATED WORKS
Since 2019, a series of works have been published to understand the working and behaviour of attention. One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations (Michel et al., 2019; Voita et al., 2019; Jain & Wallace, 2019; Wiegreffe & Pinter, 2019; Serrano & Smith, 2020; Vashishth et al., 2020). Most of these works are empirical in nature, for example, by analyzing the behaviours of a well-trained attention-based model (Clark et al., 2019), or observing the impact of altering the output weights of the attention module or pruning a few heads (Michel et al., 2019; Voita et al., 2019), or a combination of them (Jain & Wallace, 2019; Vashishth et al., 2020). Apart from acquiring insights from experiments, Brunner et al. (2019) and Hahn (2020) show theoretically that the self-attention blocks lacks identifiability, where multiple weight configurations may give equally good end predictions. The non-uniqueness of the attention weights therefore makes the architecture lack interpretability.
As a fully connected neural network with infinite width can be seen as a Gaussian process (Lee et al., 2018), a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers (Yang, 2019; Hron et al., 2020). In this paper, we restrict our study to the more realistic non-asymptotic regime.
3 PROBLEM SETUP
Learning Task To obtain insights into the training dynamics of attention models, we set up a simple topic classification task. Each input sentence contains m non-topic words and one topic word indicating its topic. Note that a topic may have multiple topic words, but a sentence is assumed to include only one of them. Assume that there are J topics that correspond to the mutually exclusive topic word sets T1, T2, · · · , TJ. Let T = ∪_{j=1}^{J} T_j be the set of all topic words. The non-topic words are drawn from a dictionary Θ, which is assumed not to contain any topic word.
The training set Ψ consists of sentence-topic pairs, where each pair (χ, y) is generated by (1) randomly picking a topic y ∈ {1, 2, · · · , J}, and (2) picking a topic word from the set Ty and combining it with m words drawn uniformly at random from Θ to form the sentence (or bag of words) χ. In this task, one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way.
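For illustration, a minimal sketch of this generation procedure (the function name and the specific dictionary and topic sizes below are our own assumptions, not taken from the paper's code):

```python
import random

def make_dataset(num_samples, num_topics=4, words_per_topic=2,
                 dict_size=5000, m=20, seed=0):
    """Generate (bag_of_words, topic) pairs following the procedure described above."""
    rng = random.Random(seed)
    # Reserve the first num_topics * words_per_topic ids as topic words T_1..T_J.
    topic_words = [[j * words_per_topic + k for k in range(words_per_topic)]
                   for j in range(num_topics)]
    first_non_topic = num_topics * words_per_topic
    non_topic_words = range(first_non_topic, first_non_topic + dict_size)  # dictionary Theta

    samples = []
    for _ in range(num_samples):
        y = rng.randrange(num_topics)                      # (1) pick a topic
        t = rng.choice(topic_words[y])                     # (2) pick one of its topic words
        bag = [t] + [rng.choice(non_topic_words) for _ in range(m)]
        rng.shuffle(bag)                                   # order is irrelevant for a bag of words
        samples.append((bag, y))
    return samples

train_set = make_dataset(800)
test_set = make_dataset(200, seed=1)
```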
We will consider the case that |Θ| >> |T|, which implies that a topic word appears much more frequently in the sentences than a non-topic word.
Attention Model For this task, we consider a simple attention mechanism similar to the one proposed by Wang et al. (2016). Each word w is associated with two parameters: an embedding νw ∈ R^d and a key κw ∈ R^{d′}. Based on a global query q ∈ R^{d′}, the context vector of sentence χ is computed by
$$\bar{\nu}(\chi) = \sum_{w\in\chi} \nu_w \, \frac{\exp(q^{T}\kappa_w)}{Z(\chi)}, \quad \text{where } Z(\chi) = \sum_{w'\in\chi} \exp(q^{T}\kappa_{w'}).$$
Then ν̄(χ) is fed into a classifier that predicts the sentence’s topic in terms of a distribution over all topics.¹
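A minimal PyTorch sketch of this attention block (our own re-implementation for illustration; module and variable names are assumptions, not the authors’ released code):

```python
import torch
import torch.nn as nn

class BagOfWordsAttention(nn.Module):
    """Computes the context vector of a bag of word ids via key-query attention."""
    def __init__(self, vocab_size, d=15, d_key=15):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)        # word embeddings nu_w
        self.key = nn.Embedding(vocab_size, d_key)      # word keys kappa_w
        self.query = nn.Parameter(torch.randn(d_key))   # global query q (fixed or trainable)

    def forward(self, word_ids):                        # word_ids: (batch, sentence_len)
        v = self.embed(word_ids)                        # (B, L, d)
        k = self.key(word_ids)                          # (B, L, d_key)
        scores = k @ self.query                         # s_w = q^T kappa_w, shape (B, L)
        weights = torch.softmax(scores, dim=-1)         # exp(s_w) / Z(chi)
        context = (weights.unsqueeze(-1) * v).sum(dim=1)  # context vector, (B, d)
        return context, scores
```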
Denote the loss function by l(χ, y). Our upcoming analysis implies that this attention model, although simple, may capture plenty of insight into the training of more general attention models.
Problem Statement Our objective is to investigate the training dynamics, under gradient descent, of this attention model. In particular, we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training. Moreover, we wish to investigate, beyond this setup, how the model is optimized when there is no clear delimitation between topic and non-topic words, as in real-world data.
1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 but this condition is not required for most results presented in Sections 4 and 5. More discussions are given in Appendix A in this regard.
4 THEORETICAL ANALYSIS
It is common to fix some parameters when we train a model with limited resources. Moreover, fixing the query costs nothing in terms of expressiveness:
Lemma 1. Assume q ≠ 0 when initialized. Fixing it does not affect the attention block’s capacity.
Thus, our upcoming discussion focuses on the case in which the query is fixed. Doing so also allows us to establish a closed-form expression connecting the word’s embedding and the inner product of its key and the query. In Appendix B, extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present.
For a topic word t, let Ψt denote the training samples involving it. Then, by gradient descent,
$$\Delta\nu_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} \nabla_{\bar{\nu}(\chi)} l(\chi, y)\, \frac{\exp(q^{T}\kappa_t)}{Z(\chi)}, \qquad (1)$$
$$\Delta\kappa_t = \frac{\tau}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} q\, (\nu_t - \bar{\nu}(\chi))^{T}\, \nabla_{\bar{\nu}(\chi)} l(\chi, y)\, \frac{\exp(q^{T}\kappa_t)}{Z(\chi)}, \qquad (2)$$
where τ denotes the learning rate. As it will turn out, an important quantity in this setting is the inner product q^{T}κ_w of the query q and the key κ_w, which we denote by s_w and refer to as the score of the word w.
Denoting v_w = ||q||_2 ν_w, η = τ||q||_2, v̄(χ) = Σ_{w∈χ} (exp(s_w)/Z(χ)) v_w, and h(v̄(χ); y) = ∇_{ν̄(χ)} l(χ, y), for a topic word t the dynamics simplify to
$$\Delta v_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (3)$$
$$\Delta s_t = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^{T}\, h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (4)$$
In the rest of the paper, whenever we refer to the embedding of word t, we actually mean vt not νt.
Our analysis assumes the word embeddings are sampled i.i.d. from a distribution with mean zero and variance σ²/d, where σ² is assumed close to zero. The word keys and the query are also sampled from zero-mean distributions with a possibly different variance. We assume that this variance is so small that the initial word scores are approximately zero. This assumption on the initial configuration corresponds to the attention model starting as a word-averaging model, and allows us to investigate how the model deviates from this initial setting with training. We also assume the derivative h(v̄(χ); y) of the loss l is Lipschitz continuous in v̄(χ) throughout training. Further, the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that, with a sufficient number of training samples, the occurrence rate of a topic word is significantly higher than that of the non-topic ones. This then justifies the following assumption, which we use throughout our analysis.
Assumption 1. The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words.
Hence, our upcoming analysis will treat the scores and embeddings of the non-topic words as constants. Assumption 1 will be validated by experimental results presented in Section 5.
By selecting a sufficiently small η, we can take the gradient-descent updates in Eq (3) and Eq (4) to their continuous-time limit and get²
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \qquad (5)$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^{T}\, h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}. \qquad (6)$$
²Conversely, Eq (3) is a discretized approximation of Eq (5): v_t(t+1) − v_t(t) = ∫_t^{t+1} (dv_t(t′)/dt′) dt′ ≈ 1 · dv_t(t)/dt = Δv_t(t). The approximation becomes accurate if v_t(t + 1) is close to v_t(t), which can be achieved by choosing a sufficiently small η. Likewise, Eq (4) is a discretized approximation of Eq (6).
We can then characterize the update of the score and the embedding of a topic word as a continuous-time dynamical system, as stated in Lemma 2. The same technique has been used to analyze the training of neural networks in other contexts (Saxe et al., 2014; Greydanus et al., 2019).
Lemma 2. For sufficiently small η and σ², the score s_t and embedding v_t of topic word t satisfy
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \qquad (7)$$
$$\frac{ds_t}{dt} = \left[ \big( v_t - \langle \bar{v}(\chi \setminus t) \rangle_{\Psi_t} \big)^{T} \frac{dv_t}{dt} \right] \left\langle \frac{\exp(s_t) + Z(\chi \setminus t)}{Z(\chi \setminus t)} \right\rangle_{\Psi_t}^{-1}, \qquad (8)$$
where Z(χ \ t) = Σ_{w∈χ\{t}} exp(s_w), v̄(χ \ t) = Σ_{w∈χ\{t}} v_w exp(s_w)/Z(χ \ t), and ⟨·⟩_{Ψ_t} denotes taking the sample mean over the set Ψ_t.
Eq (7) implies that the speed of moving v_t along the direction of ⟨h(v̄(χ); y)⟩_{Ψ_t} is controlled by the attention weight exp(s_t)/Z(χ). Eq (8) shows that s_t increases if and only if v_t has a greater projection on ⟨h(v̄(χ); y)⟩_{Ψ_t} than the weighted average of the non-topic word counterparts. Consider a simplified case where ⟨h(v̄(χ); y)⟩_{Ψ_t} is fixed. Since the change of v_t is much faster than that of the non-topic word counterparts, v_t will have a larger projection on ⟨h(v̄(χ); y)⟩_{Ψ_t} after a few epochs of training. Then s_t increases as well as its attention weight, which in turn speeds up the extension of the embedding v_t. This observation reveals a mutual enhancement effect between the score increment and the embedding elongation. In fact, such an effect exists in general, as stated in the theorem below, irrespective of whether ⟨h(v̄(χ); y)⟩_{Ψ_t} is fixed.
Theorem 1. In the setting of Lemma 2, from epoch t₀ to t₁, the topic word score s_t and its embedding v_t satisfy
$$\left[ s_t(t) + \exp(s_t(t)) \left\langle \frac{1}{Z(\chi \setminus t)} \right\rangle_{\Psi_t} \right]_{t_0}^{t_1} = \left[ \frac{1}{2}\, \| v_t(t) - \langle \bar{v}(\chi \setminus t) \rangle_{\Psi_t} \|_2^2 \right]_{t_0}^{t_1}. \qquad (9)$$
Following from Lemma 2, this theorem implies a positive relationship between the topic word score and the distance between v_t and the non-topic word embedding average ⟨v̄(χ \ t)⟩_{Ψ_t}. Remarkably, this result makes no reference to ⟨h(v̄(χ); y)⟩_{Ψ_t} and is hence independent of it. This implies the identity in Eq (9) holds irrespective of the choice and setting of the classifier. Theorem 1 further implies a score and embedding norm (“SEN” in short) relationship for the topic words:
Corollary 1. In the context of Theorem 1, by setting t₀ = 0 and t₁ = t, Eq (9) is reduced to
$$\| v_t(t) \|_2 = \sqrt{2\left( s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m} \right)}. \qquad (10)$$
The corollary indicates that ||vt(t)||2 is monotonically increasing with st(t). So, st increases if and only if the point vt departs from its initial location. That is, if the norm of the topic word embedding increases, it will be attended to. This result is independent of the configuration of all other network layers. Thus, if 〈h(v̄(χ); y)〉Ψt has a gradient field that pushes vt away from its original location, the topic word is expected to be attended to. This statement can be made precise, as in Theorem 2, when the model uses a linear classifier.
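The right-hand side of Eq (10) is straightforward to evaluate numerically, e.g. to overlay the theoretical curve on measured embedding norms; a small helper sketch (function and variable names are ours):

```python
import numpy as np

def theoretical_embedding_norm(s_t, m):
    """||v_t||_2 predicted by Eq (10) from the topic word score s_t."""
    return np.sqrt(2.0 * (s_t + np.exp(s_t) / m - 1.0 / m))

scores = np.linspace(0.0, 5.0, 50)
norms = theoretical_embedding_norm(scores, m=20)   # m = number of non-topic words per sentence
```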
Theorem 2. Assume the model has a fixed classifier in the form c(v̄(χ)) = softmax(U^T v̄(χ)), where the columns of U are linearly independent, and the model is trained using gradient descent with the cross-entropy loss. As training proceeds, the model will attend to the topic word in every input sentence and have its training loss approach zero.
It is notable that the theorem holds broadly for any arbitrary fixed linear classifier (subject to the mild linear-independence constraint on its parameter U). Additionally, we anticipate that this result holds for a much wider family of classifiers, including trainable and even nonlinear ones. However, a rigorous proof appears difficult to obtain in such settings, and we instead corroborate this claim in an experimental study in Section 5.
To sum up, in this section, we have shown two main results: (a) there is a closed-form positive relationship, the SEN relationship, between the topic word score and its embedding norm, which is independent of the configuration of the classifier. (b) the model, equipped with a fixed linear classifier stated in Theorem 2, can be trained to have all topic words attended to.
5 EXPERIMENTAL STUDY
In this section, we first test our model on an artificial dataset generated through the procedure introduced in Section 3. The test corroborates our theoretical results and validates their assumptions. Our test results suggest that the attention mechanism introduces a synergy between the embedding and the score of topic words.
Another experiment is performed on the real datasets SST2 and SST5 (Socher et al., 2013). The experimental results suggest that the SEN relationship of topic words holds at least in the initial training stages. As training proceeds, some words appear to deviate from the theoretical trajectories. Further analysis of this behaviour provides additional insights into the attention model’s training dynamics on real-world datasets, which often possess a much more complex structure as well as rich noise. We performed all experiments using PyTorch (Paszke et al., 2017).
We performed our experiments on three models, Attn-FC, Attn-TC and Attn-TL, having the same attention block but different classifiers. The first two have the classifier of the form c(v̄(χ)) = softmax(U^T v̄(χ)) and the last of the form c(v̄(χ)) = softmax(U_2^T ReLU(U_1^T v̄(χ) + b_1) + b_2). Except that the U in Attn-FC is fixed, all other parameters of the three models are trainable and optimized using the cross-entropy loss.
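A sketch of the three classifier heads on top of the shared attention block (PyTorch; the sizes and the orthogonal initialization mirror the synthetic setup described below and are otherwise placeholders):

```python
import torch
import torch.nn as nn

d, num_topics, hidden = 15, 4, 10    # placeholder sizes

# Attn-FC: fixed linear head. nn.Linear stores W = U^T, so orthonormal rows of W
# correspond to orthonormal (hence linearly independent) columns of U.
fc_head = nn.Linear(d, num_topics, bias=False)
nn.init.orthogonal_(fc_head.weight)
for p in fc_head.parameters():
    p.requires_grad = False

# Attn-TC: the same linear head, but trainable.
tc_head = nn.Linear(d, num_topics, bias=False)

# Attn-TL: trainable two-layer head with a ReLU in between.
tl_head = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, num_topics))

def predict(head, context):
    """Topic distribution; during training one would apply CrossEntropyLoss to the logits."""
    return torch.softmax(head(context), dim=-1)
```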
Since a real-world dataset does not have a topic word as the sentence-topic indicator, we introduce a word “topic purity” measurement to facilitate our discussion of the experiments performed on SST2. Let δ⁺(w) and δ⁻(w) respectively denote the portions of positive and negative sentences among all training samples containing word w. Then the topic purity of w is δ(w) = |δ⁺(w) − δ⁻(w)|. If δ(w) = 1, w is either a purely positive or purely negative topic word. If δ(w) = 0, then δ⁺(w) = δ⁻(w) = 0.5, which implies w has a completely random topic correspondence.
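A small sketch of how δ(w) could be computed from a binary-labelled training set (function and variable names are ours):

```python
from collections import defaultdict

def topic_purity(samples):
    """samples: iterable of (list_of_words, label) pairs with label 1 = positive, 0 = negative."""
    pos, total = defaultdict(int), defaultdict(int)
    for words, label in samples:
        for w in set(words):                 # count each word once per sentence
            total[w] += 1
            pos[w] += int(label == 1)
    purity = {}
    for w, n in total.items():
        delta_pos = pos[w] / n               # fraction of positive sentences containing w
        purity[w] = abs(delta_pos - (1.0 - delta_pos))
    return purity
```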
5.1 EXPERIMENTS ON SYNTHETIC DATASETS
The artificial dataset, consisting of 800 training and 200 test samples, is generated through the procedure introduced in Section 3. The dataset has four topics, and each contains two topic words. There are 20 non-topic words per sentence and the non-topic word dictionary size M = 5,000.
Our experiments use the same embedding dimension of 15 for all three models. Regarding the classifiers, Attn-FC and Attn-TC adopt U ∈ R^{15×4}, while Attn-TL takes U_1 ∈ R^{15×10}, b_1 ∈ R^{10}, U_2 ∈ R^{10×4} and b_2 ∈ R^4. For the validation of Theorem 2, the columns of U in Attn-FC are set to be orthonormal and thus linearly independent. Unless otherwise stated, the scores are set to zero and the embeddings are initialized from a normal distribution with mean zero and variance σ²/d = 10⁻⁶. We trained the models using gradient descent with learning rate η = 0.1 for 5K epochs before measuring their prediction accuracy on the test samples. When training is completed, all three models achieve a training loss close to zero and 100.0% test accuracy, which implies the trained models perfectly explain the training set’s variations and generalize well to the test set.
Verification of Assumption 1 and validation of Corollary 1 and Theorem 2. We repeated the experiments for five runs and plotted the empirical score distributions of the non-topic and topic words of the three well-trained models with their 95% confidence intervals in the first two graphs of Fig 1. Compared to the topic words, the scores of the non-topic words are nearly unchanged throughout the entire training process. Likewise, the next two plots show the embedding norms of the non-topic words are nearly constant, too. This implies Assumption 1 indeed holds. Fig 2 plots the empirical and the theoretical SEN curves of a randomly picked topic word for the three models, where the theoretical curve has the expression stated in Eq (10). The coincidence of the empirical and the theoretical curves in all three models validates the SEN relationship stated in Eq (10) and
its independence of the later layers.³ Moreover, Fig 1 (left two) shows the scores of the topic words exceed their non-topic word counterparts by two orders of magnitude, which implies the topic words are attended to in a well-trained model. As we reported earlier, Attn-FC reaches a training loss of roughly zero when training is completed, which confirms Theorem 2.
Lower-capacity classifiers result in stronger attention effects. The comparison among the SEN distributions of the topic words in the three models implies that a lower-capacity classifier leads to a greater topic word SEN, i.e., a more drastic attention decision. This happens because a classifier of larger capacity can explain more variations of the sample distribution and has more freedom to accommodate and absorb the correcting gradient signals. As a result, the attention layer receives a weaker gradient on average, which makes the embeddings of the topic words extend less from the original point. Thus, as implied by Eq (10), the magnitudes of the scores of the topic words are dampened, and therefore a weaker attention effect is to be expected. This observation hints that if we know attending to the right words can explain most of the variations in the sample distribution, we should consider a low-capacity classifier (or later layers in general). Alternatively, we may also freeze the classifier in the initial stage of training, forcing the attention layer to explain more variations. Remarkably, all these modifications do not affect the relationship stated in Eq (10).
Synergy between score growth and embedding elongation in topic words. Eq (10) implies a positive SEN relationship for the topic word. That is, a larger topic word embedding norm results in a larger score (and thus a larger attention weight), which in turn makes the embedding extend faster. To corroborate this claim, we performed an ablation study by considering two variants of Attn-TC. The first has the scores frozen (referred to as Attn-TC-KF) and the second has the embeddings fixed (referred to as Attn-TC-EF). In this experiment, the embeddings of the models are initialized from a normal distribution of mean zero and variance σ²/d = 0.1. We trained all three models by gradient descent for 60K epochs with learning rate η = 0.1, which is sufficient for all three models to fully converge. All three trained models reached 100% test accuracy.
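In PyTorch, the two ablated variants can be obtained simply by switching off gradients on the relevant embedding tables; a sketch assuming the BagOfWordsAttention module from the earlier snippet:

```python
def make_variant(model, variant):
    """variant: 'full' (Attn-TC), 'KF' (scores frozen), or 'EF' (embeddings frozen)."""
    if variant == "KF":
        # With the query fixed, freezing the keys freezes every score s_w = q^T kappa_w.
        model.key.weight.requires_grad = False
    elif variant == "EF":
        model.embed.weight.requires_grad = False
    return model
```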
³In Appendix B, experiments with trainable queries are also implemented. The results indicate that the trainability of the query does not affect the positive SEN relationship. Besides, the query-fixed model has very similar training dynamics to the one with a trainable query and a large initial norm.
The first three graphs of Fig 3 describe the evolution of the three models in the first 3K training epochs. For a randomly picked topic word, the first plot shows that its score in Attn-TC grows faster than the one in Attn-TC-EF. Note that the score in Attn-TC-EF finally surpasses the one in Attn-TC because Attn-TC has already converged at around 1K epochs. Likewise, the word embedding norm in Attn-TC increases more rapidly than the one in Attn-TC-KF before Attn-TC converges. These observations imply that attention introduces a mutual enhancement effect between training the topic word’s score and its embedding, which makes Attn-TC enjoy the fastest training loss drop, as shown in the third plot.
The mutual enhancement can become mutual diminution in the early training stage if the initial embedding of the topic word has a negative projection on the direction along which it will move. This effect can be precisely characterized by Eq (8). Assume the embedding of a topic word is initialized to have a smaller projection on the gradient, passed from the classifier, than the average of the non-topic words. The reversed order of the projections makes the score of the topic word decrease, as its embedding has a “negative effect” on the training loss compared to the average of the non-topic word embeddings. This will, in turn, impede the elongation of the embedding vector or even make it shrink (see the last two plots of Fig 3). The “negative effect” cannot last long because the topic word embedding moves along the gradient much faster than the non-topic words due to its high occurrence rate in the training samples. By Eq (8) again, ds_t/dt will eventually become positive. That is, the score of the topic word starts to increase and its attention weight surpasses that of the word-averaging model (see the second last plot of Fig 3). Then, we start to observe the mutual enhancement effect, which is indicated in the last plot: the growth speed of Attn-TC’s embedding norm exceeds Attn-TC-KF’s from around the 370-th epoch onward.
5.2 EXPERIMENTS ON SST2 AND SST5
The second part of the experiments is performed on the datasets SST2 and SST5, which contain movie comments and ratings (positive or negative in SST2 and one to five stars in SST5). For simplicity, we limit our discussion to Attn-FC and Attn-TC, using the same configurations as in our previous experiments except that the embedding dimension is set to 200. Note that our goal is not to find a state-of-the-art algorithm but to verify our theoretical results and further investigate how an attention-based network works.
For both SST2 and SST5, we trained the two models by gradient descent with learning rate η = 0.1 combined with the early stopping technique (Prechelt, 2012) with patience 100. As PyTorch requires equal-length sentences in a batch, we pad all sentences to the same length and set the score of the padding symbol to negative infinity. Under this configuration, the trained Attn-FC and Attn-TC reached 76.68% and 79.59% test accuracy on SST2 and 38.49% and 40.53% on SST5.
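Setting the padding score to negative infinity removes padded positions from the softmax; a sketch of this masking step (names are ours):

```python
import torch

def masked_attention_weights(scores, word_ids, pad_id=0):
    """scores: (batch, len) raw scores s_w; padded positions receive zero attention weight."""
    scores = scores.masked_fill(word_ids == pad_id, float("-inf"))
    return torch.softmax(scores, dim=-1)
```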
Validation of Corollary 1. As the true topic words are unknown in a real dataset, we checked the words with the fifty largest scores after training is completed. We observed that most of these words have SEN curves close to our theoretical prediction. We picked two words for each model-dataset combination and plotted their curves together with the theoretical counterparts in Fig 4.
The competition of two topic word candidates of various occurrence rates and topic purity. To better understand how the attention block works, we investigated the cases where the empirical and the theoretical SEN curves disagree. We limit our discussion to the model trained on SST2.
For both Attn-FC and Attn-TC, we noticed that there are mainly two types of deviations, as shown in the first two columns of Fig 5. In the first one, the theoretical curve overestimates the score growth of powerful in terms of the embedding norm, while best, shown in the second column, experiences a score drop combined with a failed theoretical prediction in the final training stage. These two types of disagreement in fact occur in pairs, caused by a pair of words in which one frequently appears in the training samples but has a low topic purity, while the other has the opposite properties.
Regarding the (powerful, best) pair, best appears in 128 training samples, 103 of which are positive. In comparison, powerful appears in 36 training samples, all of which are positive. The large difference in the number of samples makes the embedding norm of best extend much faster than that of powerful in the initial training process (Fig 5, third column). As the positive SEN relationship implies, the quicker embedding norm increase of best results in its faster score growth. Therefore, best will be more attended to (Fig 5, last column), which further accelerates its SEN increase (this is the synergy between the embedding and the score demonstrated in the ablation study). This process does not stop until the gradient from the classifier diminishes because of best’s low topic purity (which is similar to the case when label smoothing (Szegedy et al., 2016) is applied to alleviate overfitting). In contrast, powerful is less trained initially due to its low occurrence rate. But its high topic purity makes the direction of the gradient stable, along which its embedding will steadily elongate. The elongation will eventually let the embedding have a greater projection on the gradient vector than the average of the other words appearing in the same sentence. Thus, the score of powerful starts to increase, as shown by Eq (8) and plotted in the second last column of Fig 5. In contrast, as the gradient magnitude drops, the embedding of best extends at a decreasing speed; and its projection on the gradient, passed from the classifier, will finally be surpassed by the words co-occurring with it but having a higher topic purity (like powerful). Thus, its score starts to drop eventually.
The dynamics of topic purity of attended words. The analysis of the inconsistency between the empirical and theoretical SEN curves hints that the disagreements are strongly related to the word’s topic purity and its occurrence rate. To better characterize their dynamics, for a word w, let A_w(t) be the list, at epoch t, that records the attended word (i.e., the word with the largest score) in each sentence containing w. If multiple words in a sentence have the same score, we randomly pick one of them. Note that A_w(t) may contain repeated words, as a word can be attended to in multiple sentences. We selected w to be the words having the largest five scores in the well-trained Attn-FC and Attn-TC, respectively. At various epochs t, Fig 6 plots how the average topic purity δ(w′) evolves for w′ ∈ A_w(t), as well as the average number of occurrences in the training samples. For both models, they initially attend to words that mostly have low topic purity and a high occurrence rate. As the training proceeds, the average topic purity of the attended words increases while the average occurrence rate drops. At the end of the training, almost all the attended words have a close-to-one topic purity. Fig 7 shows the evolution of the average topic purity and the average occurrence rate of the attended words over the entire training set. While a similar changing pattern can be observed, the average topic purity is lower than the one presented in Fig 6 when training is completed. We argue that this happens because some sentences do not have any high-topic-purity words, or their high-topic-purity words have too low an occurrence rate to be sufficiently trained.
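A sketch of how A_w(t) can be collected at a given epoch, with ties broken at random as described above (it reuses the BagOfWordsAttention module sketched earlier; names are ours):

```python
import random
import torch

def attended_words(model, sentences, w):
    """Return A_w: the attended word of every sentence containing w."""
    attended = []
    for words in sentences:
        if w not in words:
            continue
        _, scores = model(torch.tensor([words]))        # scores: (1, len)
        top = (scores[0] == scores[0].max()).nonzero().flatten().tolist()
        attended.append(words[random.choice(top)])      # random tie-breaking
    return attended
```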
6 CONCLUSION
This paper investigated the dynamics of training a series of attention-based bag-of-words classifiers on a simple artificial topic classification task. We have shown a persisting closed-form positive SEN relationship for the words to which the model should attend. This result is independent of the configurations of the later layers. Through this result, we have proved that the model must converge to attending to the topic word, with the training loss close to zero, if the output of the attention layer is fed into a fixed linear classifier. A list of experiments has confirmed these results.
The experimental results indicate that the attention block tends to make a more drastic decision if its later layers have a lower capacity. This hints that the classifier’s limited capacity may help if “selecting” the right word explains most of the variations in the training samples. An ablation study shows a synergy between the topic word score’s growth and its embedding elongation, leading to a faster training loss drop than the fixed-score and fixed-embedding variants. Besides, we have shown that this “mutual promotion” effect can also exhibit itself as “mutual suppression” in the initial training stage.
We investigated the competition of two topic word candidates with large differences in the topic purity and the occurrence rate in the training samples. The words of a higher occurrence rate but possibly low topic purity are more likely to be attended to initially. However, as the training proceeds, the attended words are gradually replaced by those of higher topic purity.
A THE RESULTS OF THE MODEL IN WHICH THE ATTENTION BLOCK ATTENDS TO A CNN LAYER
The main text focuses on a model in which the attention block directly attends to word embeddings. Such a design simplifies our analysis but should not be considered a pre-condition for our results to remain valid. In particular, the positive SEN relationship generally holds regardless of the other components of the network, which can be justified as follows. There are two correlated sources of gradient signals: one back-propagates from the score to update the key and query, the other back-propagates from the classifier loss to update the word embedding. The correlation governs the positive relationship between score and embedding norm. Although the integration of other modules makes the analysis harder, the relationship should persist. Hence, all the results depending on this relationship remain valid.
We empirically verify our claims by implementing an experiment similar to the one discussed in the main text. We modified Attn-TC (named Attn-TC-CNN) by adding two parallel CNN layers to respectively process the word embeddings and the keys before feeding them into the attention block. Then, we construct a data generation process analogous to the one introduced in Section 3. Finally, we empirically show that Assumption 1 and the positive SEN relationship still hold.
The Attn-TC-CNN has the same configuration as Attn-TC (introduced in Section 5.1) except that we added two parallel CNN layers to preprocess the word embeddings and the keys. The CNN layer processing the word embeddings has a kernel of size d × 2 and stride 1. We used d kernels to keep the word embedding dimension unchanged. So, for two consecutive words in a sentence, the CNN mixes their embeddings and produces a new one. Given that the sentence has m + 1 words, the input embedding matrix has shape d × (m + 1) and the output has shape d × m. Likewise, the CNN layer processing the keys has kernel size d′ × 2 and stride 1, with d′ kernels in total.
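A sketch of the two parallel convolutions (each mixes every two consecutive positions; d, d′ and the batch size are placeholders):

```python
import torch
import torch.nn as nn

d, d_key, sentence_len, batch = 15, 15, 10, 8

# d output channels, kernel width 2, stride 1: sequence length m+1 -> m
conv_embed = nn.Conv1d(in_channels=d, out_channels=d, kernel_size=2, stride=1)
conv_key = nn.Conv1d(in_channels=d_key, out_channels=d_key, kernel_size=2, stride=1)

embeddings = torch.randn(batch, d, sentence_len)      # (B, d, m+1)
keys = torch.randn(batch, d_key, sentence_len)        # (B, d', m+1)

pair_embeddings = conv_embed(embeddings)              # (B, d, m), word-pair embeddings
pair_keys = conv_key(keys)                            # (B, d', m), word-pair keys
```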
Consider a classification problem containing two topics A and B. The sentences of the two topics are generated by two Markov chains, respectively, which are constructed as follows:
1. Let Li (i = A,B) be two mutually exclusive sets of ordered word pairs. The word pairs do not contain repeating words.
2. For i = A,B:
(a) Initialize a Markov chain (MCi) over the words in the dictionary such that, from a word, there is a uniform probability of moving to any word (including itself) in one step.
(b) Group the word pairs in Li according to their leading words. For each group, let s denote the shared leading word and ei (i = 1, 2, · · · , n) the second words. Set the probability of moving from s to ei in one step to be 1/n, and the probabilities of moving to the rest to zero.
We call the word pairs in Li (i = A,B) the topic word pairs. For each pair of words, a new embedding and a new key are generated by feeding their original embeddings and keys into the CNNs. We refer to the new embedding as the topic word-pair embedding and the new key as the topic word-pair key.
Likewise, for any other word pairs, the newly generated embeddings and keys are referred to as the non-topic word-pair embeddings and keys.
Assume the training dataset contains sentences of length m + 1. We generate it by repeating the following procedure:
1. uniformly sample a topic i ∈ {A,B};
2. uniformly sample a word pair from Li and let its first word be the starting word;
3. sample another m words by running the Markov process m times.
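A sketch of this generation process, representing each chain as a per-word list of possible successors (the vocabulary size and the particular topic word pairs below are illustrative assumptions):

```python
import random

def build_chain(vocab_size, topic_pairs):
    """Uniform chain, except that each leading topic word moves only to its listed successors."""
    successors = {w: list(range(vocab_size)) for w in range(vocab_size)}
    lead_to_seconds = {}
    for s, e in topic_pairs:
        lead_to_seconds.setdefault(s, []).append(e)
    successors.update(lead_to_seconds)
    return successors

def sample_sentence(topic_pairs, chain, m, rng):
    start, _ = rng.choice(topic_pairs)          # step 2: the pair's first word starts the sentence
    sentence = [start]
    for _ in range(m):                          # step 3: run the Markov process m times
        sentence.append(rng.choice(chain[sentence[-1]]))
    return sentence

rng = random.Random(0)
vocab_size, m = 200, 9                          # sentences of length m + 1 = 10
pairs = {"A": [(3, 17), (42, 5)], "B": [(88, 120), (150, 7)]}   # illustrative L_A, L_B
chains = {t: build_chain(vocab_size, p) for t, p in pairs.items()}

topic = rng.choice(["A", "B"])                  # step 1: pick a topic
sentence = sample_sentence(pairs[topic], chains[topic], m, rng)
```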
In our experiments, the word dictionary has 200 words, and so there are 40,000 word pairs. The sentence length is set to 10. For i = A,B, each Li contains two randomly picked word pairs as the topic word pairs. Note that we have ensured that LA and LB are mutually exclusive. We used 4,000 samples for training and 1,000 for testing. The model was trained for 3,000 epochs and achieved 99.89% test accuracy.
We repeated the experiments for ten runs and plotted the distributions, along with the 95% confidence intervals, of the scores and the embedding norms of the topic and the non-topic word pairs in Fig 8. Note that the word-pair embeddings and keys are generated from the CNN layers, so the initial embeddings and scores are not very close to the origin. To facilitate the verification of Assumption 1, we centered them by subtracting their initial values before plotting their distributions. From Fig 8, we observe that the non-topic word-pair scores and embeddings are nearly unchanged in comparison to their topic counterparts. Therefore, we have shown that Assumption 1 largely holds even when we add CNN layers to process the word embeddings and the keys before feeding them into the attention block.
Randomly picking a topic word pair, we plotted its empirical and theoretical SEN curves over ten runs in Fig 9. The figure shows that the positive SEN relationship holds even if the attention layer attends to other layers instead of the word embeddings directly. Therefore, all the results due to this relationship remain valid.
B DISCUSSION AND EXPERIMENTAL RESULTS OF THE MODELS WITH A TRAINABLE QUERY
In Section 4, we assumed a fixed and non-trainable query, which allows the derivation of a clean closed-form “SEN” relation. But it is worth emphasizing that the positive relationship between the score and the embedding norm in fact exists regardless of the trainability of the query. As we have mentioned in Appendix A, there are two correlated sources of gradient signals: one back-propagates from the score to update the key and query, the other back-propagates from the classifier loss to update the word embedding. This correlation governs the positive relationship between score and embedding norm. Whether the query is trainable does not alter the existence of this correlation, although a trainable query makes the analysis more difficult. In particular, when the query norm is large, the update of the query is relatively negligible; hence, the training behaviour is similar to having a fixed query.
To verify our claims, we reimplemented the experiments introduced in Section 5.1 with the same configurations except that the query is trainable. In Fig 10, we plot the empirical and the theoretical SEN curves of a randomly picked topic word obtained by training Attn-FC, Attn-TC and Attn-TL with a trainable query. We initialize the entries of the query vector by a normal distribution N(0, σ²). From left to right, σ² increases as well as the initial query norm. We observe that regardless of how the query is initialized, the positive SEN relationship is always preserved. Moreover, as the initial norm increases, the empirical curve approaches the theoretical curve having the expression in Eq (10). As we have discussed, this asymptotic behaviour arises because a larger initial query norm makes the query's change during training negligible relative to its norm.
C PROOFS OF THE RESULTS IN SECTION 4
Proof of Lemma 1. Assume there are in total N words in the dictionary (including both the topic and the non-topic words). Sample the keys of the N words arbitrarily to generate K ∈ R^{d′×N} with the key vectors as its columns. Randomly pick q ∈ R^{d′} as a query. To prove the lemma, it is sufficient to show that for any non-zero q̃ ∈ R^{d′}, there exists K̃ ∈ R^{d′×N} such that q^T K = q̃^T K̃. Since q̃ ≠ 0, without loss of generality, assume its first entry q̃₁ is non-zero. Let S = [s₁, s₂, · · · , s_N] = q^T K. For i = 1, 2, · · · , N, let the i-th column of K̃ be [s_i/q̃₁, 0, · · · , 0]^T. Then we can easily check that q^T K = q̃^T K̃.
Proof of Lemma 2. Picking a sufficiently small η, a continuous-time limit can be taken to obtain the dynamics
$$\frac{dv_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}, \qquad (11)$$
$$\frac{ds_t}{dt} = \frac{\eta}{|\Psi|} \sum_{(\chi,y)\in\Psi_t} (v_t - \bar{v}(\chi))^{T}\, h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)}, \qquad (12)$$
which are equivalent to
$$\frac{dv_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}, \qquad (13)$$
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar{v}(\chi))^{T}\, h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}. \qquad (14)$$
As h(v̄(χ); y) is assumed Lipschitz continuous, we have⁴
$$\left| \langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T} \left\langle h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} - \left\langle (v_t - \bar{v}(\chi))^{T} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \right| < \frac{L\sqrt{d}\sigma^2}{4}, \qquad (15)$$
where L is the Lipschitz constant. Choosing a small enough σ² so that L√dσ² is close to zero, we have
$$\langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T} \left\langle h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \approx \left\langle (v_t - \bar{v}(\chi))^{T} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t}.$$
Then, combining it with Eq (14) yields
$$\frac{ds_t}{dt} = \frac{\eta|\Psi_t|}{|\Psi|} \left\langle (v_t - \bar{v}(\chi))^{T} h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} \approx \langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T}\, \frac{\eta|\Psi_t|}{|\Psi|} \left\langle h(\bar{v}(\chi); y)\, \frac{\exp(s_t)}{Z(\chi)} \right\rangle_{\Psi_t} = \langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}. \qquad (16)$$
4The derivation is given in Appendix D
Remarkably, the relation stated in Eq (16) is independent of h(v̄(χ); y). So the relationship does not depend on the architecture of the classifier (or later layers in general). Expand Eq (16),
$$\frac{ds_t}{dt} = \langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt} = \left\langle v_t - \Big( \frac{\exp(s_t)}{Z(\chi)} v_t + \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)} v_w \Big) \right\rangle_{\Psi_t}^{T} \frac{dv_t}{dt}$$
$$= \left\langle \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)(v_t - v_w)}{Z(\chi)} \right\rangle_{\Psi_t}^{T} \frac{dv_t}{dt} = \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi)} \right\rangle_{\Psi_t} \langle v_t - v_w \rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}$$
$$\approx \sum_{w\in\chi\setminus\{t\}} \frac{\langle \exp(s_w)/Z(\chi\setminus t) \rangle_{\Psi_t}}{\langle Z(\chi)/Z(\chi\setminus t) \rangle_{\Psi_t}}\, \langle v_t - v_w \rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}.$$
The second last step is due to the independence between the score and embedding initialization, while the approximation in the last step is made as we assume all the scores of the non-topic words remain the same during the entire training process. Rearranging the equation yields
$$\frac{ds_t}{dt} = \sum_{w\in\chi\setminus\{t\}} \left\langle \frac{\exp(s_w)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \langle v_t - v_w \rangle_{\Psi_t}^{T}\, \frac{dv_t}{dt}\, \left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}^{-1} = \big( v_t - \langle \bar{v}(\chi\setminus t) \rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt}\, \left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t}^{-1},$$
where v̄(χ \ t) = Σ_{w∈χ\{t}} v_w exp(s_w)/Z(χ \ t).
Proof of Theorem 1. By Lemma 2, we have
$$\left\langle \frac{\exp(s_t) + Z(\chi\setminus t)}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \frac{ds_t}{dt} = \big( v_t - \langle \bar{v}(\chi\setminus t) \rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt},$$
which is
$$\left( 1 + \exp(s_t) \left\langle \frac{1}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \right) \frac{ds_t}{dt} = \big( v_t - \langle \bar{v}(\chi\setminus t) \rangle_{\Psi_t} \big)^{T}\, \frac{dv_t}{dt}. \qquad (17)$$
By the fundamental theorem of calculus, integrating both sides from t = t₀ to t₁ yields
$$\left[ s_t + \exp(s_t) \left\langle \frac{1}{Z(\chi\setminus t)} \right\rangle_{\Psi_t} \right]_{t_0}^{t_1} = \left[ \frac{1}{2} \| v_t - \langle \bar{v}(\chi\setminus t) \rangle_{\Psi_t} \|_2^2 \right]_{t_0}^{t_1}. \qquad (18)$$
Proof of Corollary 1. Since the scores and the embeddings of the non-topic words are considered constant, we have ⟨1/Z(χ\t)⟩_{Ψ_t} = 1/m and ⟨v̄(χ \ t)⟩_{Ψ_t} = 0. As v_t is initialized with mean zero and a very small variance, ||v_t(0)||₂² ≈ 0. Then, Eq (9) can be written as
$$\| v_t(t) \|_2 = \sqrt{2\left( s_t(t) + \frac{\exp s_t(t)}{m} - \frac{1}{m} \right)}.$$
Proof of Theorem 2 (sketch). Without loss of generality, pick topic word t and assume it corresponds to the ϕ-th topic. We prove the theorem by showing that, as the number of epochs increases, for any sentence χ in Ψ_t, s_t → ∞ and softmax(U^T v̄(χ)) → e_ϕ, where e_ϕ is the one-hot vector whose ϕ-th entry equals one and whose remaining entries are zero. Let x = U^T v̄(χ). Notice that the loss function ⟨− log(softmax_ϕ(x))⟩_{Ψ_t} is convex in x. As U is fixed, the loss function is also convex in v̄(χ). This implies that, if the model is optimized by gradient descent, the gradient will lead v̄(χ) to its optimal solution v̄*(χ). In our case, as the columns of U are linearly independent, there exists a vector n that is orthogonal to all the columns of U except U_ϕ. Without loss of generality, assume U_ϕ · n > 0 (otherwise, choose its negation). Then a potential optimal solution is v̄*(χ) = λn with λ going to infinity, since v̄*(χ) · U_i = 0 for i ≠ ϕ and v̄*(χ) · U_ϕ → ∞, which implies softmax(U^T v̄*(χ)) = e_ϕ. Combined with the fact that there cannot be an optimal solution v**(χ) of finite norm such that softmax(U^T v̄**(χ)) = e_ϕ, the gradient must lead v̄(χ) to an optimal solution that is arbitrarily far away from the origin. The same applies to v_t, as it receives the same gradient up to a multiple according to Eq (7). As ||v_t||₂² increases unboundedly, by Theorem 1, the score s_t → ∞ as well. So we have softmax(U^T v̄(χ)) → softmax(U^T v_t) → e_ϕ, and thus the cross-entropy loss drops to zero.
D THE DERIVATION OF EQ (15)
For a vector u, let u^{(ϕ)} denote its ϕ-th entry. As we ignore the changes of the scores and the embeddings of the non-topic words, their distributions remain the same as the initial ones. In particular, the scores of the non-topic words are always zero. So, for ϕ = 1, 2, · · · , d,
$$\mathrm{var}\big( \bar{v}^{(\phi)}(\chi) \big) = \mathrm{var}\Big( \frac{\exp(s_t)}{Z(\chi)} v_t^{(\phi)} + \sum_{w\in\chi\setminus\{t\}} \frac{\exp(s_w)}{Z(\chi)} v_w^{(\phi)} \Big) = \sum_{w\in\chi\setminus\{t\}} \Big( \frac{\exp(s_w)}{Z(\chi)} \Big)^2 \frac{\sigma^2}{d} = \frac{m\sigma^2}{(Z(\chi))^2 d}.$$
Since h(v̄(χ); y) is assumed Lipschitz continuous in v̄(χ), there exists L ∈ R such that for ϕ = 1, 2, · · · , d,
$$| h(\bar{v}(\chi_1); y)^{(\phi)} - h(\bar{v}(\chi_2); y)^{(\phi)} | \le L \| \bar{v}(\chi_1) - \bar{v}(\chi_2) \|_1,$$
where v̄(χ₁), v̄(χ₂) ∈ R^d and ||·||₁ denotes the l1-distance obtained by taking the sum of the absolute values of the entry differences on each dimension. So we also have
$$\Big| \frac{1}{L} h(\bar{v}(\chi_1); y)^{(\phi)} - \frac{1}{L} h(\bar{v}(\chi_2); y)^{(\phi)} \Big| \le \| \bar{v}(\chi_1) - \bar{v}(\chi_2) \|_1.$$
According to the work of Bobkov & Houdré (1996), for ϕ = 1, 2, · · · , d,
$$\mathrm{var}\big( L^{-1} h(\bar{v}(\chi); y)^{(\phi)} \big) \le \sum_{\phi=1}^{d} \mathrm{var}\big( \bar{v}^{(\phi)}(\chi) \big) = \frac{m\sigma^2}{(Z(\chi))^2},$$
which is
$$\mathrm{var}\big( h(\bar{v}(\chi); y)^{(\phi)} \big) \le \frac{m\sigma^2 L^2}{(Z(\chi))^2}.$$
Then, the Cauchy–Schwarz inequality implies, for ϕ = 1, 2, · · · , d,
$$\Big| \mathrm{cov}\Big( h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)},\, v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \Big) \Big| = \Big| \mathrm{cov}\Big( h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)},\, \bar{v}^{(\phi)}(\chi) \Big) \Big|$$
$$\le \Big[ \mathrm{var}\Big( h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)} \Big) \Big]^{1/2} \Big[ \mathrm{var}\big( \bar{v}^{(\phi)}(\chi) \big) \Big]^{1/2} < \frac{m\sigma^2 L \exp(s_t)}{(Z(\chi))^3 \sqrt{d}}. \qquad (19)$$
By the triangle inequality,
$$\Big| \langle v_t - \bar{v}(\chi) \rangle_{\Psi_t}^{T} \Big\langle h(\bar{v}(\chi); y) \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} - \Big\langle (v_t - \bar{v}(\chi))^{T} h(\bar{v}(\chi); y) \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} \Big|$$
$$= \Big| \sum_{\phi=1}^{d} \Big[ \langle v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \rangle_{\Psi_t} \Big\langle h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} - \Big\langle (v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi))\, h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} \Big] \Big|$$
$$\le \sum_{\phi=1}^{d} \Big| \langle v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \rangle_{\Psi_t} \Big\langle h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} - \Big\langle (v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi))\, h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)} \Big\rangle_{\Psi_t} \Big|$$
$$= \sum_{\phi=1}^{d} \Big| \mathrm{cov}\Big( h(\bar{v}(\chi); y)^{(\phi)} \frac{\exp(s_t)}{Z(\chi)},\, v_t^{(\phi)} - \bar{v}^{(\phi)}(\chi) \Big) \Big|$$
$$< \frac{m\sigma^2 L \exp(s_t) \sqrt{d}}{(Z(\chi))^3} \quad \text{by Eq (19)}$$
$$= \frac{m}{Z(\chi)} \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\sigma^2}{Z(\chi)} = \Big( 1 - \frac{\exp(s_t)}{Z(\chi)} \Big) \cdot \frac{\exp(s_t)}{Z(\chi)} \cdot \frac{L\sqrt{d}\sigma^2}{Z(\chi)} \le \frac{L\sqrt{d}\sigma^2}{4Z(\chi)} \le \frac{L\sqrt{d}\sigma^2}{4}.$$
The last line is due to Z(χ) > Σ_{w∈χ\{t}} exp(s_w) = m ≥ 1. | 1. How does the paper contribute to understanding attention components during training?
2. What are the strengths and weaknesses of the proposed theoretical analysis?
3. Are there any concerns regarding the assumptions made in the proof and synthetic empirical results?
4. How does the paper frame the effects of competition between possible topic words?
5. Can the mutual amplification effect have implications for using attention weights as a saliency proxy?
6. Can the authors provide clarification on the reference to capacity in Lemma 1?
7. How does the variance of word embeddings change throughout training?
8. Can the authors elaborate on what it means for a word to be "paired" with another word in natural language experiments?
9. Is there a possibility that multilayer attention networks like BERT could generalize the dynamics discussed in the paper?
10. Do the natural language experiments require more quantitative evidence to support their claims?
11. Would including multiple runs with different initializations strengthen the synthetic results?
12. Can the authors explain how they interpreted gradients amplifying the embedding and score as directed towards v and k respectively in this simplified model?
13. Were there any notable differences in following the analytic predictions on a synthetic dataset versus testing on a natural language data set? | Review | Review
Summary:
This paper aims to prove and illustrate that attention components are defined during training by gradients that mutually amplify the embedding and score associated with crucial features. In particular, a word embedding with a high magnitude increases the gradient following the attention score for the same word, while a high attention score increases the gradient directed at the word's embedding. In addition to a proof that treats behavior during training as a dynamical system under a large suite of assumptions, they test the analytic predictions on a synthetic dataset following the same suite of assumptions. They then test on a natural language data set and discuss where it diverges from the analytic and synthetic findings, concluding that the difference is a result of competition between different words associated with a label.
Pros:
We currently lack any substantial theory about attention modules and why they work. Although their model is simplistic, it could provide essential groundwork for analytic understanding of these popular systems. I would even consider it fairly realistic relative to a lot of the assumptions required for theoretical results in training dynamics research. Currently theory of attention is grounded in infinite-width networks, an assumption this paper does not make.
The synthetic results appear to substantiate this theoretical result.
They find an interesting result that, in more realistic settings, the learning dynamics follow particular patterns on the words that are paired together with more versus less predictive words. The framing of these effects in terms of competition between possible topic words is clearly inspired by considering which assumptions behind their proof have failed, which is evidence that the thinking behind their proof is potentially valuable.
Cons:
It's not clear how dynamics like these would generalize to multilayer attention networks like BERT.
The assumptions behind the theoretical and synthetic empirical results are simplistic: the existence of a large vocabulary of "non-topic" words required to keep the variance of embeddings negligible for out-of-focus words, and the presence of only one topic word. There is also the very common assumption of Lipschitz continuity.
The natural language experiments make a specific claim about the different dynamics for competing words of different topic purity, but only present an example of two words as evidence. I want to see quantitative evidence of the pattern.
The synthetic results would be strengthened by including multiple runs with different initializations so they can include confidence intervals.
Questions:
Does this mutual amplification effect have any ramifications for the debate over whether attention weights can be used as a proxy for saliency?
In Lemma 1, there is a reference to the attention block's capacity which is difficult to decipher. What do you mean here by capacity?
The assumption that word embeddings are sampled from a distribution with small variance seems likely to apply early in training, but not later. Have you checked the actual variance that would be associated with word embeddings late in training?
What is actually meant by a word being "paired" with another word in the natural language experiments?
Did I misunderstand something in interpreting gradients amplifying the embedding and score as directed towards v and k respectively in this simplified model?
Minor:
Notation is difficult to follow at times because several unrelated concepts use almost the same symbols:
s_i indicates score, but S_i indicates a sentence; τ indicates learning rate, but T (which looks identical as a subscript) indicates a set of sentences.
In discussing early alignment of attention to syntax, Clark et al. 2019 was concurrent with https://www.aclweb.org/anthology/P19-1580/ |
ICLR | Title
Understanding Zero-shot Adversarial Robustness for Large-Scale Models
Abstract
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP’s performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaption—training losses and adaptation methods—that affect the model’s zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaption methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of texts, while finetuning wins in the existence of text guidance. Overall, our approach significantly improves the zero-shot adversarial robustness over CLIP, seeing an average improvement of 31 points over ImageNet and 15 zero-shot datasets. Our code and model is available at github.com/cvlab-columbia/ZSRobust4FoundationModel.
1 INTRODUCTION
Large-scale models trained on vision and language data—also known as foundation models— have emerged as a universal backbone for tackling many recognition problems in computer vision (Jia et al., 2021; Radford et al., 2021), graphics (Ramesh et al., 2022) and robotics (Ahn et al., 2022). One of the key advantages of foundation models is zero-shot generalization, where the models use just a single textual description to recognize new visual categories with high accuracy. Since those large-scale models are powerful, they will continue to be used in many critical applications, where it is important to make them reliable. However, robustness under adversarial examples remains a challenge, where an imperceptible pattern can be combined with the image to cause recognition failures (Croce & Hein, 2020; Carlini & Wagner, 2017; Dong et al., 2018; Szegedy et al., 2013; Moosavi-Dezfooli et al., 2016), where attack on foundation models can consequently corrupt the downstream applications.
Due to the importance of this problem, there is a large literature that investigates adversarial robustness for neural networks. The most common approach for adversarial defense is to learn the model through adversarial training (Madry et al., 2018; Mao et al., 2019; Szegedy et al., 2013; Pang et al., 2020; Rice et al., 2020; Uesato et al., 2019), which involves augmenting the training set with mined adversarial examples that fool the image classifier. Adversarial training has been validated to improve robustness on the task that the mined examples come from, but it often comes at a cost of generalization (Stutz et al., 2019; Su et al., 2018; Pedraza et al., 2021). However, our world is vast and naturally open, and only evaluating adversarial robustness on the learned tasks is limited. Can we achieve zero-shot transferability for adversarial robustness, even if the model has never been trained on the unknown tasks?
In this paper, we study this important yet under-explored problem, zero-shot adversarial robustness of large-scale vision-language models. We start our investigation with the state-of-the-art CLIP model (Radford et al., 2021), which has been shown to be effective in zero-shot recognition tasks. We find that simply adding an imperceptible vector to the image (≤ 1/255) can subvert
∗Equal contribution
CLIP’s prediction (see Figure 1a). If we follow the standard adversarial training defense paradigm (Madry et al., 2018; Rice et al., 2020) to finetune CLIP on the ImageNet (Deng et al., 2009b) training set, we observe that the adapted CLIP has improved adversarial robustness on the ImageNet validation set, but comes at the cost of significantly reduced accuracy on unseen datasets and classes (Figure 1b). Standard adversarial training backfires on CLIP as it fails to retain the model’s zero-shot generalization ability.
Adaptation methods and training objectives are the two major factors for adapting a large-scale model. First, besides finetuning the whole model, we seek an alternative adaptation method—visual prompt tuning—which adapts the inputs instead of the parameters of the model. Visual prompt tuning (VPT) is an emerging light-weight adaptation method (Bar et al., 2022; Bahng et al., 2022) that learns a visual prompt which is added to the input image, where we use visual prompt to instruct the model to be robust against adversaries. Second, we find that the standard adversarial training objective ignores the visual-language alignment in CLIP’s pretrained representation space, causing the model to lose zero-shot capability. We then propose a text-guided contrastive adversarial training (TeCoA) loss, dubbed as Tekoa (tee·kow), which maximizes the similarity of the adversarial visual features and the correct text embeddings with contrastive learning. Since the adapted visual features continue to align well with the text features, the model adapted with TeCoA can maximally retain the original zero-shot generalization of CLIP while enjoying improved adversarial robustness.
We conduct an extensive evaluation on 15 zero-shot image datasets, offering a holistic study of the zero-shot adversarial robustness problem. This is especially important given that large-scale vision models are emerging as infrastructure and are being deployed to critical applications. We find that the lightweight VPT is noticeably more effective than model finetuning when textual information is unavailable. When texts are used during adaptation, both VPT and finetuning using our TeCoA loss have drastically improved zero-shot adversarial robustness compared to baselines. Finetuning has higher gains than VPT as more parameters are tuned. Our best performing model with the TeCoA loss can improve adversarial robustness over CLIP by an average of 31% across the datasets. Our method also works on unlabeled images, allowing for better robustness with a large amount of unlabeled data. Our work establishes a new and important benchmark, zero-shot adversarial robustness, for future work to evaluate on. We release all models and code.
2 RELATED WORK
Zero-Shot Generalization aims to classify novel classes and tasks that are unseen during training (Palatucci et al., 2009; Lampert et al., 2009; Radford et al., 2021). Existing zero-shot methods often project visual features into semantic feature space (Frome et al., 2013; Akata et al., 2015; Romera-Paredes & Torr, 2015; Xie et al., 2019; Yu et al., 2018; Liu et al., 2019), or use generative methods to generate fake visual features of unseen classes from their semantic descriptions to train classifiers (Xian et al., 2018; Ni et al., 2019; Huang et al., 2019; Schonfeld et al., 2019; Verma et al., 2019; Liu et al., 2020). Recently, large-scale pretrained vision-language models (Radford et al., 2021; Jia et al., 2021) have shown outstanding zero-shot generalization ability on unseen tasks via text prompt engineering. Their adversarial robustness and its transferability, however, has not been studied in the zero-shot setting.
Adversarial Robustness. Adversarial attacks for image recognition find an additive vector on the input to maximize the cross-entropy loss, which is calculated from the model prediction and the ground truth one-hot label (Szegedy et al. (2013); Athalye et al. (2018); Carlini & Wagner (2017);
Kurakin et al. (2017); Papernot et al. (2015); Moosavi-Dezfooli et al. (2016)). Adversarial training (Madry et al., 2018) and its variants (Zhang et al., 2019; Rice et al., 2020), which train the model on generated adversarial examples, are effective in improving adversarial robustness on the task that they have been trained on. However, it is unclear whether this approach can improve robustness in the zero-shot scenario. To the best of our knowledge, Yucel et al. (2020) is so far the only work that studies the adversarial robustness of zero-shot learning models. Their setting is limited because it relies on predicting robust attributes, which may not be easily available for many tasks.
Transferability of Robustness. Mao et al. (2020) shows that adversarial robustness is transferable across tasks when multiple tasks are trained together. Salman et al. (2020) shows that adversarially robust models transfer better than their standard-trained counterparts when they are finetuned on other tasks. Chan et al. (2020) finds that matching a robust model’s gradient can transfer robustness to a new model. Vaishnavi et al. (2022) proposes a low-cost method to transfer robustness to a new model on the same task with different architecture. However, the transferability of adversarial robustness to zero-shot tasks has not been investigated.
Contrastive Learning (Oord et al., 2018) has been used to train large-scale image-language models (Jia et al., 2021; Radford et al., 2021). Kim et al. (2020); Jiang et al. (2020) propose instancewise adversarial perturbation and use a contrastive loss to align the features of clean examples and generated adversarial examples. Our method is the first one to introduce a cross-modal image-text contrastive loss in adversarial contrastive learning.
Adapting Pretrained Models. Linear probing and finetuning are the two major ways to adapt deep pretrained models. Recently, a more lightweight adaptation method, prompt tuning, has been proposed (Zhou et al., 2022c;b;a). Shu et al. (2022) shows that test-time optimization for text prompting helps generalization. Bar et al. (2022) combines target task and image inpainting to achieve zero-shot task inference. Jia et al. (2022); Sandler et al. (2022) used visual prompting to replace the finetuning procedure for large-scale models. Bahng et al. (2022) optimizes a visual prompt to increase the performance on the same task that the model finetunes the visual prompt on. Mao et al. (2021); Lawhon et al. (2022); Mao et al. (2022) find input prompts with self-supervised objectives for robustness. Prompt tuning for continual learning has also been proposed (Conder et al., 2022). Liu et al. (2022) proposes an amortized approach to use fewer iterations to adapt models. Wang et al. (2019) showed that it is useful in improving performance for healthcare, and Liu et al. showed it can be used to adapt generative models for solving under-constrained problems. However, adapting large-scale models for transferable zero-shot robustness, using methods such as finetuning or visual prompting, has not been investigated.
3 MODEL ADAPTATION FOR ZERO-SHOT ADVERSARIAL ROBUSTNESS
We first give background in Section 3.1 on adversarial attacks, adversarial training, and the problem setup. In Section 3.2, we discuss adaptation methods for adapting large-scale models for zero-shot adversarial robustness. Finally, Section 3.3, we discuss the effect of different training losses to motivate and then introduce our proposed text-guided contrastive adversarial training (TeCoA) loss.
3.1 BACKGROUND AND PROBLEM SETUP
Let Fθ(·) be a deep model for image recognition parameterized by θ. Given an input image x, the model produces a representation ŷ = Fθ(x). For image classification, a standard model learns to minimize the cross-entropy loss, L(x,y) = H(Fθ(x),y), where the label y is often represented by a one-hot vector.
Adversarial Attacks. An adversary (i.e., attacker) typically optimizes for an additive transformation δ to the image, xa = x+ δ, which can fool the model Fθ to make incorrect predictions:
$$x_a = \arg\max_{x_a} L(x_a, y), \quad \text{s.t. } \|x_a - x\|_q \le \epsilon. \qquad (1)$$
Here, the magnitude of the added pattern is bounded by a q-norm ball of radius ϵ, making the attack perceptually invisible. The role of a defender is to find ways to correct the influence of the attacks and ensure that the model is robust to the adversarial perturbations.
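A standard way to approximately solve Eq (1) is projected gradient descent (PGD) (Madry et al., 2018); a minimal ℓ∞ sketch for illustration (the step size and iteration count are our own assumptions):

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=1/255, alpha=1/510, steps=10):
    """Find x_a with ||x_a - x||_inf <= eps that maximizes the loss."""
    x_a = x.clone().detach()
    for _ in range(steps):
        x_a.requires_grad_(True)
        loss = loss_fn(model(x_a), y)
        grad = torch.autograd.grad(loss, x_a)[0]
        with torch.no_grad():
            x_a = x_a + alpha * grad.sign()                 # ascend the loss
            x_a = x.clone() + (x_a - x).clamp(-eps, eps)    # project back into the eps-ball
            x_a = x_a.clamp(0, 1)                           # keep a valid image
    return x_a.detach()
```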
Zero-Shot Adversarial Robustness. Large-scale vision-language models generalize well to new tasks and datasets at test time. However, the zero-shot transferability of adversarial robustness of these models is less explored. In our zero-shot adversarial robustness setup, we study the worst case: we assume that the attacker has the significant advantage of unrestricted access to the ground truth of new tasks at test time, while the defender has no access. While the attacker can directly optimize for an attacked image that fools the model, the defender needs to be robust on all kinds of unseen data and tasks. Compared to the commonly-used robustness setting, which only evaluates robustness on the trained task, our setup is more challenging.
Adversarial Training is the common strategy to improve a model’s adversarial robustness. By retraining the model on mined adversarial examples, the model is incentivized to learn features invariant under adversarial transformations. The defender often optimizes for the objective
\theta = \arg\min_{\theta} L(F_\theta(x^a), y), \qquad (2)
so that the model Fθ still makes correct predictions on the generated adversarial examples. As shown in Figure 1, the vanilla adversarial training approach is effective on seen tasks but fails when the model is evaluated on attacked data from unseen tasks.
3.2 ADAPTING THE LARGE-SCALE MODELS
One of the key advantages of large-scale pretrained models is that the features of these models are generalizable to various tasks without full retraining. Thus, we would like to make a lightweight adjustment to the models with a small set of training data of attacked images on known tasks, and maximally retain the zero-shot generalization ability of these large models while improving their adversarial robustness on new tasks. We adopt CLIP, one of the best performing vision-language models for zero-shot recognition, as our base model, and investigate a few adaptation strategies as shown in Figure 2 to improve the zero-shot adversarial robustness of CLIP.
Finetuning (FT). A typical way to adapt a pretrained model, finetuning, is to update the parameters of the model either entirely or partially. While this approach improves model robustness on the target distribution, Radford et al. (2021) and Pham et al. (2021) show that this improvement often comes at a cost of generalization; directly modifying model parameters may lead to a higher tendency towards overfitting.
Visual Prompting (VP). Recently, visual prompt tuning has emerged as a lightweight approach for adapting pretrained large-scale models. Originating from the natural language processing community, the key idea of prompt tuning is to identify prompts to decorate the inputs that effectively query the pretrained models. The computer vision community recently adopted this idea and is starting to explore approaches that can modify input images in pixel space to better adapt large-scale vision
models for downstream tasks. For transformer-based models, visual prompting methods either learn a token that is appended to the input token sequences (Bar et al., 2022) or learn a direct modification to the input images (Bahng et al., 2022) for adaptation, as shown by (d,e) in Figure 2.
In our study, we conduct an extensive analysis over both adaptation methods to better understand adapting large-scale models for zero-shot adversarial robustness.
3.3 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING
Through our initial experiments, we conjecture that the training objective might play a key role in improving the zero-shot adversarial robustness of the models. Radford et al. (2021) indicates that the zero-shot generalization ability of large-scale vision-language models may come from their language supervision. For example, CLIP learns a joint visual-and-text feature space, which helps zero-shot generalization at test time. If we simply finetune the visual encoder with one-hot labels, it may break the joint feature space and harm this zero-shot generalization ability. These observations motivate us to consider using the text information when generating the adversarial examples and also in the training objective during model adaptation.
We now introduce the Text-guided Contrastive Adversarial (TeCoA) training loss, to effectively incorporate text information. In contrast to prior contrastive learning which does image-to-image contrastive learning (Jiang et al., 2020; Kim et al., 2020), we consider a cross-modal text-to-image contrastive learning paradigm. We first generate a set of adversarial examples conditioned on the text inputs which are targeted to fool the model about the correspondence between image features and text embeddings. TeCoA then tries to minimize the feature distance between the attacked image and the correct corresponding text inputs contrastively (Figure 3). We provide additional discussion of this text-image contrastive objective in Appendix A.5.
Concretely, given a set of image and text pairs {(xi, ti)}, we use the pretrained CLIP model to encode each pair with an image encoder Fθ and a text encoder T . We then have the following image-to-text contrastive loss function,
L_s(x, t, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^{(I)}, z_j^{(T)})/\tau)}{\sum_k \exp(\cos(z_i^{(I)}, z_k^{(T)})/\tau)} \right], \qquad (3)
where z_i^{(I)} = F_θ(x_i) are the features of the input image, and z_i^{(T)} = T(t_i) are the features from the input text. We use y_{ij} to indicate which image-text pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
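With one-hot indicators y_{ij}, Equation 3 reduces to a cross-entropy over cosine similarities between image features and the text embeddings of the class prompts. A minimal sketch follows; the function name teco_loss and the temperature value are illustrative, and the feature tensors are assumed to come from the CLIP encoders.

```python
import torch
import torch.nn.functional as F

def teco_loss(z_img, z_txt, y, tau=0.01):
    """Image-to-text contrastive loss of Eq. 3 with integer class ids y."""
    z_img = F.normalize(z_img, dim=-1)   # [N, d] image features
    z_txt = F.normalize(z_txt, dim=-1)   # [C, d] text features of the class prompts
    logits = z_img @ z_txt.t() / tau     # cosine similarities / temperature
    return F.cross_entropy(logits, y)    # softmax over the text embeddings
```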
Constructing Adversarial Examples for Image-Text Pair. Instead of maximizing the standard cross-entropy loss, we maximize the image-text contrastive loss given a batch of natural images x and text t:
x^a = \arg\max_{x^a} L_s(x^a, t, y), \quad \text{s.t.} \quad \|x^a - x\|_q < \epsilon. \qquad (4)
Here, for image xi in the batch, the associated text ti can be its natural label text or constructed via the standard prompts for the zero-shot tasks (e.g., “a photo of a [LABEL]”). The indicator yij = 1 when image xi has category j as its ground-truth class, and 0 otherwise. In practice, we find this objective is effective at generating adversarial examples for zero-shot image recognition tasks.
Text-Guided Contrastive Adversarial Training. Once we have the adversarial examples, we optimize the parameters θ of the vision encoder Fθ to minimize the aforementioned objective (Equation 3) on the generated adversarial examples,
\theta = \arg\min_{\theta} L_s(x^a, t, y). \qquad (5)
Our algorithm iteratively alternates between generating adversarial examples and updating the model via Equation 5. Since TeCoA uses additional information from the text embeddings to correct the visual features corrupted by the adversarial attacks, it helps the model to retain zero-shot transferability regarding adversarial robustness.
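A rough sketch of this alternation is shown below, reusing the hypothetical pgd_attack and teco_loss helpers sketched above; class_names, the encoders, the data loader, and the optimizer are assumed to be defined elsewhere, and the prompt template is only one possible choice.

```python
import torch

# Frozen text features for the training classes; the prompt template is illustrative.
prompts = [f"a photo of a {name}" for name in class_names]
with torch.no_grad():
    z_txt = text_encoder(prompts)

attack_loss = lambda feats, y: teco_loss(feats, z_txt, y)

for x, y in loader:
    # Eq. 4: perturb images so their features no longer match the correct text.
    x_adv = pgd_attack(image_encoder, x, y, attack_loss,
                       eps=1/255, alpha=1/255, steps=2)
    # Eq. 5: re-align adversarial image features with the frozen text features.
    loss = teco_loss(image_encoder(x_adv), z_txt, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the visual prompting variant, only the prompt parameters would be registered with the optimizer while the encoder weights stay frozen.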
Contrastive Adversarial Training without Text. To validate whether the gain of zero-shot robustness comes from language supervision or is due to the formulation of the contrastive loss itself, we consider two variants of contrastive losses which do not utilize language in our experiments. One is a contrastive adversarial training loss with one-hot labels (CoAdv.), where the label information is encoded as a one-hot vector and is used to contrast with the image features. The other is based on Jiang et al. (2020), where the model is finetuned on adversarial examples that fool an image-to-image contrastive loss, denoted as ImgCoAdv. More details are presented in Section 4.
4 EXPERIMENTS
We start by describing our experiment setups and present the experimental results of 16 datasets in Section 4.1. We observe that CLIP adapted with TeCoA achieves an average of 31 points improvement across the datasets over the original CLIP model. In Section 4.2, we provide extensive analysis over the design choices involved in our approach and identify that the use of language in TeCoA largely improves the model’s zero-shot adversarial robustness.
Datasets. We evaluate the zero-shot adversarial robustness conferred by TeCoA trained on ImageNet (Deng et al., 2009a) and report the performance of the models on the ImageNet test set as well as 15 zero-shot test datasets, covering a diverse range of recognition tasks. Specifically, we include CIFAR10, CIFAR100 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), Caltech101 (Fei-Fei et al., 2004), and Caltech256 (Griffin et al., 2007) for generic classification; OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Food101 (Bossard et al., 2014), Flowers102 (Nilsback & Zisserman, 2008), and FGVCAircraft (Maji et al., 2013) for fine-grained classification; SUN397 (Xiao et al., 2010) for scene recognition; and DTD (Cimpoi et al., 2014) for texture recognition. Finally, we include three datasets with domain-specialized tasks, PatchCamelyon (PCAM, lymph node tumor detection) (Veeling et al., 2018), HatefulMemes (hatespeech detection) (Kiela et al., 2020), and EuroSAT (satellite image classification) (Helber et al., 2017).
Baselines. We consider multiple variants to adapt CLIP (Radford et al., 2021). For adaptation methods, we consider visual prompting (VP) (Bahng et al., 2022), linear probing (LP) and finetuning (FT). Since LP involves training a linear readout layer on the target task, it is not zero-shot, but we still evaluate it for reference. For training losses, we consider
(1) vanilla cross-entropy loss (CE); (2) standard adversarial training loss (Adv.) with the cross-entropy loss; (3) contrastive adversarial training loss (CoAdv.); (4) contrastive adversarial training over images (ImgCoAdv.); (5) our text-guided contrastive adversarial training (TeCoA).
In our experiments, we name these variants by adaptation(loss). Detailed formulations and associated training algorithms for each loss can be found in Section A.3.
Implementation Details. We use the CLIP-B/32 architecture. We optimize the model using an SGD optimizer with momentum 0.9 and train for 10 epochs. For finetuning, we use a learning rate of 1e−5. For visual prompt tuning, we use a learning rate of 40 and a batch size of 256. For prompt tuning on the entire ImageNet dataset, we use a token-level prompt of size 200, while for subsets of ImageNet (1K, 5K, and 50K images), we use a token-level prompt of size 5 and a smaller batch size of 64. Unless otherwise specified, during adversarial training we generate L∞-bounded attacks with ϵ = 1/255 using a 2-step PGD attack (Madry et al., 2018) with step size α = 1/255. We test the robustness of our models using a 100-step PGD attack with step size α = 1/255.
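For evaluation, the same attack machinery is simply run for many more steps. A hypothetical evaluation snippet, again reusing the sketches above (x_test, y_test, z_txt, and attack_loss are assumed to be defined):

```python
import torch
import torch.nn.functional as F

# 100-step PGD evaluation attack at the same bound used during training.
x_adv = pgd_attack(image_encoder, x_test, y_test, attack_loss,
                   eps=1/255, alpha=1/255, steps=100)
with torch.no_grad():
    z = F.normalize(image_encoder(x_adv), dim=-1)
    pred = (z @ F.normalize(z_txt, dim=-1).t()).argmax(dim=-1)
robust_acc = (pred == y_test).float().mean()
```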
4.1 EXPERIMENTAL RESULTS
We first use PGD attack with 100 steps to evaluate the robustness of CLIP models that are adapted on ImageNet. In Table 1, we show the robust accuracies for the models on the Imagenet test set and 15 zero-shot recognition tasks (i.e., the training classes and test classes are non-overlapping). Each row is the robust accuracy for one method, where we compare the proposed TeCoA with VP and FT against 10 baselines and their variants. We bold the best results for each dataset (column).
From Table 1, we can see that most adapted models with adversarial training losses except for ImgCoAdv. have better robust accuracies compared to the standard cross-entropy loss. If using adversarial training, the vanilla FT (Adv.), which finetunes the entire model with adversarial training, achieves the best adversarial robustness on ImageNet (non zero-shot data, same as the training tasks) while the average robust accuracy is 10.62%, which is only slightly better than the original CLIP (6.57%). In the meantime, VP(Adv.) achieves much better results, improving the
accuracy number from 6.57% to 31.84%. This indicates that visual prompting is more powerful than finetuning when coupled with the standard adversarial training.

Figure 5: Zero-shot adversarial robustness under different perturbation bounds (ϵ = 1, 2, 4/255). We vary the perturbation bound for adversarial finetuning with TeCoA. Each adapted model is evaluated under attacks from the same bound seen during training. We show both the robust accuracy (left) and clean accuracy (right). Our defense is still effective on zero-shot tasks when the perturbation gets larger.

Figure 6: (a, left) Ablation study of whether to add the visual prompt to the input image or to append a prompt token to the input sequence. (b, right) With the same number of optimized parameters, visual prompting is more effective than partial finetuning when only a small number of parameters are optimized.
Within each set of the variants using the same adaptation method, we compare the effectiveness of different training losses. We notice that both CoAdv. and ImgCoAdv. are much worse than the proposed TeCoA and even the standard adversarial training (Adv.). This indicates that the formulation of contrastive learning may not necessarily help improve the zero-shot adversarial robustness and the use of the language supervision might.
Overall, we find that adapting CLIP with TeCoA using model finetuning presents the best performance across the datasets, improving the accuracy number from 6.57% to 38.18%, roughly 31 points. This might look counter-intuitive, as VP significantly outperforms FT under the standard adversarial training without texts. One hypothesis is that with sufficient semantic information at test time, finetuning, which directly modifies the model parameters, may be more expressive than just modifying the inputs, given that more model parameters are tuned. In Table 3 in the Appendix, we show the clean accuracy of the same models, which shows a similar trend as the robust accuracies. We also show results under ϵ = 1/255 and ϵ = 4/255 using AutoAttack (Croce & Hein, 2020) in Table 4 and Table 5, where our model is still more robust than the baselines, by up to an average of 36 points. We describe details in Section A.2.
4.2 ANALYSIS
Training Set Size is an important factor when adapting large-scale models. In Figure 4, we show the results of adapting CLIP with TeCoA using 1, 5, 50, and 1000 shots per category on ImageNet. We observe that increasing the amount of training data improves robust accuracy. Moreover, we find that FT(TeCoA) outperforms the non-adapted CLIP even when the model is adapted with just one shot per class (the blue bars in Figure 4).
Effect of Attack Strength. To validate whether our method works when the adversarial perturbations become larger, we increase the perturbation bound for our TeCoA adversarial training. Figure 5 shows that, while increasing the attack strength decreases the robust accuracy, our model can still transfer robustness to zero-shot tasks.
Visual Prompt Designs. There are two ways to design visual prompts. One is to append additional tokens to the image tokens, and the other is to add a small decoration (i.e., a learnable noise pattern) to the input images. From Figure 6a, we can see that appending learnable tokens to the input token sequence achieves consistently better performance than just adding the prompt values to the images.
Number of Adapted Parameters. The number of parameters that are adapted during training highly affects model performance. VP is lightweight, as it only modifies the inputs, while FT may adjust either part of the model or the entire model. In Figure 6b, we compare partial finetuning and visual prompting of CLIP. We can see that with the same number of adapted parameters, visual prompting is more effective in adapting the large-scale model for zero-shot adversarial robustness.
Are Labels Required for Adapting CLIP? Unlabeled data is abundant. We investigate whether it is necessary to use the ground-truth labels of images. We generate pseudo text labels that CLIP retrieves from clean images (details in Section A.3.5). We then run TeCoA on the pseudo labels in the same way as on the labelled data. We show experimental results in Table 2, where we obtain a similar zero-shot robust accuracy even without labels.
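A sketch of the pseudo-labeling step (variable names and the prompt list are assumptions; the actual procedure is described in Section A.3.5): CLIP's own zero-shot prediction on the clean image serves as the label for TeCoA training.

```python
import torch
import torch.nn.functional as F

# Pseudo-labeling: the nearest text embedding to the clean image becomes its label.
with torch.no_grad():
    z_img = F.normalize(image_encoder(x_clean), dim=-1)
    z_txt = F.normalize(text_encoder(prompts), dim=-1)
    pseudo_y = (z_img @ z_txt.t()).argmax(dim=-1)
# pseudo_y then replaces the ground-truth labels in the TeCoA training loop.
```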
Trading off between Robust Accuracy and Clean Accuracy. Similar to typical adversarial training, TeCoA also poses a trade-off between clean accuracy and robust accuracy (Tsipras et al., 2019). Ideally, we want to be able to dynamically adjust this trade-off depending on the desired level of adversarial robustness. Here we balance this trade-off by using model weight interpolation (Wortsman et al., 2022). In Figure 7, we can see there is a sweet spot in interpolating the model where we improve both the robustness and clean accuracy, marked by the star in the figure.
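A minimal sketch of the weight interpolation used for this trade-off, in the spirit of Wortsman et al. (2022); it assumes both state dictionaries contain only floating-point tensors, and the coefficient value is illustrative.

```python
def interpolate_weights(state_clip, state_robust, alpha):
    """alpha = 0 recovers the original CLIP weights, alpha = 1 the robust weights."""
    return {k: (1 - alpha) * state_clip[k] + alpha * state_robust[k] for k in state_clip}

# Load a blended model; alpha controls the robustness / clean-accuracy balance.
model.load_state_dict(interpolate_weights(clip_state_dict, tecoa_state_dict, alpha=0.7))
```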
5 CONCLUSION
In this paper, we provided a holistic study of the zero-shot adversarial robustness problem of large-scale vision-language models. We identified the effects of various adaptation methods and training losses when adapting models, and conjectured that existing methods fail to generalize to new tasks due to the lack of language supervision. We proposed a text-guided contrastive adversarial training (TeCoA) loss which can be used with model finetuning and visual prompting to drastically improve the zero-shot adversarial robustness of CLIP. Extensive experimental evaluation showed the effectiveness of TeCoA, and the detailed analyses provide useful lessons for adapting large-scale models to improve their zero-shot adversarial robustness, shedding light on this important problem.
6 ACKNOWLEDGEMENT
This research is based on work partially supported by the DARPA SAIL-ON program, the DARPA MCS program, the NSF NRI Award #2132519, a GE/DARPA grant, a CAIT grant, and gifts from JP Morgan, DiDi, and Accenture.
He Huang, Changhu Wang, Philip S. Yu, and Chang-Dong Wang. Generative dual adversarial network for generalized zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Huang_Generative_Dual_Adversarial_Network_for_Generalized_Zero-Shot_Learning_CVPR_2019_paper.html.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. arXiv preprint arXiv:2203.12119, 2022.
Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. Advances in Neural Information Processing Systems, 33:16199–16210, 2020.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611–2624, 2020.
Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. Advances in Neural Information Processing Systems, 33:2983–2994, 2020.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2017.
Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 951–958. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206594. URL https://doi.org/10.1109/CVPR.2009.5206594.
Matthew Lawhon, Chengzhi Mao, and Junfeng Yang. Using multiple self-supervised tasks improves model robustness. arXiv preprint arXiv:2204.03714, 2022.
Bo Liu, Qiulei Dong, and Zhanyi Hu. Zero-shot learning from adversarial feature residual to compact visual feature. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 11547–11554. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6821.
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, and Carl Vondrick. Shape analysis by shadow synthesis.
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, and Carl Vondrick. Landscape learning for neural network inversion. arXiv e-prints, pp. arXiv–2206, 2022.
Yang Liu, Jishun Guo, Deng Cai, and Xiaofei He. Attribute attention for semantic disambiguation in zero-shot learning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 6697–6706. IEEE, 2019. doi: 10.1109/ICCV.2019.00680. URL https://doi.org/10.1109/ICCV.2019.00680.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. Multitask learning strengthens adversarial robustness. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 158– 174, Cham, 2020. Springer International Publishing.
Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, and Carl Vondrick. Adversarial attacks are reversible with natural supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 661–671, 2021.
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, and Carl Vondrick. Robust perception through equivariance, 2022. URL https://arxiv.org/abs/2212. 06079.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks, 2016.
Jian Ni, Shanghang Zhang, and Haiyong Xie. Dual adversarial semantics-consistent network for generalized zero-shot learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 6143–6154, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ c46482dd5d39742f0bfd417b492d0e8e-Abstract.html.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. IEEE, 2008.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. Zero-shot learning with semantic output codes. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper/2009/file/ 1543843a4723ed2ab08e18053ae6dc5b-Paper.pdf.
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training, 2020.
Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. arXiv:1511.07528, 2015.
Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012.
Anibal Pedraza, Oscar Deniz, and Gloria Bueno. On the relationship between generalization and robustness to adversarial examples. Symmetry, 13(5):817, 2021.
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. arXiv preprint arXiv:2111.10050, 2021.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning, 2020.
Bernardino Romera-Paredes and Philip H. S. Torr. An embarrassingly simple approach to zeroshot learning. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 2152–2161. JMLR.org, 2015. URL http://proceedings.mlr.press/v37/romera-paredes15.html.
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? Advances in Neural Information Processing Systems, 33:3533–3545, 2020.
Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, and Andrew Jackson. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12155–12164, 2022.
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8247–8255, 2019.
Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, and Chaowei Xiao. Test-time prompt tuning for zero-shot generalization in vision-language models, 2022. URL https://arxiv.org/abs/2209.07511.
David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy, 2019.
Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? CoRR, 2019.
Pratik Vaishnavi, Kevin Eykholt, and Amir Rahmati. Transferring adversarial robustness through robust representation matching. arXiv preprint arXiv:2202.09994, 2022.
Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In International Conference on Medical image computing and computer-assisted intervention, pp. 210–218. Springer, 2018.
Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. A meta-learning framework for generalized zero-shot learning. CoRR, abs/1909.04344, 2019. URL http://arxiv.org/abs/1909. 04344.
Hao Wang, Chengzhi Mao, Hao He, Mingmin Zhao, Tommi S Jaakkola, and Dina Katabi. Bidirectional inference networks: A class of deep bayesian networks for health profiling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 766–773, 2019.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7959–7971, 2022.
Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 5542– 5551. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00581. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Xian_Feature_Generating_Networks_CVPR_2018_paper.html.
Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010.
Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, and Ling Shao. Attentive region embedding network for zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 9384–9393. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00961. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Xie_ Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_ 2019_paper.html.
Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, and Zhongfei (Mark) Zhang. Stacked semantics-guided attention model for fine-grained zero-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 5998–6007, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 9087b0efc7c7acd1ef7e153678809c77-Abstract.html.
Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, and Pinar Duygulu. A deep dive into adversarial robustness in zero-shot learning. In European Conference on Computer Vision, pp. 3–21. Springer, 2020.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv abs/1901.08573, 2019.
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Prompt consistency for zero-shot task generalization. arXiv preprint arXiv:2205.00049, 2022a.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16816–16825, 2022b.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for visionlanguage models. International Journal of Computer Vision, 130(9):2337–2348, 2022c.
A APPENDIX
A.1 EXPERIMENTS
A.1.1 ZERO-SHOT CLEAN ACCURACY OF OUR ADAPTED MODEL
We show the results for accuracy on clean images in Table 3.
A.2 AUTOATTACK EXPERIMENT
We also consider a stronger attack, AutoAttack (Croce & Hein, 2020), in our evaluation. Since our method uses adversarial training and does not rely on obfuscated gradients, we use two APGD variants, APGD-CE and APGD-DLR, from AutoAttack for evaluation. We show robust accuracy under perturbation bound ϵ = 1/255 in Table 4 and under perturbation bound ϵ = 4/255 in Table 5. For both perturbation bounds, our method achieves higher robust accuracy than vanilla CLIP, by up to 36 points on average.
Notably, at ϵ = 1/255, even when evaluated under the stronger AutoAttack, the robust accuracy of FT (TeCoA) is 37.02, which is still higher than the robust accuracy of all other baseline methods in Table 1, even though those baselines are evaluated under the weaker PGD100 attack. A larger perturbation bound ϵ = 4/255 makes the attack stronger, where our method still improves robustness by an average of 9 points. In addition, while AutoAttack significantly reduces the robust accuracy of CLIP from 6.57 to 0.53, it only slightly decreases the robust accuracy of our TeCoA models: by 2.1 points for visual prompt tuning and 1.16 points for model finetuning (see Table 1 and Table 4).
One reason AutoAttack is more effective than PGD100 at attacking vanilla CLIP is that it uses fractional attack values, which are not rounded to multiples of 1/255 during inference. Images are typically encoded as integers from 0 to 255, which only allows attacks at the integer level. In the main paper, we use PGD attacks with step size 1 (if the image ranges from 0 to 1, the step size is proportionally 1/255) for 100 steps. Since there are no fractional attack values, the attack space is constrained and the attack is less effective: standard image inputs have a value resolution no finer than 1/255, and any change smaller than half of that level is rounded away when the images are encoded. If we ignore the fact that images will be encoded as integers between 0 and 255, then we can obtain stronger attacks by exploring fractional values. Since AutoAttack automatically reduces the attack step size when the loss oscillates, it explores this fractional space and is more effective.
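As a small numerical illustration of this rounding effect (values are illustrative): a perturbation component smaller than half of one 8-bit level disappears once the attacked image is re-encoded as an integer image.

```python
import numpy as np

pixel = 100 / 255                                 # a clean pixel value on the 8-bit grid
delta = 0.4 / 255                                 # a fractional perturbation below half a level
stored = np.round((pixel + delta) * 255) / 255    # what survives 8-bit re-encoding
print(stored == pixel)                            # True: the sub-integer perturbation is lost
```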
A.3 TRAINING LOSSES AND ALGORITHMS
We give formulations for the different training algorithms considered in our experiments. Throughout this section, let Fθ denote an image encoder parameterized by θ, and let T denote a frozen text encoder. Let D denote a dataset containing pairs (x, y) of images and their respective one-hot labels.
A.3.1 STANDARD ADVERSARIAL TRAINING WITH CROSS-ENTROPY LOSS (ADV.)
The standard adversarial training paradigm. We initialize a learnable linear layer Cϕ and append it to Fθ. The classification loss L(Cϕ(Fθ(x^a)), y) is the cross-entropy loss. We first train Cϕ on standard images. Then, given a natural image x with one-hot label y, we generate an attacked image x^a by maximizing the loss L(Cϕ(Fθ(x^a)), y). We then update θ to minimize the loss L(Cϕ(Fθ(x^a)), y). We describe our algorithm in Algorithm 1.
Algorithm 1 Standard Adversarial Training (Adv.)
Input: Dataset D, learnable parameter θ, model F, parameters of projector Cϕ
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      x^a = argmax_{x^a} L(Cϕ(Fθ(x^a)), y)   ▷ Generate adversarial attacks
      θ = θ − ∇θ L(Cϕ(Fθ(x^a)), y)           ▷ Train on the generated adversarial examples
    end for
  end for
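A PyTorch-style sketch of Algorithm 1 follows, assuming the hypothetical pgd_attack helper from the main text; feat_dim, num_classes, the encoder, loader, and optimizer are placeholders, and the initial fitting of Cϕ on clean images is omitted for brevity.

```python
import torch

classifier = torch.nn.Linear(feat_dim, num_classes)   # the readout C_phi
ce = torch.nn.CrossEntropyLoss()

def head_loss(feats, y):
    return ce(classifier(feats), y)

for x, y in loader:
    # Mine adversarial examples against the cross-entropy head, then train on them.
    x_adv = pgd_attack(image_encoder, x, y, head_loss, eps=1/255, alpha=1/255, steps=2)
    loss = head_loss(image_encoder(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```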
A.3.2 CONTRASTIVE ADVERSARIAL TRAINING LOSS (COADV.)
We study how much the contrastive learning objective contributes to the zero-shot robustness gain. Instead of using one-hot label y and cross-entropy loss in our objective, we create a dictionary of embeddings E by random initialization, where each embedding ei denotes the code representation for the category yi. We optimize the following contrastive learning loss:
L_s(x, \mathcal{E}, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^{(I)}, e_j)/\tau)}{\sum_k \exp(\cos(z_i^{(I)}, e_k)/\tau)} \right], \qquad (6)
where z_i^{(I)} = F_θ(x_i) are the features of the input image, and e_j is the code representation from the dictionary. We use y_{ij} to indicate which image-code pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function. We describe our algorithm in Algorithm 2.
Algorithm 2 Contrastive Adversarial Training Loss (CoAdv.)
Input: Dataset D, learnable parameter θ, model F
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      x^a = argmax_{x^a} L_s(x^a, E, y)   ▷ Generate adversarial attacks for the contrastive loss
      θ = θ − ∇θ L_s(x^a, E, y)           ▷ Contrastive learning on the generated adversarial examples
    end for
  end for
A.3.3 CONTRASTIVE ADVERSARIAL TRAINING OVER IMAGES (IMGCOADV.)
Prior work (Jiang et al., 2020) uses image-only contrastive adversarial learning to obtain robustness. We adapt this method as a baseline to study whether using the knowledge from only images — not language — can achieve zero-shot robustness. For each image x_i, we create a transformed view x_j and form the image pair (x_i, x_j).
We use the same visual encoder to embed the images xi and xj to obtain the features zi and zj . We then construct the following contrastive learning loss:
L_s(x_i, x_j, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i, z_j)/\tau)}{\sum_k \exp(\cos(z_i, z_k)/\tau)} \right], \qquad (7)
where the zi = Fθ(xi) are the features of the input image. We use yij to indicate which image pairs are positive and which are negative; this indicator satisfies yij = 1 if and only if the images xi and xj are augmented from the same instance, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
Let z_i^a = F_θ(x_i^a), where x_i^a denotes the generated adversarial example. We can then obtain the adversarial examples via:

x_i^a = \arg\max_{x_i^a} L_s(x_i^a, x_j, y) = \arg\max_{x_i^a} -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^a, z_j)/\tau)}{\sum_k \exp(\cos(z_i^a, z_k)/\tau)} \right]. \qquad (8)
Once we generate the adversarial images, we conduct contrastive learning on adversarial images and the paired clean images using Equation 7.
We introduce our algorithm in Algorithm 3.
A.3.4 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING (TECOA)
We describe the TeCoA training algorithm in Algorithm 4. We denote the learnable parameters to be θ. For the visual prompt tuning, θ is only the prompt vector. For the finetuning method, θ is the parameter of the whole model.
Algorithm 3 Contrastive Adversarial Training over Images (ImgCoAdv.)
Input: Dataset D, learnable parameter θ, model F
  for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch do
      x_i^a = argmax_{x_i^a} L_s(x_i^a, x_j, y)   ▷ Generate adversarial attacks for the contrastive loss
      θ = θ − ∇θ L_s(x_i^a, x_j, y)               ▷ Contrastive learning on the generated adversarial examples
    end for
  end for
Algorithm 4 TeCoA Training
Input: Dataset D, learnable parameter θ, model F, text t
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      x^a = argmax_{x^a} L_s(x^a, t, y)   ▷ Generate adversarial attacks for the contrastive loss
      θ = θ − ∇θ L_s(x^a, t, y)           ▷ Contrastive learning on the generated adversarial examples
    end for
  end for
A.3.5 TECOA LEARNING ON UNLABELED DATA
Given an unlabeled image, we first provide a list of text using the possible category names:
A photo of a {Category Name}.
Since the unlabeled images are not attacked, CLIP can retrieve the nearest text embedding from the image embedding and use the text as the pseudo label for the image. We then conduct the TeCoA training on the images and their pseudo text label. We describe the algorithm below in Algorithm 5.
Algorithm 5 TeCoA Training on Unlabeled Data
Input: Dataset D without labels, learnable parameter θ, model F, text t
  for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch B = {x1, . . . , xm} do
      y = argmin_y L_s(x, t, y)           ▷ Find pseudo labels for the clean images using CLIP
      x^a = argmax_{x^a} L_s(x^a, t, y)   ▷ Generate adversarial attacks for the contrastive loss
      θ = θ − ∇θ L_s(x^a, t, y)           ▷ Contrastive learning on the generated adversarial examples
    end for
  end for
A.4 DISCUSSION FOR RESULTS
In Table 1, our TeCoA performs better than existing methods except for LP(CE) and VPT(Adv.) on the HatefulMemes and PCAM datasets. This is because HatefulMemes is a binary classification task for detecting hateful speech, which is a very different domain from ImageNet classification. Since both LP(CE) and VPT(Adv.) only adapt a small number of parameters on the ImageNet training set, the resulting models may overfit less to the image recognition task and effectively perform random guessing; note that their 54% accuracy is close to the 50% of random guessing. In addition, PCAM is a binary classification task for lymph nodes (medical images, https://github.com/basveeling/pcam), which is also a very different domain from ImageNet. Similar to HatefulMemes, adapting fewer parameters makes the model learn less and overfit less, and the 52.5% accuracy is close to the 50% of random guessing. Thus, both datasets remain a big challenge for all existing zero-shot robust classification methods.
A.5 DISCUSSION FOR TECOA LOSS
In the main paper, we interpret our loss through the image-text contrastive objective, which first conducts a matrix multiplication between the image embedding and language embedding, and then applies a cross-entropy loss on the output. Since the text embedding is fixed, this embedding can be treated as a layer of linear classifier, whose weights are obtained from the language embedding. This image-text contrastive objective can also be interpreted as using cross-entropy loss on a fixed readout layer that is initialized with the right language knowledge. This further validates the importance of language information for zero-shot adversarial robustness.
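In code, this interpretation amounts to a linear readout whose weight matrix is the frozen, normalized text embedding matrix; a rough sketch with illustrative names:

```python
import torch
import torch.nn.functional as F

with torch.no_grad():
    W = F.normalize(text_encoder(prompts), dim=-1)    # [num_classes, d], frozen weights

def text_readout(feats, tau=0.01):
    """A linear classifier whose weights are the frozen text embeddings."""
    return F.normalize(feats, dim=-1) @ W.t() / tau   # logits for a cross-entropy loss
```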
A.6 ADAPTATION METHOD FORMULATION
Token-Level Visual Prompts. Token-level visual prompts adapt transformer-based models by appending tokens to the input token sequence. This is the most effective prompt method in our experiments, where we use this by default unless specified. Our visual prompts append additional tokens Pk to the input sequence x of the vision transformer:
x = [x;P0, P1, ..., Pk] (9)
The remaining transformer parameters and computations are kept the same as the original.
Image-Level Visual Prompts. Image-level visual prompts adapt transformer models by adding the prompt directly to the input pixels. This is a less effective way of adding prompts, as discussed in Figure 6a. Let the prompt be P and the input image be x; the visual prompt is added to the input image:

x = x + P    (10)

The remaining transformer parameters and computations are kept the same as the original.
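A sketch of the two prompt placements for a ViT-style encoder; the shapes, the initialization, and the patch_embed helper are assumptions for illustration rather than the exact CLIP implementation.

```python
import torch

B, k, d = 32, 5, 768          # batch size, number of prompt tokens, token width (illustrative)

# Token-level prompt (Eq. 9): learnable tokens appended to the patch-token sequence.
prompt_tokens = torch.nn.Parameter(torch.randn(k, d) * 0.02)
tokens = patch_embed(x)                                        # [B, N, d], assumed helper
tokens = torch.cat([tokens, prompt_tokens.expand(B, -1, -1)], dim=1)

# Image-level prompt (Eq. 10): a learnable pixel pattern added to the input image.
pixel_prompt = torch.nn.Parameter(torch.zeros_like(x[0]))
x_prompted = x + pixel_prompt
```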
Finetuning. This is the standard way of adapting a model, where all parameters of the model are updated with a relatively small learning rate. In our experiments, we find that a learning rate of 1e−5 achieves the best performance.
2. What are the strengths of the proposed approach, particularly in terms of the conjecture about language encoders?
3. What are the weaknesses of the paper regarding the proposed loss function and adaptation methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the problem of zero-shot adversarial robustness: adapting pretrained large-scale vision-language models to unseen target tasks with high robust accuracies. With the conjecture that language encoder plays an important role in achieving good zero-shot generalization ability, a contrastive based adversarial training objective is proposed to contrast between image and text embeddings. Several adaptation methods are analyzed, and some interesting discoveries are made for the visual prompt tunning method.
Strengths And Weaknesses
Strength:
The proposed problem is new and under-explored.
The paper conducts rich experiments to compare diverse adaptations methods and several possible training loss functions. Many datasets are employed to evaluate zero-shot generalization.
The proposed loss function is well motivated by the observation that text encoder is important to zero-shot transferability. Ablation study verifies that the proposed method using text embedding for contrastive learning is important.
Weakness:
Although the setting is new, the proposed loss function is a direct use of contrastive loss for two modalities. Moreover, the adaptation methods used are all existing techniques.
Implementation details are not sufficient. For example, it is unclear how exactly the adaptation methods are employed during training and what the objective functions of CoAdv or ImgCoAdv are. It would be better to give mathematical formulations of these methods in appendix.
Clarity, Quality, Novelty And Reproducibility
Most explanations of the proposed method are clear and are presented with high quality. The technique is not novel and some implementation details are not given. |
ICLR | Title
Understanding Zero-shot Adversarial Robustness for Large-Scale Models
Abstract
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP’s performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaption—training losses and adaptation methods—that affect the model’s zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaption methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of texts, while finetuning wins in the existence of text guidance. Overall, our approach significantly improves the zero-shot adversarial robustness over CLIP, seeing an average improvement of 31 points over ImageNet and 15 zero-shot datasets. Our code and model is available at github.com/cvlab-columbia/ZSRobust4FoundationModel.
1 INTRODUCTION
Large-scale models trained on vision and language data—also known as foundation models—have emerged as a universal backbone for tackling many recognition problems in computer vision (Jia et al., 2021; Radford et al., 2021), graphics (Ramesh et al., 2022) and robotics (Ahn et al., 2022). One of the key advantages of foundation models is zero-shot generalization, where the models use just a single textual description to recognize new visual categories with high accuracy. Since these large-scale models are powerful, they will continue to be used in many critical applications, where it is important to make them reliable. However, robustness under adversarial examples remains a challenge: an imperceptible pattern can be combined with the image to cause recognition failures (Croce & Hein, 2020; Carlini & Wagner, 2017; Dong et al., 2018; Szegedy et al., 2013; Moosavi-Dezfooli et al., 2016), and an attack on a foundation model can consequently corrupt the downstream applications.
Due to the importance of this problem, there is a large literature that investigates adversarial robustness for neural networks. The most common approach for adversarial defense is to learn the model through adversarial training (Madry et al., 2018; Mao et al., 2019; Szegedy et al., 2013; Pang et al., 2020; Rice et al., 2020; Uesato et al., 2019), which involves augmenting the training set with mined adversarial examples that fool the image classifier. Adversarial training has been validated to improve robustness on the task that the mined examples come from, but it often comes at a cost of generalization (Stutz et al., 2019; Su et al., 2018; Pedraza et al., 2021). However, our world is vast and naturally open, and only evaluating adversarial robustness on the learned tasks is limited. Can we achieve zero-shot transferability for adversarial robustness, even if the model has never been trained on the unknown tasks?
In this paper, we study this important yet under-explored problem, zero-shot adversarial robustness of large-scale vision-language models. We start our investigation with the state-of-the-art CLIP model (Radford et al., 2021), which has been shown to be effective in zero-shot recognition tasks. We find that simply adding an imperceptible vector to the image (≤ 1/255) can subvert
∗Equal contribution
CLIP’s prediction (see Figure 1a). If we follow the standard adversarial training defense paradigm (Madry et al., 2018; Rice et al., 2020) to finetune CLIP on the ImageNet (Deng et al., 2009b) training set, we observe that the adapted CLIP has improved adversarial robustness on the ImageNet validation set, but comes at the cost of significantly reduced accuracy on unseen datasets and classes (Figure 1b). Standard adversarial training backfires on CLIP as it fails to retain the model’s zero-shot generalization ability.
Adaptation methods and training objectives are the two major factors for adapting a large-scale model. First, besides finetuning the whole model, we seek an alternative adaptation method—visual prompt tuning—which adapts the inputs instead of the parameters of the model. Visual prompt tuning (VPT) is an emerging light-weight adaptation method (Bar et al., 2022; Bahng et al., 2022) that learns a visual prompt which is added to the input image, where we use visual prompt to instruct the model to be robust against adversaries. Second, we find that the standard adversarial training objective ignores the visual-language alignment in CLIP’s pretrained representation space, causing the model to lose zero-shot capability. We then propose a text-guided contrastive adversarial training (TeCoA) loss, dubbed as Tekoa (tee·kow), which maximizes the similarity of the adversarial visual features and the correct text embeddings with contrastive learning. Since the adapted visual features continue to align well with the text features, the model adapted with TeCoA can maximally retain the original zero-shot generalization of CLIP while enjoying improved adversarial robustness.
We conduct an extensive evaluation on 15 zero-shot image datasets, offering a holistic study of the zero-shot adversarial robustness problem. This is especially important given that large-scale vision models are emerging as infrastructure and are being deployed in critical applications. We find that the lightweight VPT is noticeably more effective than model finetuning when textual information is unavailable. When texts are used during adaptation, both VPT and finetuning using our TeCoA loss drastically improve zero-shot adversarial robustness compared to baselines. Finetuning has higher gains than VPT as more parameters are tuned. Our best performing model with the TeCoA loss improves adversarial robustness over CLIP by an average of 31% across the datasets. Our method also works on unlabeled images, allowing for better robustness with a large amount of unlabeled data. Our work establishes a new and important benchmark, zero-shot adversarial robustness, for future work to evaluate on. We release all models and code.
2 RELATED WORK
Zero-Shot Generalization aims to classify novel classes and tasks that are unseen during training (Palatucci et al., 2009; Lampert et al., 2009; Radford et al., 2021). Existing zero-shot methods often project visual features into semantic feature space (Frome et al., 2013; Akata et al., 2015; Romera-Paredes & Torr, 2015; Xie et al., 2019; Yu et al., 2018; Liu et al., 2019), or use generative methods to generate fake visual features of unseen classes from their semantic descriptions to train classifiers (Xian et al., 2018; Ni et al., 2019; Huang et al., 2019; Schonfeld et al., 2019; Verma et al., 2019; Liu et al., 2020). Recently, large-scale pretrained vision-language models (Radford et al., 2021; Jia et al., 2021) have shown outstanding zero-shot generalization ability on unseen tasks via text prompt engineering. Their adversarial robustness and its transferability, however, has not been studied in the zero-shot setting.
Adversarial Robustness. Adversarial attacks for image recognition find an additive vector on the input to maximize the cross-entropy loss, which is calculated from the model prediction and the ground truth one-hot label (Szegedy et al. (2013); Athalye et al. (2018); Carlini & Wagner (2017);
Kurakin et al. (2017); Papernot et al. (2015); Moosavi-Dezfooli et al. (2016)). Adversarial training (Madry et al., 2018) and its variants (Zhang et al., 2019; Rice et al., 2020), which train the model on generated adversarial examples, are effective in improving adversarial robustness on the task that they have been trained on. However, it is unclear whether this approach can improve robustness in the zero-shot scenario. To the best of our knowledge, Yucel et al. (2020) is so far the only work that studies the adversarial robustness of zero-shot learning models. Their setting is limited because it relies on predicting robust attributes, which may not be easily available for many tasks.
Transferability of Robustness. Mao et al. (2020) shows that adversarial robustness is transferable across tasks when multiple tasks are trained together. Salman et al. (2020) shows that adversarially robust models transfer better than their standard-trained counterparts when they are finetuned on other tasks. Chan et al. (2020) finds that matching a robust model’s gradient can transfer robustness to a new model. Vaishnavi et al. (2022) proposes a low-cost method to transfer robustness to a new model on the same task with different architecture. However, the transferability of adversarial robustness to zero-shot tasks has not been investigated.
Contrastive Learning (Oord et al., 2018) has been used to train large-scale image-language models (Jia et al., 2021; Radford et al., 2021). Kim et al. (2020); Jiang et al. (2020) propose instancewise adversarial perturbation and use a contrastive loss to align the features of clean examples and generated adversarial examples. Our method is the first one to introduce a cross-modal image-text contrastive loss in adversarial contrastive learning.
Adapting Pretrained Models. Linear probing and finetuning are the two major ways to adapt deep pretrained models. Recently, a more lightweight adaptation method, prompt tuning, has been proposed (Zhou et al., 2022c;b;a). Shu et al. (2022) shows that test-time optimization for text prompting helps generalization. Bar et al. (2022) combines the target task and image inpainting to achieve zero-shot task inference. Jia et al. (2022); Sandler et al. (2022) use visual prompting to replace the finetuning procedure for large-scale models. Bahng et al. (2022) optimizes a visual prompt to increase performance on the same task that the visual prompt is finetuned on. Mao et al. (2021); Lawhon et al. (2022); Mao et al. (2022) find input prompts with self-supervised objectives for robustness. Prompt tuning for continual learning has also been proposed (Conder et al., 2022). Liu et al. (2022) proposes an amortized approach that uses fewer iterations to adapt models. Wang et al. (2019) showed that it is useful in improving performance for healthcare, and Liu et al. showed it can be used to adapt generative models for solving under-constrained problems. However, adapting large-scale models for transferable zero-shot robustness, using methods such as finetuning or visual prompting, has not been investigated.
3 MODEL ADAPTATION FOR ZERO-SHOT ADVERSARIAL ROBUSTNESS
We first give background in Section 3.1 on adversarial attacks, adversarial training, and the problem setup. In Section 3.2, we discuss adaptation methods for adapting large-scale models for zero-shot adversarial robustness. Finally, in Section 3.3, we discuss the effect of different training losses to motivate and then introduce our proposed text-guided contrastive adversarial training (TeCoA) loss.
3.1 BACKGROUND AND PROBLEM SETUP
Let Fθ(·) be a deep model for image recognition parameterized by θ. Given an input image x, the model produces a representation ŷ = Fθ(x). For image classification, a standard model learns to minimize the cross-entropy loss, L(x,y) = H(Fθ(x),y), where the label y is often represented by a one-hot vector.
Adversarial Attacks. An adversary (i.e., attacker) typically optimizes for an additive transformation $\delta$ to the image, $x^a = x + \delta$, which can fool the model $F_\theta$ to make incorrect predictions:
$x^a = \arg\max_{x^a} L(x^a, y), \ \text{s.t.} \ \|x^a - x\|_q \le \epsilon. \quad (1)$
Here, the magnitude of the added pattern is bounded by a q-norm ball of radius ϵ, making the attack perceptually invisible. The role of a defender is to find ways to correct the influence of the attacks and ensure that the model is robust to the adversarial perturbations.
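In practice, this inner maximization is typically approximated with projected gradient descent (PGD) (Madry et al., 2018). Below is a minimal PyTorch sketch of an L∞-bounded PGD attack; the function and argument names are illustrative and not the exact implementation used here.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=1/255, alpha=1/255, steps=100):
    """L-infinity PGD: repeatedly ascend the loss, then project back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # gradient-sign ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                              # keep a valid image
    return x_adv.detach()
```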
Zero-Shot Adversarial Robustness. Large-scale vision-language models generalize well to new tasks and datasets at test time. However, the zero-shot transferability of adversarial robustness of these models is less explored. In our zero-shot adversarial robustness setup, we study the worst case: we assume that the attacker has the significant advantage of unrestricted access to the ground truth of new tasks at test time, while the defender has no access. While the attacker can directly optimize for an attacked image that fools the model, the defender needs to be robust on all kinds of unseen data and tasks. Compared to the commonly-used robustness setting, which only evaluates robustness on the trained task, our setup is more challenging.
Adversarial Training is the common strategy to improve a model’s adversarial robustness. By retraining the model on mined adversarial examples, the model is incentivized to learn features invariant under adversarial transformations. The defender often optimizes for the objective
$\theta = \arg\min_\theta L(F_\theta(x^a), y), \quad (2)$
so that the model Fθ still makes correct predictions on the generated adversarial examples. As shown in Figure 1, the vanilla adversarial training approach is effective on seen tasks but fails when the model is evaluated on attacked data from unseen tasks.
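As a concrete illustration, one epoch of vanilla adversarial training alternates between mining attacks and updating the model. The sketch below reuses the `pgd_attack` routine above; the optimizer and data loader are placeholders rather than the exact training setup used in the paper.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=1/255, alpha=1/255, steps=2):
    for x, y in loader:
        # Inner maximization: mine adversarial examples against the current model (Eq. 1).
        x_adv = pgd_attack(model, F.cross_entropy, x, y, eps=eps, alpha=alpha, steps=steps)
        # Outer minimization: update the model on the mined examples (Eq. 2).
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```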
3.2 ADAPTING THE LARGE-SCALE MODELS
One of the key advantages of large-scale pretrained models is that the features of these models are generalizable to various tasks without full retraining. Thus, we would like to make a lightweight adjustment to the models with a small set of training data of attacked images on known tasks, and maximally retain the zero-shot generalization ability of these large models while improving their adversarial robustness on new tasks. We adopt CLIP, one of the best performing vision-language models for zero-shot recognition, as our base model, and investigate a few adaptation strategies as shown in Figure 2 to improve the zero-shot adversarial robustness of CLIP.
Finetuning (FT). A typical way to adapt a pretrained model, finetuning, is to update the parameters of the model either entirely or partially. While this approach improves model robustness on the target distribution, Radford et al. (2021) and Pham et al. (2021) show that this improvement often comes at a cost of generalization; directly modifying model parameters may lead to a higher tendency towards overfitting.
Visual Prompting (VP). Recently, visual prompt tuning has emerged as a lightweight approach for adapting pretrained large-scale models. Originating from the natural language processing community, the key idea of prompt tuning is to identify prompts to decorate the inputs that effectively query the pretrained models. The computer vision community recently adopted this idea and is starting to explore approaches that can modify input images in pixel space to better adapt large-scale vision
models for downstream tasks. For transformer-based models, visual prompting methods either learn a token that is appended to the input token sequences (Bar et al., 2022) or learn a direct modification to the input images (Bahng et al., 2022) for adaptation, as shown by (d,e) in Figure 2.
In our study, we conduct an extensive analysis over both adaptation methods to better understand adapting large-scale models for zero-shot adversarial robustness.
3.3 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING (TECOA)
Through our initial experiments, we conjecture that the training objective might play a key role in improving the zero-shot adversarial robustness of the models. Radford et al. (2021) indicates that the zero-shot generalization ability of large-scale vision-language models may come from their language supervision. For example, CLIP learns a joint visual-and-text feature space, which helps zero-shot generalization at test time. If we simply finetune the visual encoder with one-hot labels, it may break the joint feature space and harm this zero-shot generalization ability. These observations motivate us to consider using the text information when generating the adversarial examples and also in the training objective during model adaptation.
We now introduce the Text-guided Contrastive Adversarial (TeCoA) training loss, to effectively incorporate text information. In contrast to prior contrastive learning which does image-to-image contrastive learning (Jiang et al., 2020; Kim et al., 2020), we consider a cross-modal text-to-image contrastive learning paradigm. We first generate a set of adversarial examples conditioned on the text inputs which are targeted to fool the model about the correspondence between image features and text embeddings. TeCoA then tries to minimize the feature distance between the attacked image and the correct corresponding text inputs contrastively (Figure 3). We provide additional discussion of this text-image contrastive objective in Appendix A.5.
Concretely, given a set of image and text pairs {(xi, ti)}, we use the pretrained CLIP model to encode each pair with an image encoder Fθ and a text encoder T . We then have the following image-to-text contrastive loss function,
$L_s(x, t, y) = -\mathbb{E}_{i,j}\left[\, y_{ij} \log \frac{\exp(\cos(z^{(I)}_i, z^{(T)}_j)/\tau)}{\sum_k \exp(\cos(z^{(I)}_i, z^{(T)}_k)/\tau)} \right], \quad (3)$
where $z^{(I)}_i = F_\theta(x_i)$ are the features of the input image, and $z^{(T)}_i = T(t_i)$ are the features of the input text. We use $y_{ij}$ to indicate which image-text pairs are positive and which are negative; this indicator satisfies $y_{ij} = 1$ if and only if the examples $i = j$, and 0 otherwise. $\tau$ is a scalar hyper-parameter, and $\cos$ denotes the cosine similarity function.
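A minimal PyTorch sketch of this image-to-text contrastive loss: with one-hot $y_{ij}$, Equation 3 reduces to a cross-entropy over cosine-similarity logits between image features and class-prompt text features. The names and shapes below are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def tecoa_loss(image_features, text_features, labels, tau=0.07):
    """Image-to-text contrastive loss (Eq. 3): pull each image toward the text embedding
    of its ground-truth class and push it away from the other class prompts."""
    z_img = F.normalize(image_features, dim=-1)   # (B, d) visual features z^(I)
    z_txt = F.normalize(text_features, dim=-1)    # (C, d) text features z^(T), one per class prompt
    logits = z_img @ z_txt.t() / tau              # cosine similarities scaled by temperature
    return F.cross_entropy(logits, labels)        # equals Eq. 3 when y_ij is one-hot
```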
Constructing Adversarial Examples for Image-Text Pair. Instead of maximizing the standard cross-entropy loss, we maximize the image-text contrastive loss given a batch of natural images x and text t:
$x^a = \arg\max_{x^a} L_s(x^a, t, y), \ \text{s.t.} \ \|x^a - x\|_q < \epsilon. \quad (4)$
Here, for image xi in the batch, the associated text ti can be its natural label text or constructed via the standard prompts for the zero-shot tasks (e.g., “a photo of a [LABEL]”). The indicator yij = 1 when image xi has category j as its ground-truth class, and 0 otherwise. In practice, we find this objective is effective at generating adversarial examples for zero-shot image recognition tasks.
Text-Guided Contrastive Adversarial Training. Once we have the adversarial examples, we optimize the parameters $\theta$ of the vision encoder $F_\theta$ to minimize the aforementioned objective (Equation 3) on the generated adversarial examples,
$\theta = \arg\min_\theta L_s(x^a, t, y). \quad (5)$
Our algorithm iteratively alternates between generating adversarial examples and updating the model via Equation 5. Since TeCoA uses additional information from the text embeddings to correct the visual features corrupted by the adversarial attacks, it helps the model to retain zero-shot transferability regarding adversarial robustness.
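A sketch of this alternating procedure, reusing the `tecoa_loss` above: the inner loop attacks the image-text alignment (Equation 4), and the outer step re-aligns the attacked images with the correct text (Equation 5). Encoder, optimizer, and hyper-parameter names are illustrative assumptions.

```python
import torch

def tecoa_training_epoch(image_encoder, text_features, loader, optimizer,
                         eps=1/255, alpha=1/255, steps=2, tau=0.07):
    for x, y in loader:
        # Inner maximization (Eq. 4): perturb images to break the image-text alignment.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = tecoa_loss(image_encoder(x_adv), text_features, y, tau)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        # Outer minimization (Eq. 5): update the visual encoder on the attacked images.
        loss = tecoa_loss(image_encoder(x_adv), text_features, y, tau)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```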
Contrastive Adversarial Training without Text. To validate whether the gain in zero-shot robustness comes from language supervision or is due to the formulation of the contrastive loss itself, we consider two variants of contrastive losses which do not utilize language in our experiments. One is a contrastive adversarial training loss with one-hot labels (CoAdv.), where the label information is encoded as a one-hot vector and is used to contrast with the image features. Another is based on Jiang et al. (2020), where the model is finetuned on adversarial examples generated to fool the image-to-image contrastive loss, denoted as ImgCoAdv. More details are presented in Section 4.
4 EXPERIMENTS
We start by describing our experiment setups and present the experimental results of 16 datasets in Section 4.1. We observe that CLIP adapted with TeCoA achieves an average of 31 points improvement across the datasets over the original CLIP model. In Section 4.2, we provide extensive analysis over the design choices involved in our approach and identify that the use of language in TeCoA largely improves the model’s zero-shot adversarial robustness.
Datasets. We evaluate the zero-shot adversarial robustness conferred by TeCoA trained on ImageNet (Deng et al., 2009a) and report the performance of the models on the ImageNet test set as well as 15 zero-shot test datasets, covering a diverse range of recognition tasks. Specifically, we include CIFAR10, CIFAR100 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), Caltech101 (Fei-Fei et al., 2004), and Caltech256 (Griffin et al., 2007) for generic classification; OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Food101 (Bossard et al., 2014), Flowers102 (Nilsback & Zisserman, 2008), and FGVCAircraft (Maji et al., 2013) for fine-grained classification; SUN397 (Xiao et al., 2010) for scene recognition; and DTD (Cimpoi et al., 2014) for texture recognition. Finally, we include three datasets with domain-specialized tasks, PatchCamelyon (PCAM, lymph node tumor detection) (Veeling et al., 2018), HatefulMemes (hatespeech detection) (Kiela et al., 2020), and EuroSAT (satellite image classification) (Helber et al., 2017).
Baselines. We consider multiple variants to adapt CLIP (Radford et al., 2021). For adaptation methods, we consider visual prompting (VP) (Bahng et al., 2022), linear probing (LP) and finetuning (FT). Since LP involves training a linear readout layer on the target task, it is not zero-shot, but we still evaluate it for reference. For training losses, we consider
(1) vanilla cross-entropy loss (CE); (2) standard adversarial training loss (Adv.) with the cross-entropy loss; (3) contrastive adversarial training loss (CoAdv.); (4) contrastive adversarial training over images (ImgCoAdv.); (5) our text-guided contrastive adversarial training (TeCoA).
In our experiments, we name these variants by adaptation(loss). Detailed formulations and associated training algorithms for each loss can be found in Section A.3.
Implementation Details. We use CLIP-B/32 architecture. We optimize the model using an SGD optimizer with momentum 0.9. We train for 10 epochs. For finetuning, we use a learning rate of 1e − 5. For visual prompt tuning, we use a learning rate of 40 and a batch size of 256. For prompt tuning on the entire ImageNet dataset, we use token-level prompt with size 200, while for subsets of ImageNet (1K, 5K, and 50K images), we use token-level prompt with size 5 and a smaller batch size of 64. Unless specified, during adversarial training, we generate Linf = 1/255 bounded attacks using a 2-step PGD attack (Madry et al., 2018) with step size α = 1/255. We test the robustness of our model using 100 steps of PGD attack, with step size α = 1/255.
4.1 EXPERIMENTAL RESULTS
We first use PGD attack with 100 steps to evaluate the robustness of CLIP models that are adapted on ImageNet. In Table 1, we show the robust accuracies for the models on the Imagenet test set and 15 zero-shot recognition tasks (i.e., the training classes and test classes are non-overlapping). Each row is the robust accuracy for one method, where we compare the proposed TeCoA with VP and FT against 10 baselines and their variants. We bold the best results for each dataset (column).
From Table 1, we can see that most adapted models with adversarial training losses except for ImgCoAdv. have better robust accuracies compared to the standard cross-entropy loss. If using adversarial training, the vanilla FT (Adv.), which finetunes the entire model with adversarial training, achieves the best adversarial robustness on ImageNet (non zero-shot data, same as the training tasks) while the average robust accuracy is 10.62%, which is only slightly better than the original CLIP (6.57%). In the meantime, VP(Adv.) achieves much better results, improving the
average robust accuracy from 6.57% to 31.84%. This indicates that visual prompting is more powerful than finetuning when coupled with the standard adversarial training.

Figure 5: Zero-shot adversarial robustness under different perturbation bounds (ϵ = 1, 2, 4/255). We vary the perturbation bound for adversarial finetuning with TeCoA. Each adapted model is evaluated under attacks from the same bound seen during training. We show both the robust accuracy (left) and clean accuracy (right). Our defense is still effective on zero-shot tasks when the perturbation gets larger.

Figure 6: (a) Visual Prompt Design: we conduct an ablation study of whether to add the visual prompt to the input image or to append a prompt token to the input sequence. (b) Partially FT vs. VP: we optimize the same amount of parameters in partial finetuning and VP, and find VP is more effective when only a small number of parameters are optimized.
Within each set of variants using the same adaptation method, we compare the effectiveness of different training losses. We notice that both CoAdv. and ImgCoAdv. are much worse than the proposed TeCoA and even the standard adversarial training (Adv.). This indicates that the contrastive learning formulation by itself does not necessarily improve zero-shot adversarial robustness, whereas the use of language supervision might.
Overall, we find that adapting CLIP with TeCoA using model finetuning gives the best performance across the datasets, improving the average robust accuracy from 6.57% to 38.18%, roughly 31 points. This might look counter-intuitive, as VP significantly outperforms FT under standard adversarial training without texts. One hypothesis is that, with sufficient semantic information at test time, finetuning, which directly modifies the model parameters, may be more expressive than just modifying the inputs, given that more model parameters are tuned. In Table 3 in the Appendix, we show the clean accuracy of the same models, which shows a similar trend to the robust accuracies. We also show results under ϵ = 1/255 and ϵ = 4/255 using AutoAttack (Croce & Hein, 2020) in Table 4 and Table 5, where our model is still more robust than the baselines, by up to an average of 36 points. We describe the details in Section A.2.
4.2 ANALYSIS
Training Set Size is an important factor when adapting large-scale models. In Figure 4, we show the results of adapting CLIP with TeCoA with 1, 5, 50, and 1000 shots per category on ImageNet. We observe that increasing the amount of training data improves robust accuracy. Moreover, we also find that FT(TeCoA) outperforms the non-adapted CLIP even when the model is adapted with just one shot per class (the blue bars in Figure 4).
Effect of Attack Strength. To validate whether our method works when the adversarial perturbations become larger, we increase the perturbation bound for our TeCoA adversarial training. Figure 5 shows that, while increasing the attack strength decreases the robust accuracy, our model can still transfer robustness to zero-shot tasks.
Visual Prompt Designs. There are two ways to design visual prompts. One is to append additional tokens to the image tokens, and the other is to add a small decoration (i.e., learnable noise) to the input images. From Figure 6a, we can see that appending learnable tokens to the input token sequence achieves consistently better performance than just adding the prompt values to the images.
Number of Adapted Parameters. The number of parameters that are adapted during training strongly affects model performance. VP is lightweight, as it only modifies the inputs, while FT may adjust either part of the model or the entire model. In Figure 6b, we compare partial finetuning and visual prompting of CLIP. We can see that with the same number of adapted parameters, visual prompting is more effective in adapting the large-scale model for zero-shot adversarial robustness.
Are Labels Required for Adapting CLIP? Unlabeled data is abundant, so we investigate whether it is necessary to use ground-truth image labels. We generate pseudo text labels by letting CLIP retrieve the nearest class prompt for each clean image (details in Section A.3.5), and then run TeCoA on the pseudo labels in the same way as on labeled data. We show experimental results in Table 2, where we obtain similar zero-shot robust accuracy even without labels.
Trading off between Robust Accuracy and Clean Accuracy. Similar to typical adversarial training, TeCoA also poses a trade-off between clean accuracy and robust accuracy (Tsipras et al., 2019). Ideally, we want to be able to dynamically adjust this trade-off depending on the desired level of adversarial robustness. Here we balance this trade-off by using model weight interpolation (Wortsman et al., 2022). In Figure 7, we can see that there is a sweet spot in interpolating the model weights where we improve both robust and clean accuracy, marked by the star in the figure.
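A minimal sketch of this interpolation, following Wortsman et al. (2022); the state-dict handling is simplified, and the variable names are assumptions that presuppose the two checkpoints share the same architecture.

```python
def interpolate_weights(clean_state, robust_state, alpha=0.5):
    """Linearly interpolate between the original CLIP weights and the TeCoA-finetuned weights;
    alpha = 0 recovers the clean model, alpha = 1 the fully robust model."""
    return {k: (1 - alpha) * clean_state[k] + alpha * robust_state[k] for k in clean_state}

# Usage sketch: sweep alpha and pick the point with the desired clean/robust balance.
# model.load_state_dict(interpolate_weights(clip_state_dict, tecoa_state_dict, alpha=0.5))
```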
5 CONCLUSION
In this paper, we provided a holistic study of the zero-shot adversarial robustness problem of large-scale vision-language models. We identified the effects of various adaptation methods and training losses when adapting models, and conjectured that existing methods fail to generalize to new tasks due to the lack of language supervision. We proposed a text-guided contrastive adversarial training (TeCoA) loss, which can be used with model finetuning and visual prompting to drastically improve the zero-shot adversarial robustness of CLIP. Extensive experimental evaluation showed the effectiveness of TeCoA, and the detailed analyses provide useful lessons for adapting large-scale models to improve their zero-shot adversarial robustness, shedding light on this important problem.
6 ACKNOWLEDGEMENT
This research is based on work partially supported by the DARPA SAIL-ON program, the DARPA MCS program, the NSF NRI Award #2132519, a GE/DARPA grant, a CAIT grant, and gifts from JP Morgan, DiDi, and Accenture.
He Huang, Changhu Wang, Philip S. Yu, and Chang-Dong Wang. Generative dual adversarial network for generalized zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Huang_Generative_Dual_Adversarial_Network_for_Generalized_Zero-Shot_Learning_CVPR_2019_paper.html.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. arXiv preprint arXiv:2203.12119, 2022.
Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. Advances in Neural Information Processing Systems, 33:16199–16210, 2020.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611–2624, 2020.
Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. Advances in Neural Information Processing Systems, 33:2983–2994, 2020.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2017.
Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 951–958. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206594. URL https://doi.org/10.1109/CVPR.2009.5206594.
Matthew Lawhon, Chengzhi Mao, and Junfeng Yang. Using multiple self-supervised tasks improves model robustness. arXiv preprint arXiv:2204.03714, 2022.
Bo Liu, Qiulei Dong, and Zhanyi Hu. Zero-shot learning from adversarial feature residual to compact visual feature. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 11547–11554. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6821.
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, and Carl Vondrick. Shape analysis by shadow synthesis.
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, and Carl Vondrick. Landscape learning for neural network inversion. arXiv e-prints, pp. arXiv–2206, 2022.
Yang Liu, Jishun Guo, Deng Cai, and Xiaofei He. Attribute attention for semantic disambiguation in zero-shot learning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 6697–6706. IEEE, 2019. doi: 10.1109/ICCV.2019.00680. URL https://doi.org/10.1109/ICCV.2019.00680.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. Multitask learning strengthens adversarial robustness. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 158– 174, Cham, 2020. Springer International Publishing.
Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, and Carl Vondrick. Adversarial attacks are reversible with natural supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 661–671, 2021.
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, and Carl Vondrick. Robust perception through equivariance, 2022. URL https://arxiv.org/abs/2212. 06079.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks, 2016.
Jian Ni, Shanghang Zhang, and Haiyong Xie. Dual adversarial semantics-consistent network for generalized zero-shot learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 6143–6154, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ c46482dd5d39742f0bfd417b492d0e8e-Abstract.html.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. IEEE, 2008.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. Zero-shot learning with semantic output codes. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper/2009/file/ 1543843a4723ed2ab08e18053ae6dc5b-Paper.pdf.
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training, 2020.
Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. arXiv:1511.07528, 2015.
Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012.
Anibal Pedraza, Oscar Deniz, and Gloria Bueno. On the relationship between generalization and robustness to adversarial examples. Symmetry, 13(5):817, 2021.
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. arXiv preprint arXiv:2111.10050, 2021.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning, 2020.
Bernardino Romera-Paredes and Philip H. S. Torr. An embarrassingly simple approach to zeroshot learning. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 2152–2161. JMLR.org, 2015. URL http://proceedings.mlr.press/v37/romera-paredes15.html.
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? Advances in Neural Information Processing Systems, 33:3533–3545, 2020.
Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, and Andrew Jackson. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12155–12164, 2022.
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8247–8255, 2019.
Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, and Chaowei Xiao. Test-time prompt tuning for zero-shot generalization in vision-language models, 2022. URL https://arxiv.org/abs/2209.07511.
David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy, 2019.
Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? CoRR, 2019.
Pratik Vaishnavi, Kevin Eykholt, and Amir Rahmati. Transferring adversarial robustness through robust representation matching. arXiv preprint arXiv:2202.09994, 2022.
Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In International Conference on Medical image computing and computer-assisted intervention, pp. 210–218. Springer, 2018.
Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. A meta-learning framework for generalized zero-shot learning. CoRR, abs/1909.04344, 2019. URL http://arxiv.org/abs/1909. 04344.
Hao Wang, Chengzhi Mao, Hao He, Mingmin Zhao, Tommi S Jaakkola, and Dina Katabi. Bidirectional inference networks: A class of deep bayesian networks for health profiling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 766–773, 2019.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7959–7971, 2022.
Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 5542– 5551. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00581. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Xian_Feature_Generating_Networks_CVPR_2018_paper.html.
Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010.
Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, and Ling Shao. Attentive region embedding network for zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 9384–9393. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00961. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Xie_ Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_ 2019_paper.html.
Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, and Zhongfei (Mark) Zhang. Stacked semantics-guided attention model for fine-grained zero-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 5998–6007, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 9087b0efc7c7acd1ef7e153678809c77-Abstract.html.
Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, and Pinar Duygulu. A deep dive into adversarial robustness in zero-shot learning. In European Conference on Computer Vision, pp. 3–21. Springer, 2020.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv abs/1901.08573, 2019.
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Prompt consistency for zero-shot task generalization. arXiv preprint arXiv:2205.00049, 2022a.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16816–16825, 2022b.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for visionlanguage models. International Journal of Computer Vision, 130(9):2337–2348, 2022c.
A APPENDIX
A.1 EXPERIMENTS
A.1.1 ZERO-SHOT CLEAN ACCURACY OF OUR ADAPTED MODEL
We show the results for accuracy on clean images in Table 3.
A.2 AUTOATTACK EXPERIMENT
We also consider a stronger attack, AutoAttack (Croce & Hein, 2020), in our evaluation. Since our method uses adversarial training and does not rely on obfuscated gradients, we use two APGD variants, APGD-CE and APGD-DLR, from AutoAttack for evaluation. We show robust accuracy under perturbation bound ϵ = 1/255 in Table 4 and robust accuracy under perturbation bound ϵ = 4/255 in Table 5. For both perturbation bounds, our method achieves higher robust accuracy than vanilla CLIP, by up to 36 points on average.
Notably, at ϵ = 1/255, even when evaluated under the stronger AutoAttack, the robust accuracy of FT (TeCoA) is 37.02, which is still higher than that of all other baseline methods in Table 1, even though those baselines are evaluated under the weaker PGD100 attack. A larger perturbation bound of ϵ = 4/255 makes the attack stronger, yet our method still improves robustness by an average of 9 points. In addition, while AutoAttack significantly reduces the robust accuracy of CLIP from 6.57 to 0.53, it only slightly decreases TeCoA's robust accuracy: by 2.1 points for visual prompt tuning and 1.16 points for the finetuned model (see Table 1 and Table 4).
One reason AutoAttack is so much more effective than PGD100 at attacking vanilla CLIP is that it uses fractional perturbation values that are not rounded to multiples of 1/255 at inference time. Images are typically encoded as integers from 0 to 255, which only allows attacks at the integer level. In the main paper, we use PGD attacks with step size 1 on the 0–255 scale (i.e., 1/255 if the image ranges from 0 to 1) for 100 steps; since no fractional values are explored, the attack space is constrained and the attack is less effective. Any perturbation smaller than half an integer step would be rounded away when the image is encoded. If we ignore the fact that images are encoded as integers between 0 and 255, stronger attacks become possible by exploring fractional values. Since AutoAttack automatically reduces its step size when the loss oscillates, it explores this fractional space and is therefore more effective.
A.3 TRAINING LOSSES AND ALGORITHMS
We give formulations for the different training algorithms considered in our experiments. Throughout this section, let Fθ denote an image encoder parameterized by θ, and let T denote a frozen text encoder. Let D denote a dataset containing pairs (x, y) of images and their respective one-hot labels.
A.3.1 STANDARD ADVERSARIAL TRAINING WITH CROSS-ENTROPY LOSS (ADV.)
The standard adversarial training paradigm. We initialize a learnable linear layer Cϕ and append it to Fθ. The classification loss L(Cϕ(Fθ(x^a)), y) is the cross-entropy loss. We first train Cϕ on standard images. Then, given a natural image x with one-hot label y, we generate an attacked image x^a by maximizing the loss L(Cϕ(Fθ(x^a)), y). We then update θ to minimize the loss L(Cϕ(Fθ(x^a)), y). We describe the procedure in Algorithm 1.
Algorithm 1 Standard Adversarial Training (Adv.)
Input: Dataset D, learnable parameters θ, model F, parameters of the projector Cϕ
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} L(Cϕ(Fθ(x^a)), y)   ▷ Generating adversarial attacks
        θ = θ − ∇θ L(Cϕ(Fθ(x^a)), y)   ▷ Training on generated adversarial examples
    end for
end for
A.3.2 CONTRASTIVE ADVERSARIAL TRAINING LOSS (COADV.)
We study how much the contrastive learning objective contributes to the zero-shot robustness gain. Instead of using one-hot label y and cross-entropy loss in our objective, we create a dictionary of embeddings E by random initialization, where each embedding ei denotes the code representation for the category yi. We optimize the following contrastive learning loss:
$L_s(x, \mathcal{E}, y) = -\mathbb{E}_{i,j}\left[\, y_{ij} \log \frac{\exp(\cos(z^{(I)}_i, e_j)/\tau)}{\sum_k \exp(\cos(z^{(I)}_i, e_k)/\tau)} \right], \quad (6)$
where $z^{(I)}_i = F_\theta(x_i)$ are the features of the input image, and $e_j$ is the code representation from the dictionary. We use $y_{ij}$ to indicate which image-code pairs are positive and which are negative; this indicator satisfies $y_{ij} = 1$ if and only if the examples $i = j$, and 0 otherwise. $\tau$ is a scalar hyper-parameter, and $\cos$ denotes the cosine similarity function. We describe our algorithm in Algorithm 2.
Algorithm 2 Contrastive Adversarial Training Loss (CoAdv.)
Input: Dataset D, learnable parameters θ, model F
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} Ls(x^a, E, y)   ▷ Generating adversarial attacks for the contrastive loss
        θ = θ − ∇θ Ls(x^a, E, y)   ▷ Contrastive learning on generated adversarial examples
    end for
end for
A.3.3 CONTRASTIVE ADVERSARIAL TRAINING OVER IMAGES (IMGCOADV.)
Prior work (Jiang et al., 2020) uses image-only contrastive adversarial learning to obtain robustness. We adapt this method as a baseline to study whether using knowledge from only images — not language — can achieve zero-shot robustness. For each image $x_i$, we create a transformed view $x_j$ and form the image pair $(x_i, x_j)$.
We use the same visual encoder to embed the images xi and xj to obtain the features zi and zj . We then construct the following contrastive learning loss:
$L_s(x_i, x_j, y) = -\mathbb{E}_{i,j}\left[\, y_{ij} \log \frac{\exp(\cos(z_i, z_j)/\tau)}{\sum_k \exp(\cos(z_i, z_k)/\tau)} \right], \quad (7)$
where the zi = Fθ(xi) are the features of the input image. We use yij to indicate which image pairs are positive and which are negative; this indicator satisfies yij = 1 if and only if the images xi and xj are augmented from the same instance, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
Let $z^a_i = F_\theta(x^a_i)$, where $x^a_i$ denotes the generated adversarial example. Then we can obtain the adversarial examples via:
$x^a_i = \arg\max_{x^a_i} L_s(x^a_i, x_j, y) = \arg\max_{x^a_i} -\mathbb{E}_{i,j}\left[\, y_{ij} \log \frac{\exp(\cos(z^a_i, z_j)/\tau)}{\sum_k \exp(\cos(z^a_i, z_k)/\tau)} \right]. \quad (8)$
Once we generate the adversarial images, we conduct contrastive learning on adversarial images and the paired clean images using Equation 7.
We introduce our algorithm in Algorithm 3.
A.3.4 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING (TECOA)
We describe the TeCoA training algorithm in Algorithm 4. We denote the learnable parameters to be θ. For the visual prompt tuning, θ is only the prompt vector. For the finetuning method, θ is the parameter of the whole model.
Algorithm 3 Contrastive Adversarial Training over Images (ImgCoAdv.)
Input: Dataset D, learnable parameters θ, model F
for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch do
        x^a_i = argmax_{x^a_i} Ls(x^a_i, x_j, y)   ▷ Generating adversarial attacks for the contrastive loss
        θ = θ − ∇θ Ls(x^a_i, x_j, y)   ▷ Contrastive learning on generated adversarial examples
    end for
end for
Algorithm 4 TeCoA Training
Input: Dataset D, learnable parameters θ, model F, text t
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} Ls(x^a, t, y)   ▷ Generating adversarial attacks for the contrastive loss
        θ = θ − ∇θ Ls(x^a, t, y)   ▷ Contrastive learning on generated adversarial examples
    end for
end for
A.3.5 TECOA LEARNING ON UNLABELED DATA
Given an unlabeled image, we first provide a list of text using the possible category names:
A photo of a {Category Name}.
Since the unlabeled images are not attacked, CLIP can retrieve the nearest text embedding from the image embedding and use the text as the pseudo label for the image. We then conduct the TeCoA training on the images and their pseudo text label. We describe the algorithm below in Algorithm 5.
Algorithm 5 TeCoA Training on Unlabeled Data
Input: Dataset D without labels, learnable parameters θ, model F, text t
for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch B = {x1, . . . , xm} do
        y = argmin_y Ls(x, t, y)   ▷ Finding pseudo labels for the clean images using CLIP
        x^a = argmax_{x^a} Ls(x^a, t, y)   ▷ Generating adversarial attacks for the contrastive loss
        θ = θ − ∇θ Ls(x^a, t, y)   ▷ Contrastive learning on generated adversarial examples
    end for
end for
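A sketch of the pseudo-labeling step in Algorithm 5: for each clean image, CLIP retrieves the nearest class-prompt embedding, and the retrieved index is then used as the label for TeCoA training. The encoder is assumed to return feature tensors, and tokenization of the prompts is omitted; names are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_pseudo_labels(image_encoder, text_features, images):
    """text_features: (C, d) embeddings of prompts such as 'A photo of a {Category Name}.'"""
    z_img = F.normalize(image_encoder(images), dim=-1)   # (B, d)
    z_txt = F.normalize(text_features, dim=-1)           # (C, d)
    return (z_img @ z_txt.t()).argmax(dim=-1)            # index of the nearest class prompt
```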
A.4 DISCUSSION FOR RESULTS
In Table 1, our TeCoA performs better than existing methods except for LP(CE) and VP(Adv.) on the HatefulMemes and PCAM datasets. This is because HatefulMemes is a binary classification task for detecting hateful speech, which is a very different domain from ImageNet classification. Since both LP(CE) and VP(Adv.) only adapt a small number of parameters on ImageNet, the resulting models may overfit less to the image recognition task and essentially perform random guessing; note that the 54% accuracy is close to the 50% chance level. In addition, PCAM is a binary classification task for lymph node tumor detection (medical images, https://github.com/basveeling/pcam), which is also a very different domain from ImageNet. Similar to HatefulMemes, adapting fewer parameters makes the model learn less and overfit less, and the 52.5% accuracy is again close to the 50% chance level. Thus, both datasets remain a big challenge for all existing zero-shot robust classification methods.
A.5 DISCUSSION FOR TECOA LOSS
In the main paper, we interpret our loss through the image-text contrastive objective, which first conducts a matrix multiplication between the image embeddings and the language embeddings, and then applies a cross-entropy loss to the output. Since the text embeddings are fixed, they can be treated as a linear classifier layer whose weights are obtained from the language embeddings. This image-text contrastive objective can thus also be interpreted as a cross-entropy loss on a fixed readout layer that is initialized with the right language knowledge. This further validates the importance of language information for zero-shot adversarial robustness.
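A small sketch of this view: the frozen text embeddings act as the weights of a linear readout layer, so the image-text similarity logits are simply a linear classifier applied to the visual features. The helper name is hypothetical and assumes precomputed, normalized class-prompt embeddings.

```python
import torch
import torch.nn as nn

def text_initialized_head(text_features):
    """Build a frozen linear readout whose weight rows are the (normalized) class-prompt
    embeddings; its outputs are cosine-similarity logits over the classes."""
    n_classes, dim = text_features.shape
    head = nn.Linear(dim, n_classes, bias=False)
    head.weight.data.copy_(text_features)
    head.weight.requires_grad_(False)
    return head
```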
A.6 ADAPTATION METHOD FORMULATION
Token-Level Visual Prompts. Token-level visual prompts adapt transformer-based models by appending tokens to the input token sequence. This is the most effective prompt design in our experiments, and we use it by default unless otherwise specified. Our visual prompts append additional tokens $P_k$ to the input sequence $x$ of the vision transformer:
$x \leftarrow [x; P_0, P_1, \ldots, P_k] \quad (9)$
The remaining transformer parameters and computations are kept the same as the original.
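A minimal sketch of a token-level prompt module for a ViT-style encoder; the token dimension, initialization scale, and the place where the tokens are injected are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class TokenPrompt(nn.Module):
    """Appends k learnable prompt tokens to the patch-token sequence of a frozen vision transformer."""
    def __init__(self, num_tokens=5, dim=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)

    def forward(self, patch_tokens):              # patch_tokens: (B, N, dim)
        b = patch_tokens.shape[0]
        return torch.cat([patch_tokens, self.prompt.expand(b, -1, -1)], dim=1)
```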
Image-Level Visual Prompts. Image-level visual prompts adapt transformer models by adding the prompt to the input pixels. This is a less effective way of adding prompts, as discussed in Figure 6a. Let the prompt be $P$ and the input image be $x$; the visual prompt is added to the input image:
$x \leftarrow x + P \quad (10)$
The remaining transformer parameters and computations are kept the same as the original.
Finetuning. This is the standard way of adapting a model, where all parameters of the model are updated with a relatively small learning rate. In our experiments, we find that a learning rate of 1e−5 achieves the best performance. | 1. What is the focus of the paper regarding zero-shot image recognition models?
2. What are the strengths of the proposed TeCoA objective, particularly in adversarial training?
3. What are the weaknesses of the paper, specifically regarding the image-to-image loss used in ImgCoAdv experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can you provide more details about the ablations performed to verify the effectiveness of TeCoA? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors are the first to show that zero-shot image recognition models built on top of CLIP are still susceptible to adversarial attacks, and that the standard adversarial training objective, while effective at preventing adversarial attacks, destroys the rich image-language capability of CLIP, making the defended model useless for zero-shot recognition on different tasks/datasets. The authors propose TeCoA - a novel objective that takes the image-language nature of the model into account during adversarial training via visual prompt tuning (VPT) and contrastive learning. The authors show how the proposed objective interacts with various task adaptation techniques, including linear probing, full and partial finetuning, and image- and token-level prompting. More specifically, the authors propose to perform an adversarial attack on the standard contrastive image-text alignment objective used in CLIP - an adversarial image perturbation aims to make the correct image-text pair have a lower cosine similarity than an incorrect one. The authors propose two ablations that verify that the observed gains indeed come from robustifying the vision-language model, and not from robustifying the vision backbone itself - contrastive alignment to one-hot vectors and an image-to-image loss. The authors show that when it comes to zero-shot adversarial robustness, the proposed approach beats the baseline adversarial training and both ablations, and that finetuning outperforms visual prompt tuning. The authors also investigate the effect of the dataset size, attack strength, prompt design, the number of adapted parameters during fine-tuning, and the Pareto trade-off between clean and robust accuracy.
Strengths And Weaknesses
The paper is very well written and easy to follow: the motivation is clear, the objective and the training procedure are clearly defined. All experiments and ablations are clearly motivated, described, and reported as well. The results are promising. The provided intuition for why the proposed method helps while baselines fail makes intuitive sense.
The only part of the paper that I found to be not entirely clear and self-contained is the specifics of the image-to-image loss used in ImgCoAdv experiments.
Clarity, Quality, Novelty And Reproducibility
The paper is very well written and should be reproducible from the description in the paper alone. I lack the background in zero-shot learning to judge the novelty of this work compared to prior work. |
ICLR | Title
Understanding Zero-shot Adversarial Robustness for Large-Scale Models
Abstract
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP’s performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaption—training losses and adaptation methods—that affect the model’s zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaption methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of texts, while finetuning wins in the existence of text guidance. Overall, our approach significantly improves the zero-shot adversarial robustness over CLIP, seeing an average improvement of 31 points over ImageNet and 15 zero-shot datasets. Our code and model is available at github.com/cvlab-columbia/ZSRobust4FoundationModel.
1 INTRODUCTION
Large-scale models trained on vision and language data—also known as foundation models—have emerged as a universal backbone for tackling many recognition problems in computer vision (Jia et al., 2021; Radford et al., 2021), graphics (Ramesh et al., 2022) and robotics (Ahn et al., 2022). One of the key advantages of foundation models is zero-shot generalization, where the models use just a single textual description to recognize new visual categories with high accuracy. Since those large-scale models are powerful, they will continue to be used in many critical applications, where it is important to make them reliable. However, robustness under adversarial examples remains a challenge: an imperceptible pattern can be combined with the image to cause recognition failures (Croce & Hein, 2020; Carlini & Wagner, 2017; Dong et al., 2018; Szegedy et al., 2013; Moosavi-Dezfooli et al., 2016), and attacks on foundation models can consequently corrupt the downstream applications.
Due to the importance of this problem, there is a large literature that investigates adversarial robustness for neural networks. The most common approach for adversarial defense is to learn the model through adversarial training (Madry et al., 2018; Mao et al., 2019; Szegedy et al., 2013; Pang et al., 2020; Rice et al., 2020; Uesato et al., 2019), which involves augmenting the training set with mined adversarial examples that fool the image classifier. Adversarial training has been validated to improve robustness on the task that the mined examples come from, but it often comes at a cost of generalization (Stutz et al., 2019; Su et al., 2018; Pedraza et al., 2021). However, our world is vast and naturally open, and only evaluating adversarial robustness on the learned tasks is limited. Can we achieve zero-shot transferability for adversarial robustness, even if the model has never been trained on the unknown tasks?
In this paper, we study this important yet under-explored problem, zero-shot adversarial robustness of large-scale vision-language models. We start our investigation with the state-of-the-art CLIP model (Radford et al., 2021), which has been shown to be effective in zero-shot recognition tasks. We find that simply adding an imperceptible vector to the image (≤ 1/255) can subvert
CLIP’s prediction (see Figure 1a). If we follow the standard adversarial training defense paradigm (Madry et al., 2018; Rice et al., 2020) to finetune CLIP on the ImageNet (Deng et al., 2009b) training set, we observe that the adapted CLIP has improved adversarial robustness on the ImageNet validation set, but comes at the cost of significantly reduced accuracy on unseen datasets and classes (Figure 1b). Standard adversarial training backfires on CLIP as it fails to retain the model’s zero-shot generalization ability.
Adaptation methods and training objectives are the two major factors for adapting a large-scale model. First, besides finetuning the whole model, we seek an alternative adaptation method—visual prompt tuning—which adapts the inputs instead of the parameters of the model. Visual prompt tuning (VPT) is an emerging light-weight adaptation method (Bar et al., 2022; Bahng et al., 2022) that learns a visual prompt which is added to the input image, where we use visual prompt to instruct the model to be robust against adversaries. Second, we find that the standard adversarial training objective ignores the visual-language alignment in CLIP’s pretrained representation space, causing the model to lose zero-shot capability. We then propose a text-guided contrastive adversarial training (TeCoA) loss, dubbed as Tekoa (tee·kow), which maximizes the similarity of the adversarial visual features and the correct text embeddings with contrastive learning. Since the adapted visual features continue to align well with the text features, the model adapted with TeCoA can maximally retain the original zero-shot generalization of CLIP while enjoying improved adversarial robustness.
We conduct an extensive evaluation on 15 zero-shot image datasets, offering a holistic study of the zero-shot adversarial robustness problem. This is especially important given that large-scale vision models are emerging as infrastructure and are being deployed to critical applications. We find that the lightweight VPT is noticeably more effective than model finetuning when textual information is unavailable. When texts are used during adaptation, both VPT and finetuning using our TeCoA loss have drastically improved zero-shot adversarial robustness compared to baselines. Finetuning has higher gains than VPT as more parameters are tuned. Our best performing model with the TeCoA loss can improve adversarial robustness over CLIP by an average of 31% across the datasets. Our method also works on unlabeled images, allowing for better robustness with a large amount of unlabeled data. Our work establishes a new and important benchmark, zero-shot adversarial robustness, for future work to evaluate on. We release all models and code.
2 RELATED WORK
Zero-Shot Generalization aims to classify novel classes and tasks that are unseen during training (Palatucci et al., 2009; Lampert et al., 2009; Radford et al., 2021). Existing zero-shot methods often project visual features into semantic feature space (Frome et al., 2013; Akata et al., 2015; Romera-Paredes & Torr, 2015; Xie et al., 2019; Yu et al., 2018; Liu et al., 2019), or use generative methods to generate fake visual features of unseen classes from their semantic descriptions to train classifiers (Xian et al., 2018; Ni et al., 2019; Huang et al., 2019; Schonfeld et al., 2019; Verma et al., 2019; Liu et al., 2020). Recently, large-scale pretrained vision-language models (Radford et al., 2021; Jia et al., 2021) have shown outstanding zero-shot generalization ability on unseen tasks via text prompt engineering. Their adversarial robustness and its transferability, however, has not been studied in the zero-shot setting.
Adversarial Robustness. Adversarial attacks for image recognition find an additive vector on the input to maximize the cross-entropy loss, which is calculated from the model prediction and the ground truth one-hot label (Szegedy et al. (2013); Athalye et al. (2018); Carlini & Wagner (2017);
Kurakin et al. (2017); Papernot et al. (2015); Moosavi-Dezfooli et al. (2016)). Adversarial training (Madry et al., 2018) and its variants (Zhang et al., 2019; Rice et al., 2020), which train the model on generated adversarial examples, are effective in improving adversarial robustness on the task that they have been trained on. However, it is unclear whether this approach can improve robustness in the zero-shot scenario. To the best of our knowledge, Yucel et al. (2020) is so far the only work that studies the adversarial robustness of zero-shot learning models. Their setting is limited because it relies on predicting robust attributes, which may not be easily available for many tasks.
Transferability of Robustness. Mao et al. (2020) shows that adversarial robustness is transferable across tasks when multiple tasks are trained together. Salman et al. (2020) shows that adversarially robust models transfer better than their standard-trained counterparts when they are finetuned on other tasks. Chan et al. (2020) finds that matching a robust model’s gradient can transfer robustness to a new model. Vaishnavi et al. (2022) proposes a low-cost method to transfer robustness to a new model on the same task with different architecture. However, the transferability of adversarial robustness to zero-shot tasks has not been investigated.
Contrastive Learning (Oord et al., 2018) has been used to train large-scale image-language models (Jia et al., 2021; Radford et al., 2021). Kim et al. (2020); Jiang et al. (2020) propose instancewise adversarial perturbation and use a contrastive loss to align the features of clean examples and generated adversarial examples. Our method is the first one to introduce a cross-modal image-text contrastive loss in adversarial contrastive learning.
Adapting Pretrained Models. Linear probing and finetuning are the two major ways to adapt deep pretrained models. Recently, a more lightweight adaptation method, prompt tuning, has been proposed (Zhou et al., 2022c;b;a). Shu et al. (2022) shows that test-time optimization for text prompting helps generalization. Bar et al. (2022) combines the target task and image inpainting to achieve zero-shot task inference. Jia et al. (2022); Sandler et al. (2022) use visual prompting to replace the finetuning procedure for large-scale models. Bahng et al. (2022) optimizes a visual prompt to increase performance on the same task that the visual prompt is finetuned on. Mao et al. (2021); Lawhon et al. (2022); Mao et al. (2022) find input prompts with self-supervised objectives for robustness. Prompt tuning for continual learning has also been proposed (Conder et al., 2022). Liu et al. (2022) proposes an amortized approach that uses fewer iterations to adapt models. Wang et al. (2019) showed that it is useful in improving performance for healthcare, and Liu et al. showed it can be used to adapt generative models for solving under-constrained problems. However, adapting large-scale models for transferable zero-shot robustness, using methods such as finetuning or visual prompting, has not been investigated.
3 MODEL ADAPTATION FOR ZERO-SHOT ADVERSARIAL ROBUSTNESS
We first give background in Section 3.1 on adversarial attacks, adversarial training, and the problem setup. In Section 3.2, we discuss adaptation methods for adapting large-scale models for zero-shot adversarial robustness. Finally, in Section 3.3, we discuss the effect of different training losses to motivate, and then introduce, our proposed text-guided contrastive adversarial training (TeCoA) loss.
3.1 BACKGROUND AND PROBLEM SETUP
Let Fθ(·) be a deep model for image recognition parameterized by θ. Given an input image x, the model produces a representation ŷ = Fθ(x). For image classification, a standard model learns to minimize the cross-entropy loss, L(x,y) = H(Fθ(x),y), where the label y is often represented by a one-hot vector.
Adversarial Attacks. An adversary (i.e., attacker) typically optimizes for an additive transformation δ to the image, xa = x+ δ, which can fool the model Fθ to make incorrect predictions:
x^a = \arg\max_{x^a} L(x^a, y), \quad \text{s.t. } \|x^a - x\|_q \le \epsilon. \qquad (1)
Here, the magnitude of the added pattern is bounded by a q-norm ball of radius ϵ, making the attack perceptually invisible. The role of a defender is to find ways to correct the influence of the attacks and ensure that the model is robust to the adversarial perturbations.
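For concreteness, a minimal ℓ∞-bounded PGD instantiation of Equation 1 can be sketched as follows (PyTorch-style; the model, loss function, and step settings are illustrative placeholders rather than our exact attack configuration):

import torch

def pgd_attack(model, loss_fn, x, y, eps=1/255, alpha=1/255, steps=100):
    # Maximize the loss within an eps-ball around the clean image x (Eq. 1).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)              # loss the attacker maximizes
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()      # ascend along the gradient sign
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back to the eps-ball
            x_adv = x_adv.clamp(0, 1)                # keep a valid image
    return x_adv.detach()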
Zero-Shot Adversarial Robustness. Large-scale vision-language models generalize well to new tasks and datasets at test time. However, the zero-shot transferability of adversarial robustness of these models is less explored. In our zero-shot adversarial robustness setup, we study the worst case: we assume that the attacker has the significant advantage of unrestricted access to the ground truth of new tasks at test time, while the defender has no access. While the attacker can directly optimize for an attacked image that fools the model, the defender needs to be robust on all kinds of unseen data and tasks. Compared to the commonly-used robustness setting, which only evaluates robustness on the trained task, our setup is more challenging.
Adversarial Training is the common strategy to improve a model’s adversarial robustness. By retraining the model on mined adversarial examples, the model is incentivized to learn features invariant under adversarial transformations. The defender often optimizes for the objective
\theta = \arg\min_{\theta} L(F_\theta(x^a), y), \qquad (2)
so that the model Fθ still makes correct predictions on the generated adversarial examples. As shown in Figure 1, the vanilla adversarial training approach is effective on seen tasks but fails when the model is evaluated on attacked data from unseen tasks.
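For reference, one optimization step of this vanilla adversarial training loop can be sketched as follows, building on the pgd_attack sketch above (the optimizer and attack hyper-parameters are illustrative assumptions):

def adversarial_training_step(model, loss_fn, optimizer, x, y):
    # Inner maximization: mine adversarial examples for the current model (Eq. 1).
    x_adv = pgd_attack(model, loss_fn, x, y, eps=1/255, alpha=1/255, steps=2)
    # Outer minimization: update the model on the mined adversarial examples (Eq. 2).
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()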
3.2 ADAPTING THE LARGE-SCALE MODELS
One of the key advantages of large-scale pretrained models is that the features of these models are generalizable to various tasks without full retraining. Thus, we would like to make a lightweight adjustment to the models with a small set of training data of attacked images on known tasks, and maximally retain the zero-shot generalization ability of these large models while improving their adversarial robustness on new tasks. We adopt CLIP, one of the best performing vision-language models for zero-shot recognition, as our base model, and investigate a few adaptation strategies as shown in Figure 2 to improve the zero-shot adversarial robustness of CLIP.
Finetuning (FT). A typical way to adapt a pretrained model, finetuning, is to update the parameters of the model either entirely or partially. While this approach improves model robustness on the target distribution, Radford et al. (2021) and Pham et al. (2021) show that this improvement often comes at a cost of generalization; directly modifying model parameters may lead to a higher tendency towards overfitting.
Visual Prompting (VP). Recently, visual prompt tuning has emerged as a lightweight approach for adapting pretrained large-scale models. Originating from the natural language processing community, the key idea of prompt tuning is to identify prompts to decorate the inputs that effectively query the pretrained models. The computer vision community recently adopted this idea and is starting to explore approaches that can modify input images in pixel space to better adapt large-scale vision
models for downstream tasks. For transformer-based models, visual prompting methods either learn a token that is appended to the input token sequences (Bar et al., 2022) or learn a direct modification to the input images (Bahng et al., 2022) for adaptation, as shown by (d,e) in Figure 2.
In our study, we conduct an extensive analysis over both adaptation methods to better understand adapting large-scale models for zero-shot adversarial robustness.
3.3 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING
Through our initial experiments, we conjecture that the training objective might play a key role in improving the zero-shot adversarial robustness of the models. Radford et al. (2021) indicates that the zero-shot generalization ability of large-scale vision-language models may come from their language supervision. For example, CLIP learns a joint visual-and-text feature space, which helps zero-shot generalization at test time. If we simply finetune the visual encoder with one-hot labels, it may break the joint feature space and harm this zero-shot generalization ability. These observations motivate us to consider using the text information when generating the adversarial examples and also in the training objective during model adaptation.
We now introduce the Text-guided Contrastive Adversarial (TeCoA) training loss, to effectively incorporate text information. In contrast to prior contrastive learning which does image-to-image contrastive learning (Jiang et al., 2020; Kim et al., 2020), we consider a cross-modal text-to-image contrastive learning paradigm. We first generate a set of adversarial examples conditioned on the text inputs which are targeted to fool the model about the correspondence between image features and text embeddings. TeCoA then tries to minimize the feature distance between the attacked image and the correct corresponding text inputs contrastively (Figure 3). We provide additional discussion of this text-image contrastive objective in Appendix A.5.
Concretely, given a set of image and text pairs {(xi, ti)}, we use the pretrained CLIP model to encode each pair with an image encoder Fθ and a text encoder T . We then have the following image-to-text contrastive loss function,
L_s(x, t, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^{(I)}, z_j^{(T)})/\tau)}{\sum_k \exp(\cos(z_i^{(I)}, z_k^{(T)})/\tau)} \right], \qquad (3)
where z_i^{(I)} = F_\theta(x_i) are the features of the input image, and z_i^{(T)} = T(t_i) are the features of the input text. We use y_{ij} to indicate which image-text pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar temperature hyper-parameter, and cos denotes the cosine similarity function.
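As a sketch, with one-hot positives (for classification, y_{ij} = 1 if and only if image i belongs to class j), Equation 3 reduces to a cross-entropy over cosine-similarity logits. The temperature value below is an assumed placeholder, not the exact value used in our experiments:

import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_features, text_features, labels, tau=0.07):
    # image_features: (B, D) visual embeddings z^(I); text_features: (C, D) text embeddings z^(T)
    # labels: (B,) index of the positive text for each image; tau: temperature (assumed value)
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)
    logits = img @ txt.t() / tau            # pairwise cos(z_i^(I), z_j^(T)) / tau
    return F.cross_entropy(logits, labels)  # -log softmax at the positive pair (Eq. 3)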
Constructing Adversarial Examples for Image-Text Pair. Instead of maximizing the standard cross-entropy loss, we maximize the image-text contrastive loss given a batch of natural images x and text t:
x^a = \arg\max_{x^a} L_s(x^a, t, y), \quad \text{s.t. } \|x^a - x\|_q < \epsilon. \qquad (4)
Here, for image xi in the batch, the associated text ti can be its natural label text or constructed via the standard prompts for the zero-shot tasks (e.g., “a photo of a [LABEL]”). The indicator yij = 1 when image xi has category j as its ground-truth class, and 0 otherwise. In practice, we find this objective is effective at generating adversarial examples for zero-shot image recognition tasks.
Text-Guided Contrastive Adversarial Training. Once we have the adversarial examples, we optimize the parameters θ of the vision encoder Fθ to minimize the aforementioned objective (Equation 3) on the generated adversarial examples,
\theta = \arg\min_{\theta} L_s(x^a, t, y). \qquad (5)
Our algorithm iteratively alternates between generating adversarial examples and updating the model via Equation 5. Since TeCoA uses additional information from the text embeddings to correct the visual features corrupted by the adversarial attacks, it helps the model to retain zero-shot transferability regarding adversarial robustness.
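A compact sketch of one such alternation, reusing the image_text_contrastive_loss sketch above with a frozen text encoder and assumed PGD settings, is:

import torch

def tecoa_step(image_encoder, text_features, optimizer, x, labels,
               eps=1/255, alpha=1/255, attack_steps=2):
    # Inner maximization (Eq. 4): attack the image-text contrastive loss.
    x_adv = x.clone().detach()
    for _ in range(attack_steps):
        x_adv.requires_grad_(True)
        loss = image_text_contrastive_loss(image_encoder(x_adv), text_features, labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    # Outer minimization (Eq. 5): update the visual encoder on the adversarial batch.
    optimizer.zero_grad()
    loss = image_text_contrastive_loss(image_encoder(x_adv.detach()), text_features, labels)
    loss.backward()
    optimizer.step()
    return loss.item()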
Contrastive Adversarial Training without Text. To validate whether the gain of zero-shot robustness comes from language supervision or is due to the formulation of the contrastive loss itself, we consider two variants of contrastive losses which do not utilize language in our experiments. One is a contrastive adversarial training loss with one-hot labels (CoAdv.), where the label information is encoded as a one-hot vector and is used to contrast with the image features. Another is based on Jiang et al. (2020), where the model is finetuned on adversarial examples to fool the image-to-image contrastive loss, denoted as ImgCoAdv. More details are presented in Section 4.
4 EXPERIMENTS
We start by describing our experiment setups and present the experimental results of 16 datasets in Section 4.1. We observe that CLIP adapted with TeCoA achieves an average of 31 points improvement across the datasets over the original CLIP model. In Section 4.2, we provide extensive analysis over the design choices involved in our approach and identify that the use of language in TeCoA largely improves the model’s zero-shot adversarial robustness.
Datasets. We evaluate the zero-shot adversarial robustness conferred by TeCoA trained on ImageNet (Deng et al., 2009a) and report the performance of the models on the ImageNet test set as well as 15 zero-shot test datasets, covering a diverse range of recognition tasks. Specifically, we include CIFAR10, CIFAR100 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), Caltech101 (Fei-Fei et al., 2004), and Caltech256 (Griffin et al., 2007) for generic classification; OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Food101 (Bossard et al., 2014), Flowers102 (Nilsback & Zisserman, 2008), and FGVCAircraft (Maji et al., 2013) for fine-grained classification; SUN397 (Xiao et al., 2010) for scene recognition; and DTD (Cimpoi et al., 2014) for texture recognition. Finally, we include three datasets with domain-specialized tasks, PatchCamelyon (PCAM, lymph node tumor detection) (Veeling et al., 2018), HatefulMemes (hatespeech detection) (Kiela et al., 2020), and EuroSAT (satellite image classification) (Helber et al., 2017).
Baselines. We consider multiple variants to adapt CLIP (Radford et al., 2021). For adaptation methods, we consider visual prompting (VP) (Bahng et al., 2022), linear probing (LP) and finetuning (FT). Since LP involves training a linear readout layer on the target task, it is not zero-shot, but we still evaluate it for reference. For training losses, we consider
(1) vanilla cross-entropy loss (CE); (2) standard adversarial training loss (Adv.) with the cross-entropy loss; (3) contrastive adversarial training loss (CoAdv.); (4) contrastive adversarial training over images (ImgCoAdv.); (5) our text-guided contrastive adversarial training (TeCoA).
In our experiments, we name these variants by adaptation(loss). Detailed formulations and associated training algorithms for each loss can be found in Section A.3.
Implementation Details. We use CLIP-B/32 architecture. We optimize the model using an SGD optimizer with momentum 0.9. We train for 10 epochs. For finetuning, we use a learning rate of 1e − 5. For visual prompt tuning, we use a learning rate of 40 and a batch size of 256. For prompt tuning on the entire ImageNet dataset, we use token-level prompt with size 200, while for subsets of ImageNet (1K, 5K, and 50K images), we use token-level prompt with size 5 and a smaller batch size of 64. Unless specified, during adversarial training, we generate Linf = 1/255 bounded attacks using a 2-step PGD attack (Madry et al., 2018) with step size α = 1/255. We test the robustness of our model using 100 steps of PGD attack, with step size α = 1/255.
4.1 EXPERIMENTAL RESULTS
We first use PGD attack with 100 steps to evaluate the robustness of CLIP models that are adapted on ImageNet. In Table 1, we show the robust accuracies for the models on the Imagenet test set and 15 zero-shot recognition tasks (i.e., the training classes and test classes are non-overlapping). Each row is the robust accuracy for one method, where we compare the proposed TeCoA with VP and FT against 10 baselines and their variants. We bold the best results for each dataset (column).
From Table 1, we can see that most adapted models with adversarial training losses, except for ImgCoAdv., have better robust accuracies than the standard cross-entropy loss. Among the adversarial training variants, the vanilla FT (Adv.), which finetunes the entire model with adversarial training, achieves the best adversarial robustness on ImageNet (non-zero-shot data, the same task as training), while its average robust accuracy is 10.62%, only slightly better than the original CLIP (6.57%). In the meantime, VP (Adv.) achieves much better results, improving the average robust accuracy from 6.57% to 31.84%.
Figure 5: Zero-shot adversarial robustness under different perturbation bounds (ϵ = 1, 2, 4/255). We vary the perturbation bound for adversarial finetuning with TeCoA. Each adapted model is evaluated under attacks from the same bound seen during training. We show both the robust accuracy (left) and clean accuracy (right). Our defense is still effective on zero-shot tasks when the perturbation gets larger.
(a) Visual Prompt Design
(b) Partially FT vs. VP
Figure 6: (a, left) We conduct an ablation study of whether to add the visual prompt to the input image or to append a prompt token to the input sequence. (b, right) When we optimize the same number of parameters in partial finetuning and VP, we find VP is more effective when only a small number of parameters are optimized.
This indicates that visual prompting is more powerful than finetuning when coupled with the standard adversarial training.
Within each set of the variants using the same adaptation method, we compare the effectiveness of different training losses. We notice that both CoAdv. and ImgCoAdv. are much worse than the proposed TeCoA and even the standard adversarial training (Adv.). This indicates that the formulation of contrastive learning may not necessarily help improve the zero-shot adversarial robustness and the use of the language supervision might.
Overall, we find that adapting CLIP with TeCoA using model finetuning gives the best performance across the datasets, improving the average robust accuracy from 6.57% to 38.18%, roughly 31 points. This might look counter-intuitive, as VP significantly outperforms FT under standard adversarial training without texts. One hypothesis is that, with sufficient semantic information available, finetuning, which directly modifies the model parameters, is more expressive than only modifying the inputs, since more parameters are tuned. In Table 3 in the Appendix, we show the clean accuracy of the same models, which follows a similar trend as the robust accuracies. We also show results under ϵ = 1/255 and ϵ = 4/255 using AutoAttack (Croce & Hein, 2020) in Table 4 and Table 5, where our model is still more robust than the baselines, by up to an average of 36 points. We describe details in Section A.2.
4.2 ANALYSIS
Training Set Size is an important factor when adapting large-scale models. In Figure 4, we show the results of adapting CLIP with TeCoA with 1, 5, 50, and 1000 shot per category on ImageNet. We observe that increasing the amount of training data improves robust accuracy. Moreover, we also find that FT(TeCoA) outperforms the non-adapted CLIP even when the model is adapted with just one shot per class (the blue bars in Figure 4).
Effect of Attack Strength. To validate whether our method works when the adversarial perturbations become larger, we increase the perturbation bound for our TeCoA adversarial training. Figure 5 shows that, while increasing the attack strength decreases the robust accuracy, our model can still transfer robustness to zero-shot tasks.
Visual Prompt Designs. There are two ways to design visual prompts. One is to append additional tokens to image tokens and the other is to add small decoration (i.e., learnable noises) to the in-
put images. From Figure 6a, we can see appending learnable tokens in the input token sequences achieves consistently better performance than just adding the prompt value to the images.
Number of Adapted Parameters. The number of parameters that are adapted during training highly affects model performance. VP is light-weight, as it only modifies the inputs, while FT may adjust either part of the model or the entire model. In Figure 6b, we compare partial finetuning and visual prompting of CLIP. We can see that with the same number of parameters adapted, visual prompting is more effective in adapting the large-scale model for zero-shot adversarial robustness.
Are Labels Required for Adapting CLIP? Unlabeled data is abundant. We investigate whether it is necessary to use the ground-truth labels of the images. We generate pseudo text labels by letting CLIP retrieve the nearest text for each clean image (details in Section A.3.5), and then run TeCoA on the pseudo labels in the same way as on the labeled data. We show experimental results in Table 2, where we obtain a similar zero-shot robust accuracy even without labels.
Trading off between Robust Accuracy and Clean Accuracy. Similar to typical adversarial training, TeCoA also poses a trade-off between the clean accuracy and the robust accuracy (Tsipras et al., 2019). Ideally, we want to dynamically adjust this trade-off depending on the desired level of adversarial robustness. Here we balance this trade-off with model weight interpolation (Wortsman et al., 2022). In Figure 7, we can see there is a sweet spot in interpolating the model where we improve both the robust and clean accuracy, marked by the star in the figure.
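A minimal sketch of this interpolation, in the spirit of Wortsman et al. (2022); the state dict keys are assumed to match between the two checkpoints and the coefficient alpha is a free knob:

import torch

def interpolate_state_dicts(clean_state, robust_state, alpha=0.5):
    # theta = (1 - alpha) * theta_clean + alpha * theta_robust; alpha trades clean
    # accuracy (alpha -> 0) against adversarial robustness (alpha -> 1).
    return {k: (1 - alpha) * clean_state[k] + alpha * robust_state[k] for k in clean_state}

# Usage sketch: model.load_state_dict(interpolate_state_dicts(clip_sd, tecoa_sd, alpha=0.7))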
5 CONCLUSION
In this paper, we provided a holistic study of the zero-shot adversarial robustness problem for large-scale vision-language models. We identified the effects of various adaptation methods and training losses when adapting models, and conjectured that existing methods fail to generalize to new tasks due to the lack of language supervision. We proposed a text-guided contrastive adversarial training (TeCoA) loss which can be used with model finetuning and visual prompting to drastically improve the zero-shot adversarial robustness of CLIP. Extensive experimental evaluation showed the effectiveness of TeCoA, and the detailed analyses provide useful lessons for adapting large-scale models to improve their zero-shot adversarial robustness, shedding light on this important problem.
6 ACKNOWLEDGEMENT
This research is based on work partially supported by the DARPA SAIL-ON program, the DARPA MCS program, the NSF NRI Award #2132519, a GE/DARPA grant, a CAIT grant, and gifts from JP Morgan, DiDi, and Accenture.
Generative_Dual_Adversarial_Network_for_Generalized_Zero-Shot_Learning_CVPR_2019_paper.html.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. arXiv preprint arXiv:2203.12119, 2022.
Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. Advances in Neural Information Processing Systems, 33:16199–16210, 2020.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611–2624, 2020.
Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. Advances in Neural Information Processing Systems, 33:2983–2994, 2020.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2017.
Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 951–958. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206594. URL https://doi.org/10.1109/CVPR.2009.5206594.
Matthew Lawhon, Chengzhi Mao, and Junfeng Yang. Using multiple self-supervised tasks improves model robustness. arXiv preprint arXiv:2204.03714, 2022.
Bo Liu, Qiulei Dong, and Zhanyi Hu. Zero-shot learning from adversarial feature residual to compact visual feature. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 11547–11554. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6821.
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, and Carl Vondrick. Shape analysis by shadow synthesis.
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, and Carl Vondrick. Landscape learning for neural network inversion. arXiv e-prints, pp. arXiv–2206, 2022.
Yang Liu, Jishun Guo, Deng Cai, and Xiaofei He. Attribute attention for semantic disambiguation in zero-shot learning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 6697–6706. IEEE, 2019. doi: 10.1109/ICCV.2019.00680. URL https://doi.org/10.1109/ICCV.2019.00680.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. Multitask learning strengthens adversarial robustness. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 158– 174, Cham, 2020. Springer International Publishing.
Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, and Carl Vondrick. Adversarial attacks are reversible with natural supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 661–671, 2021.
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, and Carl Vondrick. Robust perception through equivariance, 2022. URL https://arxiv.org/abs/2212. 06079.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks, 2016.
Jian Ni, Shanghang Zhang, and Haiyong Xie. Dual adversarial semantics-consistent network for generalized zero-shot learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 6143–6154, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ c46482dd5d39742f0bfd417b492d0e8e-Abstract.html.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. IEEE, 2008.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. Zero-shot learning with semantic output codes. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper/2009/file/ 1543843a4723ed2ab08e18053ae6dc5b-Paper.pdf.
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training, 2020.
Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. arXiv:1511.07528, 2015.
Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012.
Anibal Pedraza, Oscar Deniz, and Gloria Bueno. On the relationship between generalization and robustness to adversarial examples. Symmetry, 13(5):817, 2021.
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. arXiv preprint arXiv:2111.10050, 2021.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning, 2020.
Bernardino Romera-Paredes and Philip H. S. Torr. An embarrassingly simple approach to zeroshot learning. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 2152–2161. JMLR.org, 2015. URL http://proceedings.mlr.press/v37/romera-paredes15.html.
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? Advances in Neural Information Processing Systems, 33:3533–3545, 2020.
Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, and Andrew Jackson. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12155–12164, 2022.
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8247–8255, 2019.
Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, and Chaowei Xiao. Test-time prompt tuning for zero-shot generalization in vision-language models, 2022. URL https://arxiv.org/abs/2209.07511.
David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy, 2019.
Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? CoRR, 2019.
Pratik Vaishnavi, Kevin Eykholt, and Amir Rahmati. Transferring adversarial robustness through robust representation matching. arXiv preprint arXiv:2202.09994, 2022.
Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In International Conference on Medical image computing and computer-assisted intervention, pp. 210–218. Springer, 2018.
Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. A meta-learning framework for generalized zero-shot learning. CoRR, abs/1909.04344, 2019. URL http://arxiv.org/abs/1909. 04344.
Hao Wang, Chengzhi Mao, Hao He, Mingmin Zhao, Tommi S Jaakkola, and Dina Katabi. Bidirectional inference networks: A class of deep bayesian networks for health profiling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 766–773, 2019.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7959–7971, 2022.
Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 5542– 5551. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00581. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Xian_Feature_Generating_Networks_CVPR_2018_paper.html.
Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010.
Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, and Ling Shao. Attentive region embedding network for zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 9384–9393. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00961. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Xie_ Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_ 2019_paper.html.
Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, and Zhongfei (Mark) Zhang. Stacked semantics-guided attention model for fine-grained zero-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 5998–6007, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 9087b0efc7c7acd1ef7e153678809c77-Abstract.html.
Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, and Pinar Duygulu. A deep dive into adversarial robustness in zero-shot learning. In European Conference on Computer Vision, pp. 3–21. Springer, 2020.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv abs/1901.08573, 2019.
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Prompt consistency for zero-shot task generalization. arXiv preprint arXiv:2205.00049, 2022a.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16816–16825, 2022b.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for visionlanguage models. International Journal of Computer Vision, 130(9):2337–2348, 2022c.
A APPENDIX
A.1 EXPERIMENTS
A.1.1 ZERO-SHOT CLEAN ACCURACY OF OUR ADAPTED MODEL
We show the results for accuracy on clean images in Table 3.
A.2 AUTOATTACK EXPERIMENT
We also consider a stronger attack AutoAttack Croce & Hein (2020) in our evaluation. Since our method uses adversarial training and does not rely on the obfuscated gradient, we use two APGD variants, APGD-CE and APGD-DLR, in AutoAttack to evaluate. We show robust accuracy under perturbation bound ϵ = 1/255 in Table 4 and robust accuracy under perturbation bound ϵ = 4/255 in Table 5. For both perturbation bounds, our method achieves higher robust accuracy than vanilla CLIP by up to 36 points on average.
Notably, at ϵ = 1/255, even evaluated under the stronger AutoAttack, the robust accuracy of FT (TeCoA) is 37.02, which is still higher than that of all other baseline methods in Table 1, even though those baselines are evaluated under the weaker PGD100 attack. A larger perturbation bound ϵ = 4/255 makes the attack stronger, yet our method still improves robustness by an average of 9 points. In addition, while AutoAttack significantly reduces the robust accuracy of CLIP from 6.57 to 0.53, it only slightly decreases TeCoA’s robust accuracy: 2.1 points for visual prompt tuning and 1.16 points for the finetuned model (see Table 1 and Table 4).
One reason AutoAttack is more effective than PGD100 at attacking vanilla CLIP is that it uses fractional attack values, which are not rounded to multiples of 1/255 at inference. Images are typically encoded as integers from 0 to 255, which only permits attacks at the integer level. In the main paper, we use PGD attacks with step size 1/255 (step size 1 on the 0–255 scale) for 100 steps; since there are no fractional attack values, the attack space is constrained and the attack is less effective. Any perturbation component smaller than half a quantization step is rounded away when the image is encoded. If we ignore the fact that images are encoded as integers between 0 and 255, stronger attacks become possible by exploring fractional values. Because AutoAttack automatically reduces its step size when the loss oscillates, it explores this fractional space and is therefore a more effective attack.
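For concreteness, the quantization that removes fractional perturbation components when an attacked image is re-encoded as 8-bit integers can be written as the following sketch:

import torch

def quantize_to_8bit(x):
    # Round an image in [0, 1] to the 1/255 grid, as 8-bit encoding would;
    # perturbation components smaller than half a quantization step are lost.
    return torch.round(x * 255.0) / 255.0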
A.3 TRAINING LOSSES AND ALGORITHMS
We give formulations for the different training algorithms considered in our experiments. Throughout this section, let Fθ denote an image encoder parameterized by θ, and let T denote a frozen text encoder. Let D denote a dataset containing pairs (x, y) of images and their respective one-hot labels.
A.3.1 STANDARD ADVERSARIAL TRAINING WITH CROSS-ENTROPY LOSS (ADV.)
The standard adversarial training paradigm. We initialize a learnable linear layer Cϕ and append it to Fθ. The classification loss L(Cϕ(Fθ(x^a)), y) is the cross-entropy loss. We first train Cϕ on standard images. Then, given a natural image x with one-hot label y, we generate an attacked image x^a by maximizing the loss L(Cϕ(Fθ(x^a)), y). We then update θ to minimize the loss L(Cϕ(Fθ(x^a)), y). We describe our algorithm in Algorithm 1.
Algorithm 1 Standard Adversarial Training (Adv.)
Input: Dataset D, learnable parameter θ, model F, parameters of projector Cϕ
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} L(Cϕ(Fθ(x^a)), y)    ▷ Generate adversarial attacks
        θ = θ − ∇θ L(Cϕ(Fθ(x^a)), y)    ▷ Train on generated adversarial examples
    end for
end for
A.3.2 CONTRASTIVE ADVERSARIAL TRAINING LOSS (COADV.)
We study how much the contrastive learning objective contributes to the zero-shot robustness gain. Instead of using one-hot label y and cross-entropy loss in our objective, we create a dictionary of embeddings E by random initialization, where each embedding ei denotes the code representation for the category yi. We optimize the following contrastive learning loss:
L_s(x, \mathcal{E}, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^{(I)}, e_j)/\tau)}{\sum_k \exp(\cos(z_i^{(I)}, e_k)/\tau)} \right], \qquad (6)
where z_i^{(I)} = F_\theta(x_i) are the features of the input image, and e_j is the code representation of category j from the dictionary. We use y_{ij} to indicate which image-code pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function. We describe our algorithm in Algorithm 2.
Algorithm 2 Contrastive Adversarial Training Loss (CoAdv.)
Input: Dataset D, learnable parameter θ, model F, embedding dictionary E
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} L_s(x^a, E, y)    ▷ Generate adversarial attacks for the contrastive loss
        θ = θ − ∇θ L_s(x^a, E, y)    ▷ Contrastive learning on generated adversarial examples
    end for
end for
A.3.3 CONTRASTIVE ADVERSARIAL TRAINING OVER IMAGES (IMGCOADV.)
Prior work Jiang et al. (2020) uses image-only contrastive adversarial learning to obtain robustness. We adapt this method as a baseline to study whether using the knowledge from only images — not language — can achieve zero-shot robustness. For each image xi, we create a transformed version xj and form the image pair (xi, xj).
We use the same visual encoder to embed the images xi and xj to obtain the features zi and zj . We then construct the following contrastive learning loss:
L_s(x_i, x_j, y) = -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i, z_j)/\tau)}{\sum_k \exp(\cos(z_i, z_k)/\tau)} \right], \qquad (7)
where the zi = Fθ(xi) are the features of the input image. We use yij to indicate which image pairs are positive and which are negative; this indicator satisfies yij = 1 if and only if the images xi and xj are augmented from the same instance, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
Let z_i^a = F_\theta(x_i^a), where x_i^a denotes the generated adversarial example. Then we obtain the adversarial examples via:
x_i^a = \arg\max_{x_i^a} L_s(x_i^a, x_j, y) = \arg\max_{x_i^a} -\mathbb{E}_{i,j}\left[ y_{ij} \log \frac{\exp(\cos(z_i^a, z_j)/\tau)}{\sum_k \exp(\cos(z_i^a, z_k)/\tau)} \right]. \qquad (8)
Once we generate the adversarial images, we conduct contrastive learning on adversarial images and the paired clean images using Equation 7.
We introduce our algorithm in Algorithm 3.
A.3.4 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING (TECOA)
We describe the TeCoA training algorithm in Algorithm 4. We denote the learnable parameters to be θ. For the visual prompt tuning, θ is only the prompt vector. For the finetuning method, θ is the parameter of the whole model.
Algorithm 3 Contrastive Adversarial Training over Images (ImgCoAdv.)
Input: Dataset D, learnable parameter θ, model F
for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch do
        x_i^a = argmax_{x_i^a} L_s(x_i^a, x_j, y)    ▷ Generate adversarial attacks for the contrastive loss
        θ = θ − ∇θ L_s(x_i^a, x_j, y)    ▷ Contrastive learning on generated adversarial examples
    end for
end for
Algorithm 4 TeCoA Training
Input: Dataset D, learnable parameter θ, model F, text t
for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
        x^a = argmax_{x^a} L_s(x^a, t, y)    ▷ Generate adversarial attacks for the contrastive loss
        θ = θ − ∇θ L_s(x^a, t, y)    ▷ Contrastive learning on generated adversarial examples
    end for
end for
A.3.5 TECOA LEARNING ON UNLABELED DATA
Given an unlabeled image, we first provide a list of text using the possible category names:
A photo of a {Category Name}.
Since the unlabeled images are not attacked, CLIP can retrieve the nearest text embedding from the image embedding and use the text as the pseudo label for the image. We then conduct the TeCoA training on the images and their pseudo text label. We describe the algorithm below in Algorithm 5.
Algorithm 5 TeCoA Training on Unlabeled Data
Input: Unlabeled dataset D, learnable parameter θ, model F, text t
for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch B = {x1, . . . , xm} do
        y = argmin_y L_s(x, t, y)    ▷ Find pseudo labels for the clean images using CLIP
        x^a = argmax_{x^a} L_s(x^a, t, y)    ▷ Generate adversarial attacks for the contrastive loss
        θ = θ − ∇θ L_s(x^a, t, y)    ▷ Contrastive learning on generated adversarial examples
    end for
end for
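A sketch of the pseudo-labeling step on clean images follows; the text encoder is assumed to take already-tokenized prompts of the form "A photo of a {Category Name}.", one per class:

import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_pseudo_labels(image_encoder, text_encoder, images, prompt_tokens):
    # Retrieve, for each clean image, the class prompt with the highest cosine similarity.
    img = F.normalize(image_encoder(images), dim=-1)         # (B, D)
    txt = F.normalize(text_encoder(prompt_tokens), dim=-1)   # (C, D), one row per class prompt
    return (img @ txt.t()).argmax(dim=-1)                    # (B,) pseudo class indices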
A.4 DISCUSSION FOR RESULTS
In Table 1, our TeCoA performs better than existing methods except for LP(CE) and VPT(Adv.) on the HateMemes and PCAM datasets. This is because HateMemes is a binary classification task for detecting hateful speech, which is a very different domain from ImageNet classification. Since both LP(CE) and VPT(Adv.) only adapt a small number of parameters on the ImageNet set, the resulting models may overfit less to the ImageNet recognition task and end up close to random guessing; note that the 54% accuracy is close to the 50% of random guessing. In addition, PCAM is a binary classification task for lymph node tumors (medical images, https://github.com/basveeling/pcam), which is also a very different domain from ImageNet. Similar to HateMemes, adapting fewer parameters makes the model learn less and overfit less, and the 52.5% accuracy is again close to the 50% of random guessing. Thus, both datasets remain a big challenge for all existing zero-shot robust classification methods.
A.5 DISCUSSION FOR TECOA LOSS
In the main paper, we interpret our loss through the image-text contrastive objective, which first conducts a matrix multiplication between the image embedding and language embedding, and then applies a cross-entropy loss on the output. Since the text embedding is fixed, this embedding can be treated as a layer of linear classifier, whose weights are obtained from the language embedding. This image-text contrastive objective can also be interpreted as using cross-entropy loss on a fixed readout layer that is initialized with the right language knowledge. This further validates the importance of language information for zero-shot adversarial robustness.
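A sketch of this view, where the frozen class text embeddings act as the weight matrix of a fixed linear readout (the temperature value is an assumed placeholder):

import torch
import torch.nn.functional as F

def zero_shot_head(image_features, text_features, tau=0.07):
    # Logits of a fixed linear classifier whose rows are the normalized class text embeddings;
    # applying cross-entropy to these logits recovers the image-text contrastive objective (Eq. 3).
    img = F.normalize(image_features, dim=-1)     # (B, D)
    weight = F.normalize(text_features, dim=-1)   # (C, D)
    return img @ weight.t() / tau                 # (B, C)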
A.6 ADAPTATION METHOD FORMULATION
Token-Level Visual Prompts. Token-level visual prompts adapt transformer-based models by appending tokens to the input token sequence. This is the most effective prompt method in our experiments, where we use this by default unless specified. Our visual prompts append additional tokens Pk to the input sequence x of the vision transformer:
x = [x;P0, P1, ..., Pk] (9)
The remaining transformer parameters and computations are kept the same as the original.
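A sketch of this token-level prompting is below (e.g., 200 prompt tokens on full ImageNet, as noted in Section 4; the embedding width and initialization scale are illustrative assumptions):

import torch
import torch.nn as nn

class TokenLevelPrompt(nn.Module):
    # Learnable prompt tokens appended to the patch-token sequence of the vision transformer (Eq. 9).
    def __init__(self, n_tokens=200, embed_dim=768):   # sizes are illustrative assumptions
        super().__init__()
        self.prompt = nn.Parameter(0.02 * torch.randn(1, n_tokens, embed_dim))

    def forward(self, tokens):                          # tokens: (B, N, embed_dim)
        prompt = self.prompt.expand(tokens.size(0), -1, -1)
        return torch.cat([tokens, prompt], dim=1)       # [x; P_0, ..., P_k]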
Image-Level Visual Prompts. Image-level visual prompts adapt transformer models by adding the prompt to the input pixels. This is a less effective way of adding a prompt, as discussed in Figure 6a. Let the prompt be P and the input image be x; the visual prompt is added to the input image:
x = x + P    (10)
The remaining transformer parameters and computations are kept the same as the original.
Finetuning. This is the standard way of adapting a model, where all parameters of the model are updated with a relatively small learning rate. In our experiments, we find that a learning rate of 1e-5 achieves the best performance. | 1. What is the focus of the paper regarding zero-shot classification and adversarial robustness?
2. What are the strengths and weaknesses of the proposed TeCoA approach?
3. Are there any concerns or questions about the experiments and their implementation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as using stronger attacks for robustness evaluation or providing more explanations for abnormal results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper explores the problem of adapting large-scale pre-trained models for adversarially robust zero-shot classification. It is found that vanilla adversarial training on a single task may reduce the zero-shot capability of the pre-trained model. To improve the zero-shot adversarial robustness, a text-guided contrastive adversarial training (TeCoA) is proposed, which aligns the image embeddings of adversarial examples and the text embeddings of the standard prompts for zero-shot predictions. Experiments validate the effectiveness of TeCoA.
Strengths And Weaknesses
[Strength]
The problem of zero-shot adversarial robustness is important in practice and has not been well explored.
Some of the results are interesting and may inspire future works on this problem, e.g., Figure 1 and Table 2.
[Weaknesses]
Technical novelty. The proposed TeCoA is nearly identical to: (1) first construct a zero-shot linear classifier head with the pre-defined text prompts for zero-shot classification, as done in (Wortsman et al., 2022), and then (2) perform vanilla adversarial training (using CE loss) with the classification head frozen. Hence the main technical contribution may be that the initialization (and/or freezing) of the classification head is important for adversarial fine-tuning.
Missing important experimental details. The missing implementation details include model architecture, training epochs and which visual prompt design is used. Besides, the generation of pseudo text labels for images (for results in Table 2) needs further explanation.
Robustness evaluation. PGD-100 may not be a reliable attack for robustness evaluation. It would be better to consider stronger attacks like AutoAttack (or at least the two APGD attacks used in AutoAttack) [1]. Besides, the main results in Table 1 are based on eps=1/255, which may be too small. Using larger eps like 4/255 as in [1] may be more convincing.
[1] Croce, Francesco, and Matthias Hein. "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." International conference on machine learning. PMLR, 2020.
Explanation of abnormal results. As shown in Table 1, while FT (TeCoA) is the best on most datasets, LP (CE) and VPT (adv.) outperform other methods by a large margin on HateMemes and PACM. Is there any possible explanation?
Table 1 and Table 3 are inconsistent. Especially, there are two rows starting with "+VP (TeCoA)", which is confusing.
Clarity, Quality, Novelty And Reproducibility
The paper is readable, but some important details are missing. The novelty is limited as discussed in the Weaknesses. The reproducibility may be questionable due to some missing experimental details and that code is not provided. |
ICLR | Title
Understanding Zero-shot Adversarial Robustness for Large-Scale Models
Abstract
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP’s performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaption—training losses and adaptation methods—that affect the model’s zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaption methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of texts, while finetuning wins in the existence of text guidance. Overall, our approach significantly improves the zero-shot adversarial robustness over CLIP, seeing an average improvement of 31 points over ImageNet and 15 zero-shot datasets. Our code and model is available at github.com/cvlab-columbia/ZSRobust4FoundationModel.
1 INTRODUCTION
Large-scale models trained on vision and language data—also known as foundation models— have emerged as a universal backbone for tackling many recognition problems in computer vision (Jia et al., 2021; Radford et al., 2021), graphics (Ramesh et al., 2022) and robotics (Ahn et al., 2022). One of the key advantages of foundation models is zero-shot generalization, where the models use just a single textual description to recognize new visual categories with high accuracy. Since those large-scale models are powerful, they will continue to be used in many critical applications, where it is important to make them reliable. However, robustness under adversarial examples remains a challenge, where an imperceptible pattern can be combined with the image to cause recognition failures (Croce & Hein, 2020; Carlini & Wagner, 2017; Dong et al., 2018; Szegedy et al., 2013; Moosavi-Dezfooli et al., 2016), where attack on foundation models can consequently corrupt the downstream applications.
Due to the importance of this problem, there is a large literature that investigates adversarial robustness for neural networks. The most common approach for adversarial defense is to learn the model through adversarial training (Madry et al., 2018; Mao et al., 2019; Szegedy et al., 2013; Pang et al., 2020; Rice et al., 2020; Uesato et al., 2019), which involves augmenting the training set with mined adversarial examples that fool the image classifier. Adversarial training has been validated to improve robustness on the task that the mined examples come from, but it often comes at a cost of generalization (Stutz et al., 2019; Su et al., 2018; Pedraza et al., 2021). However, our world is vast and naturally open, and only evaluating adversarial robustness on the learned tasks is limited. Can we achieve zero-shot transferability for adversarial robustness, even if the model has never been trained on the unknown tasks?
In this paper, we study this important yet under-explored problem, zero-shot adversarial robustness of large-scale vision-language models. We start our investigation with the state-of-the-art CLIP model (Radford et al., 2021), which has been shown to be effective in zero-shot recognition tasks. We find that simply adding an imperceptible vector to the image (≤ 1/255) can subvert
∗Equal contribution
CLIP’s prediction (see Figure 1a). If we follow the standard adversarial training defense paradigm (Madry et al., 2018; Rice et al., 2020) to finetune CLIP on the ImageNet (Deng et al., 2009b) training set, we observe that the adapted CLIP has improved adversarial robustness on the ImageNet validation set, but comes at the cost of significantly reduced accuracy on unseen datasets and classes (Figure 1b). Standard adversarial training backfires on CLIP as it fails to retain the model’s zero-shot generalization ability.
Adaptation methods and training objectives are the two major factors for adapting a large-scale model. First, besides finetuning the whole model, we seek an alternative adaptation method—visual prompt tuning—which adapts the inputs instead of the parameters of the model. Visual prompt tuning (VPT) is an emerging light-weight adaptation method (Bar et al., 2022; Bahng et al., 2022) that learns a visual prompt which is added to the input image, where we use visual prompt to instruct the model to be robust against adversaries. Second, we find that the standard adversarial training objective ignores the visual-language alignment in CLIP’s pretrained representation space, causing the model to lose zero-shot capability. We then propose a text-guided contrastive adversarial training (TeCoA) loss, dubbed as Tekoa (tee·kow), which maximizes the similarity of the adversarial visual features and the correct text embeddings with contrastive learning. Since the adapted visual features continue to align well with the text features, the model adapted with TeCoA can maximally retain the original zero-shot generalization of CLIP while enjoying improved adversarial robustness.
We conduct an extensive evaluation on 15 zero-shot image datasets, offering a holistic study of the zero-shot adversarial robustness problem. This is especially important given that large-scale vision models are emerging as infrastructure and are deploying to critical applications. We find that the lightweight VPT is noticeably more effective than model finetuning when textual information is unavailable. When texts are used during adaptation, both VPT and finetuning using our TeCoA loss have drastically improved zero-shot adversarial robustness compared to baselines. Finetuning has higher gains than VPT as more parameters are tuned. Our best performing model with the TeCoA loss can improve adversarial robustness over CLIP by an average of 31% across the datasets. Our method also works on unlabeled images, allowing for better robustness with a large amount of unlabeled data. Our work establish a new and important benchmarket, zero-shot adversarial robustness, for future work to evaluate on. We release all models and code.
2 RELATED WORK
Zero-Shot Generalization aims to classify novel classes and tasks that are unseen during training (Palatucci et al., 2009; Lampert et al., 2009; Radford et al., 2021). Existing zero-shot methods often project visual features into semantic feature space (Frome et al., 2013; Akata et al., 2015; Romera-Paredes & Torr, 2015; Xie et al., 2019; Yu et al., 2018; Liu et al., 2019), or use generative methods to generate fake visual features of unseen classes from their semantic descriptions to train classifiers (Xian et al., 2018; Ni et al., 2019; Huang et al., 2019; Schonfeld et al., 2019; Verma et al., 2019; Liu et al., 2020). Recently, large-scale pretrained vision-language models (Radford et al., 2021; Jia et al., 2021) have shown outstanding zero-shot generalization ability on unseen tasks via text prompt engineering. Their adversarial robustness and its transferability, however, have not been studied in the zero-shot setting.
Adversarial Robustness. Adversarial attacks for image recognition find an additive vector on the input to maximize the cross-entropy loss, which is calculated from the model prediction and the ground truth one-hot label (Szegedy et al. (2013); Athalye et al. (2018); Carlini & Wagner (2017);
Kurakin et al. (2017); Papernot et al. (2015); Moosavi-Dezfooli et al. (2016)). Adversarial training (Madry et al., 2018) and its variants (Zhang et al., 2019; Rice et al., 2020), which train the model on generated adversarial examples, are effective in improving adversarial robustness on the task that they have been trained on. However, it is unclear whether this approach can improve robustness in the zero-shot scenario. To the best of our knowledge, Yucel et al. (2020) is so far the only work that studies the adversarial robustness of zero-shot learning models. Their setting is limited because it relies on predicting robust attributes, which may not be easily available for many tasks.
Transferability of Robustness. Mao et al. (2020) shows that adversarial robustness is transferable across tasks when multiple tasks are trained together. Salman et al. (2020) shows that adversarially robust models transfer better than their standard-trained counterparts when they are finetuned on other tasks. Chan et al. (2020) finds that matching a robust model’s gradient can transfer robustness to a new model. Vaishnavi et al. (2022) proposes a low-cost method to transfer robustness to a new model on the same task with different architecture. However, the transferability of adversarial robustness to zero-shot tasks has not been investigated.
Contrastive Learning (Oord et al., 2018) has been used to train large-scale image-language models (Jia et al., 2021; Radford et al., 2021). Kim et al. (2020); Jiang et al. (2020) propose instancewise adversarial perturbation and use a contrastive loss to align the features of clean examples and generated adversarial examples. Our method is the first one to introduce a cross-modal image-text contrastive loss in adversarial contrastive learning.
Adapting Pretrained Models. Linear probing and finetuning are the two major ways to adapt deep pretrained models. Recently, a more lightweight adaptation method, prompt tuning, has been proposed (Zhou et al., 2022c;b;a). Shu et al. (2022) shows that test-time optimization for text prompting helps generalization. Bar et al. (2022) combines target task and image inpainting to achieve zeroshot task inference. Jia et al. (2022); Sandler et al. (2022) used visual prompting to replace the finetuning procedure for large-scale models. Bahng et al. (2022) optimizes a visual prompt to increase the performance on the same task that the model finetunes the visual prompt on. Mao et al. (2021); Lawhon et al. (2022); Mao et al. (2022) find input prompt with self-supervised objective for robustness. Prompt tuning for continuous learning has also been proposed (Conder et al., 2022). Liu et al. (2022) proposes an amortized approach to use fewer iterations to adapt models. Wang et al. (2019) showed that it is useful in improving the performance for healthcare, and Liu et al. showed it can be used to adapt generative models for solving under constrained problems. However, adapting large-scale models for transferable zero-shot robustness, using methods such as finetuning or visual prompting, has not been investigated.
3 MODEL ADAPTATION FOR ZERO-SHOT ADVERSARIAL ROBUSTNESS
We first give background in Section 3.1 on adversarial attacks, adversarial training, and the problem setup. In Section 3.2, we discuss adaptation methods for adapting large-scale models for zero-shot adversarial robustness. Finally, in Section 3.3, we discuss the effect of different training losses to motivate and then introduce our proposed text-guided contrastive adversarial training (TeCoA) loss.
3.1 BACKGROUND AND PROBLEM SETUP
Let Fθ(·) be a deep model for image recognition parameterized by θ. Given an input image x, the model produces a representation ŷ = Fθ(x). For image classification, a standard model learns to minimize the cross-entropy loss, L(x,y) = H(Fθ(x),y), where the label y is often represented by a one-hot vector.
Adversarial Attacks. An adversary (i.e., attacker) typically optimizes for an additive transformation δ to the image, xa = x+ δ, which can fool the model Fθ to make incorrect predictions:
xa = argmax_{xa} L(xa, y),   s.t.   ||xa − x||_q ≤ ϵ.    (1)
Here, the magnitude of the added pattern is bounded by a q-norm ball of radius ϵ, making the attack perceptually invisible. The role of a defender is to find ways to correct the influence of the attacks and ensure that the model is robust to the adversarial perturbations.
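For concreteness, the attack in Equation 1 is commonly instantiated with projected gradient descent (PGD). Below is a minimal PyTorch sketch assuming an L∞ ball (q = ∞) and pixel values in [0, 1]; `model` and `loss_fn` are placeholders for any differentiable classifier and loss, not the exact implementation used in the paper.

import torch

def pgd_attack(model, loss_fn, x, y, eps=1/255, alpha=1/255, steps=100):
    """Return x_a maximizing loss_fn(model(x_a), y) with ||x_a - x||_inf <= eps."""
    x_a = x.clone().detach()
    for _ in range(steps):
        x_a.requires_grad_(True)
        loss = loss_fn(model(x_a), y)
        grad = torch.autograd.grad(loss, x_a)[0]
        x_a = x_a.detach() + alpha * grad.sign()                              # gradient-ascent step
        x_a = torch.clamp(x + torch.clamp(x_a - x, -eps, eps), 0.0, 1.0).detach()  # project onto eps-ball, keep valid pixels
    return x_a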
Zero-Shot Adversarial Robustness. Large-scale vision-language models generalize well to new tasks and datasets at test time. However, the zero-shot transferability of adversarial robustness of these models is less explored. In our zero-shot adversarial robustness setup, we study the worst case: we assume that the attacker has the significant advantage of unrestricted access to the ground truth of new tasks at test time, while the defender has no access. While the attacker can directly optimize for an attacked image that fools the model, the defender needs to be robust on all kinds of unseen data and tasks. Compared to the commonly-used robustness setting, which only evaluates robustness on the trained task, our setup is more challenging.
Adversarial Training is the common strategy to improve a model’s adversarial robustness. By retraining the model on mined adversarial examples, the model is incentivized to learn features invariant under adversarial transformations. The defender often optimizes for the objective
θ = argmin_θ L(Fθ(xa), y),    (2)
so that the model Fθ still makes correct predictions on the generated adversarial examples. As shown in Figure 1, the vanilla adversarial training approach is effective on seen tasks but fails when the model is evaluated on attacked data from unseen tasks.
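A hedged sketch of this vanilla adversarial training loop is given below; it reuses the pgd_attack routine sketched above, and the classifier, data loader, optimizer, and hyperparameters are illustrative assumptions rather than the paper's exact setup.

import torch
import torch.nn.functional as F

def adversarial_training_epoch(classifier, train_loader, optimizer, eps=1/255):
    classifier.train()
    for x, y in train_loader:
        # mine adversarial examples for the current batch (a few PGD steps)
        x_a = pgd_attack(classifier, F.cross_entropy, x, y, eps=eps, alpha=eps, steps=2)
        optimizer.zero_grad()
        loss = F.cross_entropy(classifier(x_a), y)   # Eq. 2: train on the mined adversarial examples
        loss.backward()
        optimizer.step()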
3.2 ADAPTING THE LARGE-SCALE MODELS
One of the key advantages of large-scale pretrained models is that the features of these models are generalizable to various tasks without full retraining. Thus, we would like to make a lightweight adjustment to the models with a small set of training data of attacked images on known tasks, and maximally retain the zero-shot generalization ability of these large models while improving their adversarial robustness on new tasks. We adopt CLIP, one of the best performing vision-language models for zero-shot recognition, as our base model, and investigate a few adaptation strategies as shown in Figure 2 to improve the zero-shot adversarial robustness of CLIP.
Finetuning (FT). A typical way to adapt a pretrained model, finetuning, is to update the parameters of the model either entirely or partially. While this approach improves model robustness on the target distribution, Radford et al. (2021) and Pham et al. (2021) show that this improvement often comes at a cost of generalization; directly modifying model parameters may lead to a higher tendency towards overfitting.
Visual Prompting (VP). Recently, visual prompt tuning has emerged as a lightweight approach for adapting pretrained large-scale models. Originating from the natural language processing community, the key idea of prompt tuning is to identify prompts to decorate the inputs that effectively query the pretrained models. The computer vision community recently adopted this idea and is starting to explore approaches that can modify input images in pixel space to better adapt large-scale vision
models for downstream tasks. For transformer-based models, visual prompting methods either learn a token that is appended to the input token sequences (Bar et al., 2022) or learn a direct modification to the input images (Bahng et al., 2022) for adaptation, as shown by (d,e) in Figure 2.
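To make the two prompt designs concrete, the following sketch shows an image-level prompt (a learnable additive pattern on the pixels) and a token-level prompt (learnable tokens appended to a ViT token sequence). The module names, sizes, and initialization are illustrative assumptions rather than the exact configuration used in our experiments.

import torch
import torch.nn as nn

class ImageLevelPrompt(nn.Module):
    """Learnable additive perturbation applied directly to the input image."""
    def __init__(self, image_size=224):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x):
        return x + self.prompt

class TokenLevelPrompt(nn.Module):
    """Learnable tokens appended to the patch-token sequence of a vision transformer."""
    def __init__(self, num_tokens=5, dim=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)

    def forward(self, tokens):                 # tokens: (batch, seq_len, dim)
        batch = tokens.shape[0]
        return torch.cat([tokens, self.prompt.expand(batch, -1, -1)], dim=1)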
In our study, we conduct an extensive analysis over both adaptation methods to better understand adapting large-scale models for zero-shot adversarial robustness.
3.3 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING
Through our initial experiments, we conjecture that the training objective might play a key role in improving the zero-shot adversarial robustness of the models. Radford et al. (2021) indicates that the zero-shot generalization ability of large-scale vision-language models may come from their language supervision. For example, CLIP learns a joint visual-and-text feature space, which helps zero-shot generalization at test time. If we simply finetune the visual encoder with one-hot labels, it may break the joint feature space and harm this zero-shot generalization ability. These observations motivate us to consider using the text information when generating the adversarial examples and also in the training objective during model adaptation.
We now introduce the Text-guided Contrastive Adversarial (TeCoA) training loss, to effectively incorporate text information. In contrast to prior contrastive learning which does image-to-image contrastive learning (Jiang et al., 2020; Kim et al., 2020), we consider a cross-modal text-to-image contrastive learning paradigm. We first generate a set of adversarial examples conditioned on the text inputs which are targeted to fool the model about the correspondence between image features and text embeddings. TeCoA then tries to minimize the feature distance between the attacked image and the correct corresponding text inputs contrastively (Figure 3). We provide additional discussion of this text-image contrastive objective in Appendix A.5.
Concretely, given a set of image and text pairs {(xi, ti)}, we use the pretrained CLIP model to encode each pair with an image encoder Fθ and a text encoder T . We then have the following image-to-text contrastive loss function,
Ls(x, t, y) = −E_{i,j} [ y_{ij} log ( exp(cos(z_i^{(I)}, z_j^{(T)})/τ) / Σ_k exp(cos(z_i^{(I)}, z_k^{(T)})/τ) ) ],    (3)
where z_i^{(I)} = Fθ(x_i) are the features of the input image, and z_i^{(T)} = T(t_i) are the features of the input text. We use y_{ij} to indicate which image-text pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
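A minimal sketch of this loss is shown below. When each image in the batch is contrasted against one text embedding per class and y_{ij} selects the ground-truth class, the loss reduces to a cross-entropy over cosine-similarity logits; the temperature value and tensor shapes here are assumptions of the sketch.

import torch
import torch.nn.functional as F

def tecoa_loss(image_features, text_features, targets, tau=0.07):
    """Image-text contrastive loss of Eq. 3 with one text embedding per class.
    image_features: (B, D) outputs of the image encoder F_theta
    text_features:  (C, D) embeddings of the class prompts from the frozen text encoder
    targets:        (B,) ground-truth class indices (playing the role of y_ij)
    """
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)
    logits = img @ txt.t() / tau              # cosine similarities scaled by temperature tau
    return F.cross_entropy(logits, targets)   # log-softmax over the text embeddings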
Constructing Adversarial Examples for Image-Text Pair. Instead of maximizing the standard cross-entropy loss, we maximize the image-text contrastive loss given a batch of natural images x and text t:
xa = argmax_{xa} Ls(xa, t, y),   s.t.   ||xa − x||_q < ϵ.    (4)
Here, for image xi in the batch, the associated text ti can be its natural label text or constructed via the standard prompts for the zero-shot tasks (e.g., “a photo of a [LABEL]”). The indicator yij = 1 when image xi has category j as its ground-truth class, and 0 otherwise. In practice, we find this objective is effective at generating adversarial examples for zero-shot image recognition tasks.
Text-Guided Contrastive Adversarial Training. Once we have the adversarial examples, we optimize the parameters θ of the vision encoder Fθ to minimize the aforementioned objective (Equation 3) on the generated adversarial examples,
θ = argmin_θ Ls(xa, t, y).    (5)
Our algorithm iteratively alternates between generating adversarial examples and updating the model via Equation 5. Since TeCoA uses additional information from the text embeddings to correct the visual features corrupted by the adversarial attacks, it helps the model to retain zero-shot transferability regarding adversarial robustness.
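The following sketch illustrates one such alternating step under these assumptions: the text encoder is frozen, so the class text embeddings are precomputed, and tecoa_loss refers to the loss sketched above; the step sizes and the number of attack steps are illustrative.

import torch

def tecoa_train_step(image_encoder, text_features, optimizer, x, y,
                     eps=1/255, alpha=1/255, attack_steps=2):
    """One TeCoA step: attack the contrastive loss (Eq. 4), then minimize it (Eq. 5)."""
    # 1) generate adversarial examples by maximizing the image-text contrastive loss
    x_a = x.clone().detach()
    for _ in range(attack_steps):
        x_a.requires_grad_(True)
        loss = tecoa_loss(image_encoder(x_a), text_features, y)
        grad = torch.autograd.grad(loss, x_a)[0]
        x_a = x_a.detach() + alpha * grad.sign()
        x_a = torch.clamp(x + torch.clamp(x_a - x, -eps, eps), 0.0, 1.0).detach()
    # 2) update the adapted parameters (prompt or encoder weights) on the adversarial batch
    optimizer.zero_grad()
    tecoa_loss(image_encoder(x_a), text_features, y).backward()
    optimizer.step()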
Contrastive Adversarial Training without Text. To validate whether the gain of zero-shot robustness comes from language supervision or is due to the formulation of the contrastive loss itself, we consider two variants of contrastive losses which do not utilize language in our experiments. One is a contrastive adversarial training loss with one-hot labels (CoAdv.), where the label information is encoded as a one-hot vector and is used to contrast with the image features. Another is based on Jiang et al. (2020), where the model is finetuned on adversarial examples to fool the image-to-image contrastive loss, denoted as ImgCoAdv. More details are presented in Section 4.
4 EXPERIMENTS
We start by describing our experiment setups and present experimental results on 16 datasets in Section 4.1. We observe that CLIP adapted with TeCoA achieves an average improvement of 31 points across the datasets over the original CLIP model. In Section 4.2, we provide extensive analysis over the design choices involved in our approach and identify that the use of language in TeCoA largely improves the model’s zero-shot adversarial robustness.
Datasets. We evaluate the zero-shot adversarial robustness conferred by TeCoA trained on ImageNet (Deng et al., 2009a) and report the performance of the models on the ImageNet test set as well as 15 zero-shot test datasets, covering a diverse range of recognition tasks. Specifically, we include CIFAR10, CIFAR100 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), Caltech101 (Fei-Fei et al., 2004), and Caltech256 (Griffin et al., 2007) for generic classification; OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Food101 (Bossard et al., 2014), Flowers102 (Nilsback & Zisserman, 2008), and FGVCAircraft (Maji et al., 2013) for fine-grained classification; SUN397 (Xiao et al., 2010) for scene recognition; and DTD (Cimpoi et al., 2014) for texture recognition. Finally, we include three datasets with domain-specialized tasks, PatchCamelyon (PCAM, lymph node tumor detection) (Veeling et al., 2018), HatefulMemes (hatespeech detection) (Kiela et al., 2020), and EuroSAT (satellite image classification) (Helber et al., 2017).
Baselines. We consider multiple variants to adapt CLIP (Radford et al., 2021). For adaptation methods, we consider visual prompting (VP) (Bahng et al., 2022), linear probing (LP) and finetuning (FT). Since LP involves training a linear readout layer on the target task, it is not zero-shot, but we still evaluate it for reference. For training losses, we consider
(1) vanilla cross-entropy loss (CE);
(2) standard adversarial training loss (Adv.) with the cross-entropy loss;
(3) contrastive adversarial training loss (CoAdv.);
(4) contrastive adversarial training over images (ImgCoAdv.);
(5) our text-guided contrastive adversarial training (TeCoA).
In our experiments, we name these variants by adaptation(loss). Detailed formulations and associated training algorithms for each loss can be found in Section A.3.
Implementation Details. We use the CLIP-B/32 architecture. We optimize the model using an SGD optimizer with momentum 0.9. We train for 10 epochs. For finetuning, we use a learning rate of 1e-5. For visual prompt tuning, we use a learning rate of 40 and a batch size of 256. For prompt tuning on the entire ImageNet dataset, we use a token-level prompt of size 200, while for subsets of ImageNet (1K, 5K, and 50K images), we use a token-level prompt of size 5 and a smaller batch size of 64. Unless specified otherwise, during adversarial training we generate L∞-bounded attacks with ϵ = 1/255 using a 2-step PGD attack (Madry et al., 2018) with step size α = 1/255. We test the robustness of our models using a 100-step PGD attack with step size α = 1/255.
4.1 EXPERIMENTAL RESULTS
We first use a 100-step PGD attack to evaluate the robustness of CLIP models that are adapted on ImageNet. In Table 1, we show the robust accuracies for the models on the ImageNet test set and 15 zero-shot recognition tasks (i.e., the training classes and test classes are non-overlapping). Each row is the robust accuracy for one method, where we compare the proposed TeCoA with VP and FT against 10 baselines and their variants. We bold the best results for each dataset (column).
From Table 1, we can see that most adapted models with adversarial training losses, except for ImgCoAdv., have better robust accuracies compared to the standard cross-entropy loss. If using adversarial training, the vanilla FT (Adv.), which finetunes the entire model with adversarial training, achieves the best adversarial robustness on ImageNet (non-zero-shot data, the same as the training task), while its average robust accuracy is 10.62%, which is only slightly better than the original CLIP (6.57%). Meanwhile, VP(Adv.) achieves much better results, improving the accuracy from 6.57% to 31.84%.
Figure 5: Zero-shot adversarial robustness under different perturbation bounds (ϵ = 1, 2, 4/255). We vary the perturbation bound for adversarial finetuning with TeCoA. Each adapted model is evaluated under attacks from the same bound seen during training. We show both the robust accuracy (left) and clean accuracy (right). Our defense is still effective on zero-shot tasks when the perturbation gets larger.
(a) Visual Prompt Design
(b) Partially FT vs. VP
Figure 6: (a, left) We conduct an ablation study of whether to add visual prompt to the input image or to append prompt token to the input sequence. (b, right) We optimize the same amount of parameters in partial finetuning and VP, we find VP is more effective when only a small number of parameters are optimized.
This indicates that visual prompting is more powerful than finetuning when coupled with the standard adversarial training.
Within each set of the variants using the same adaptation method, we compare the effectiveness of different training losses. We notice that both CoAdv. and ImgCoAdv. are much worse than the proposed TeCoA and even the standard adversarial training (Adv.). This indicates that the contrastive-learning formulation by itself does not necessarily improve zero-shot adversarial robustness, whereas the use of language supervision might.
Overall, we find that adapting CLIP with TeCoA using model finetuning presents the best performance across the datasets, improving the accuracy from 6.57% to 38.18%, roughly 31 points. This might look counter-intuitive, as VP significantly outperforms FT under standard adversarial training without texts. One hypothesis is that with sufficient semantic information at test time, finetuning, which directly modifies the model parameters, may be more expressive than just modifying the inputs, given that more model parameters are tuned. In Table 3 in the Appendix, we show the clean accuracy of the same models, which shows a similar trend to the robust accuracies. We also show results under ϵ = 1/255 and ϵ = 4/255 using AutoAttack (Croce & Hein, 2020) in Table 4 and Table 5, where our model is still more robust than the baselines, by up to an average of 36 points. We describe details in Section A.2.
4.2 ANALYSIS
Training Set Size is an important factor when adapting large-scale models. In Figure 4, we show the results of adapting CLIP with TeCoA with 1, 5, 50, and 1000 shots per category on ImageNet. We observe that increasing the amount of training data improves robust accuracy. Moreover, we also find that FT(TeCoA) outperforms the non-adapted CLIP even when the model is adapted with just one shot per class (the blue bars in Figure 4).
Effect of Attack Strength. To validate whether our method works when the adversarial perturbations become larger, we increase the perturbation bound for our TeCoA adversarial training. Figure 5 shows that, while increasing the attack strength decreases the robust accuracy, our model can still transfer robustness to zero-shot tasks.
Visual Prompt Designs. There are two ways to design visual prompts. One is to append additional tokens to the image tokens, and the other is to add a small decoration (i.e., learnable noise) to the input images. From Figure 6a, we can see that appending learnable tokens to the input token sequence achieves consistently better performance than just adding the prompt values to the images.
Number of Adapted Parameters. The number of parameters that are adapted during training highly affects model performance. VP is lightweight, as it only modifies the inputs, while FT may adjust either part of the model or the entire model. In Figure 6b, we compare partial finetuning and visual prompting of CLIP. We can see that with the same number of parameters adapted, visual prompting is more effective in adapting the large-scale model for zero-shot adversarial robustness.
Are Labels Required for Adapting CLIP? Unlabeled data is abundant. We investigate whether it is necessary to use the ground-truth labels of the images. We generate the pseudo text labels that CLIP retrieves from clean images (details in Section A.3.5). We then run TeCoA on the pseudo labels in the same way as on labeled data. We show the experimental results in Table 2, where we obtain similar zero-shot robust accuracy even without labels.
Trading off between Robust Accuracy and Clean Accuracy. Similar to typical adversarial training, TeCoA also poses a trade-off between clean accuracy and robust accuracy (Tsipras et al., 2019). Ideally, we want to be able to dynamically adjust this trade-off depending on the desired level of adversarial robustness. Here we can balance this trade-off by using model weight interpolation (Wortsman et al., 2022). In Figure 7, we can see that there is a sweet spot in interpolating the model where we improve both the robustness and the clean accuracy, marked by the star in the figure.
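A possible way to implement this interpolation is to mix the state dictionaries of the robustly adapted model and the original CLIP model, as sketched below; the variable names in the usage example are hypothetical.

def interpolate_state_dicts(robust_state, clean_state, alpha=0.5):
    """Mix two compatible state dicts: alpha=1 keeps the robust weights, alpha=0 the clean ones."""
    mixed = {}
    for k, v in robust_state.items():
        if v.is_floating_point():
            mixed[k] = alpha * v + (1 - alpha) * clean_state[k]
        else:
            mixed[k] = v   # copy any non-float buffers unchanged
    return mixed

# hypothetical usage:
# mixed = interpolate_state_dicts(tecoa_clip.state_dict(), original_clip.state_dict(), alpha=0.7)
# tecoa_clip.load_state_dict(mixed)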
5 CONCLUSION
In this paper, we provided a holistic study of the zero-shot adversarial robustness problem of large-scale vision-language models. We identified the effects of various adaptation methods and training losses when adapting models, and conjectured that existing methods fail to generalize to new tasks due to the lack of language supervision. We proposed a text-guided contrastive adversarial training (TeCoA) loss which can be used with model finetuning and visual prompting to drastically improve the zero-shot adversarial robustness of CLIP. Extensive experimental evaluation showed the effectiveness of TeCoA, and the detailed analyses provide useful lessons for adapting large-scale models to improve their zero-shot adversarial robustness, shedding light on this important problem.
6 ACKNOWLEDGEMENT
This research is based on work partially supported by the DARPA SAIL-ON program, the DARPA MCS program, the NSF NRI Award #2132519, a GE/DARPA grant, a CAIT grant, and gifts from JP Morgan, DiDi, and Accenture.
He Huang, Changhu Wang, Philip S. Yu, and Chang-Dong Wang. Generative dual adversarial network for generalized zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019. Computer Vision Foundation / IEEE, 2019. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Huang_Generative_Dual_Adversarial_Network_for_Generalized_Zero-Shot_Learning_CVPR_2019_paper.html.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916. PMLR, 2021.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. arXiv preprint arXiv:2203.12119, 2022.
Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. Advances in Neural Information Processing Systems, 33:16199–16210, 2020.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33:2611–2624, 2020.
Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. Advances in Neural Information Processing Systems, 33:2983–2994, 2020.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2017.
Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 951–958. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206594. URL https://doi.org/10.1109/CVPR.2009.5206594.
Matthew Lawhon, Chengzhi Mao, and Junfeng Yang. Using multiple self-supervised tasks improves model robustness. arXiv preprint arXiv:2204.03714, 2022.
Bo Liu, Qiulei Dong, and Zhanyi Hu. Zero-shot learning from adversarial feature residual to compact visual feature. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 11547–11554. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6821.
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, and Carl Vondrick. Shape analysis by shadow synthesis.
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, and Carl Vondrick. Landscape learning for neural network inversion. arXiv e-prints, pp. arXiv–2206, 2022.
Yang Liu, Jishun Guo, Deng Cai, and Xiaofei He. Attribute attention for semantic disambiguation in zero-shot learning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 6697–6706. IEEE, 2019. doi: 10.1109/ICCV.2019.00680. URL https://doi.org/10.1109/ICCV.2019.00680.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. Multitask learning strengthens adversarial robustness. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 158– 174, Cham, 2020. Springer International Publishing.
Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, and Carl Vondrick. Adversarial attacks are reversible with natural supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 661–671, 2021.
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, and Carl Vondrick. Robust perception through equivariance, 2022. URL https://arxiv.org/abs/2212. 06079.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks, 2016.
Jian Ni, Shanghang Zhang, and Haiyong Xie. Dual adversarial semantics-consistent network for generalized zero-shot learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 6143–6154, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ c46482dd5d39742f0bfd417b492d0e8e-Abstract.html.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. IEEE, 2008.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. Zero-shot learning with semantic output codes. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL https://proceedings.neurips.cc/paper/2009/file/ 1543843a4723ed2ab08e18053ae6dc5b-Paper.pdf.
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training, 2020.
Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. arXiv:1511.07528, 2015.
Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498–3505. IEEE, 2012.
Anibal Pedraza, Oscar Deniz, and Gloria Bueno. On the relationship between generalization and robustness to adversarial examples. Symmetry, 13(5):817, 2021.
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, and Quoc V Le. Combined scaling for zero-shot transfer learning. arXiv preprint arXiv:2111.10050, 2021.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning, 2020.
Bernardino Romera-Paredes and Philip H. S. Torr. An embarrassingly simple approach to zeroshot learning. In Francis R. Bach and David M. Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 2152–2161. JMLR.org, 2015. URL http://proceedings.mlr.press/v37/romera-paredes15.html.
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? Advances in Neural Information Processing Systems, 33:3533–3545, 2020.
Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, and Andrew Jackson. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12155–12164, 2022.
Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8247–8255, 2019.
Manli Shu, Weili Nie, De-An Huang, Zhiding Yu, Tom Goldstein, Anima Anandkumar, and Chaowei Xiao. Test-time prompt tuning for zero-shot generalization in vision-language models, 2022. URL https://arxiv.org/abs/2209.07511.
David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy, 2019.
Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? CoRR, 2019.
Pratik Vaishnavi, Kevin Eykholt, and Amir Rahmati. Transferring adversarial robustness through robust representation matching. arXiv preprint arXiv:2202.09994, 2022.
Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In International Conference on Medical image computing and computer-assisted intervention, pp. 210–218. Springer, 2018.
Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. A meta-learning framework for generalized zero-shot learning. CoRR, abs/1909.04344, 2019. URL http://arxiv.org/abs/1909. 04344.
Hao Wang, Chengzhi Mao, Hao He, Mingmin Zhao, Tommi S Jaakkola, and Dina Katabi. Bidirectional inference networks: A class of deep bayesian networks for health profiling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 766–773, 2019.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7959–7971, 2022.
Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 5542– 5551. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00581. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Xian_Feature_Generating_Networks_CVPR_2018_paper.html.
Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010.
Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, and Ling Shao. Attentive region embedding network for zero-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 9384–9393. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00961. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Xie_ Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_ 2019_paper.html.
Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, and Zhongfei (Mark) Zhang. Stacked semantics-guided attention model for fine-grained zero-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 5998–6007, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/ 9087b0efc7c7acd1ef7e153678809c77-Abstract.html.
Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, and Pinar Duygulu. A deep dive into adversarial robustness in zero-shot learning. In European Conference on Computer Vision, pp. 3–21. Springer, 2020.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv abs/1901.08573, 2019.
Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Prompt consistency for zero-shot task generalization. arXiv preprint arXiv:2205.00049, 2022a.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16816–16825, 2022b.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for visionlanguage models. International Journal of Computer Vision, 130(9):2337–2348, 2022c.
A APPENDIX
A.1 EXPERIMENTS
A.1.1 ZERO-SHOT CLEAN ACCURACY OF OUR ADAPTED MODEL
We show the results for accuracy on clean images in Table 3.
A.2 AUTOATTACK EXPERIMENT
We also consider a stronger attack, AutoAttack (Croce & Hein, 2020), in our evaluation. Since our method uses adversarial training and does not rely on obfuscated gradients, we use the two APGD variants, APGD-CE and APGD-DLR, in AutoAttack for evaluation. We show robust accuracy under perturbation bound ϵ = 1/255 in Table 4 and under perturbation bound ϵ = 4/255 in Table 5. For both perturbation bounds, our method achieves higher robust accuracy than vanilla CLIP, by up to 36 points on average.
Notably, at ϵ = 1/255, even under the stronger AutoAttack, the robust accuracy of FT (TeCoA) is 37.02, which is still higher than that of all other baseline methods in Table 1, even though those baselines are evaluated under the weaker PGD100 attack. A larger perturbation bound ϵ = 4/255 makes the attack stronger, yet our method still improves robustness by an average of 9 points. In addition, while AutoAttack significantly reduces the robust accuracy of CLIP from 6.57 to 0.53, it only slightly decreases TeCoA’s robust accuracy: by 2.1 points for visual prompt tuning and 1.16 points for the finetuned model (see Table 1 and Table 4).
One reason AutoAttack is so much more effective than PGD100 at attacking vanilla CLIP is that it uses fractional attack steps, which are not rounded to multiples of 1/255 at inference time. Images are often encoded as integers from 0 to 255, which only permits attacks at the integer level. In the main paper, we use PGD attacks with step size 1 (proportionally 1/255 if the image ranges from 0 to 1) for 100 steps. Since there are no fractional attack values, the attack space is constrained and the attack is less effective: standard image inputs have a value resolution of one integer level, so perturbation components smaller than half a level are rounded away when the images are encoded. If we ignore the fact that images are encoded as integers between 0 and 255, stronger attacks can be obtained by exploring fractional values. Because AutoAttack automatically reduces its attack step size when the loss oscillates, it explores this fractional space and is therefore more effective.
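The effect of integer encoding can be illustrated with a small snippet: rounding attacked pixels back to the 1/255 grid removes any perturbation component smaller than half an integer level, which is exactly the fractional space that AutoAttack exploits.

import torch

def quantize_to_uint8_grid(x):
    """Round pixel values in [0, 1] to the 1/255 grid, as happens when an image is
    stored as 8-bit integers; perturbation components below half an integer level
    are rounded away."""
    return torch.round(x * 255.0) / 255.0

# e.g. a fractional perturbation of 0.4/255 on a pixel disappears after quantization,
# while a full one-level (1/255) perturbation survives.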
A.3 TRAINING LOSSES AND ALGORITHMS
We give formulations for the different training algorithms considered in our experiments. Throughout this section, let Fθ denote an image encoder parameterized by θ, and let T denote a frozen text encoder. Let D denote a dataset containing pairs (x, y) of images and their respective one-hot labels.
A.3.1 STANDARD ADVERSARIAL TRAINING WITH CROSS-ENTROPY LOSS (ADV.)
This is the standard adversarial training paradigm. We initialize a learnable linear layer Cϕ and append it to Fθ. The classification loss L(Cϕ(Fθ(xa)), y) is the cross-entropy loss. We first train Cϕ on standard images. Then, given a natural image x with one-hot label y, we generate an attacked image xa by maximizing the loss L(Cϕ(Fθ(xa)), y). We then update θ to minimize the loss L(Cϕ(Fθ(xa)), y). We describe our algorithm in Algorithm 1.
Algorithm 1 Standard Adversarial Training (Adv.)
Input: Dataset D, learnable parameter θ, model F, parameters of the projector Cϕ
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      xa = argmax_{xa} L(Cϕ(Fθ(xa)), y)    ▷ Generating adversarial attacks
      θ = θ − ∇θ L(Cϕ(Fθ(xa)), y)          ▷ Training on generated adversarial examples
    end for
  end for
A.3.2 CONTRASTIVE ADVERSARIAL TRAINING LOSS (COADV.)
We study how much the contrastive learning objective contributes to the zero-shot robustness gain. Instead of using one-hot label y and cross-entropy loss in our objective, we create a dictionary of embeddings E by random initialization, where each embedding ei denotes the code representation for the category yi. We optimize the following contrastive learning loss:
Ls(x, E, y) = −E_{i,j} [ y_{ij} log ( exp(cos(z_i^{(I)}, e_j)/τ) / Σ_k exp(cos(z_i^{(I)}, e_k)/τ) ) ],    (6)
where z_i^{(I)} = Fθ(x_i) are the features of the input image, and e_j are the code representations from the dictionary. We use y_{ij} to indicate which image-code pairs are positive and which are negative; this indicator satisfies y_{ij} = 1 if and only if i = j, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function. We describe our algorithm in Algorithm 2.
Algorithm 2 Contrastive Adversarial Training Loss (CoAdv.)
Input: Dataset D, learnable parameter θ, model F
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      xa = argmax_{xa} Ls(xa, E, y)    ▷ Generating adversarial attacks for contrastive loss
      θ = θ − ∇θ Ls(xa, E, y)          ▷ Contrastive learning on generated adversarial examples
    end for
  end for
A.3.3 CONTRASTIVE ADVERSARIAL TRAINING OVER IMAGES (IMGCOADV.)
Prior work (Jiang et al., 2020) uses image-only contrastive adversarial learning to obtain robustness. We adapt this method as a baseline to study whether using knowledge from images only — not language — can achieve zero-shot robustness. For each image xi, we create a transformed view xj and form the image pair (xi, xj).
We use the same visual encoder to embed the images xi and xj to obtain the features zi and zj . We then construct the following contrastive learning loss:
Ls(x_i, x_j, y) = −E_{i,j} [ y_{ij} log ( exp(cos(z_i, z_j)/τ) / Σ_k exp(cos(z_i, z_k)/τ) ) ],    (7)
where the zi = Fθ(xi) are the features of the input image. We use yij to indicate which image pairs are positive and which are negative; this indicator satisfies yij = 1 if and only if the images xi and xj are augmented from the same instance, and 0 otherwise. τ is a scalar hyper-parameter, and cos denotes the cosine similarity function.
Let z_i^a = Fθ(x_i^a), where x_i^a denotes the generated adversarial example. Then we can obtain the adversarial examples via:

x_i^a = argmax_{x_i^a} Ls(x_i^a, x_j, y) = argmax_{x_i^a} −E_{i,j} [ y_{ij} log ( exp(cos(z_i^a, z_j)/τ) / Σ_k exp(cos(z_i^a, z_k)/τ) ) ].    (8)
Once we generate the adversarial images, we conduct contrastive learning on adversarial images and the paired clean images using Equation 7.
We introduce our algorithm in Algorithm 3.
A.3.4 TEXT-GUIDED CONTRASTIVE ADVERSARIAL TRAINING (TECOA)
We describe the TeCoA training algorithm in Algorithm 4. We denote the learnable parameters to be θ. For the visual prompt tuning, θ is only the prompt vector. For the finetuning method, θ is the parameter of the whole model.
Algorithm 3 Contrastive Adversarial Training over Images (ImgCoAdv.)
Input: Dataset D, learnable parameter θ, model F
  for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch do
      x_i^a = argmax_{x_i^a} Ls(x_i^a, x_j, y)    ▷ Generating adversarial attacks for contrastive loss
      θ = θ − ∇θ Ls(x_i^a, x_j, y)                ▷ Contrastive learning on generated adversarial examples
    end for
  end for
Algorithm 4 TeCoA Training
Input: Dataset D, learnable parameter θ, model F, text t
  for all iter ∈ preset number of training epochs do
    for all x, y ∈ minibatch do
      xa = argmax_{xa} Ls(xa, t, y)    ▷ Generating adversarial attacks for contrastive loss
      θ = θ − ∇θ Ls(xa, t, y)          ▷ Contrastive learning on generated adversarial examples
    end for
  end for
A.3.5 TECOA LEARNING ON UNLABELED DATA
Given an unlabeled image, we first provide a list of text using the possible category names:
A photo of a {Category Name}.
Since the unlabeled images are not attacked, CLIP can retrieve the nearest text embedding from the image embedding and use the text as the pseudo label for the image. We then conduct the TeCoA training on the images and their pseudo text label. We describe the algorithm below in Algorithm 5.
Algorithm 5 TeCoA Training on Unlabeled Data
Input: Dataset D without labels, learnable parameter θ, model F, text t
  for all iter ∈ preset number of training epochs do
    for all x ∈ minibatch B = {x1, . . . , xm} do
      y = argmin_y Ls(x, t, y)         ▷ Finding pseudo labels for the clean images using CLIP
      xa = argmax_{xa} Ls(xa, t, y)    ▷ Generating adversarial attacks for contrastive loss
      θ = θ − ∇θ Ls(xa, t, y)          ▷ Contrastive learning on generated adversarial examples
    end for
  end for
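A minimal sketch of the pseudo-labeling step is given below, assuming the class text embeddings (built from the prompts above) are precomputed with the frozen text encoder; retrieval is performed on clean images only.

import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_pseudo_labels(image_encoder, text_features, x):
    """Pseudo label = index of the class text embedding nearest (in cosine similarity)
    to the clean image embedding, as in Algorithm 5."""
    img = F.normalize(image_encoder(x), dim=-1)
    txt = F.normalize(text_features, dim=-1)
    return (img @ txt.t()).argmax(dim=-1)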
A.4 DISCUSSION FOR RESULTS
In Table 1, our TeCoA performs better than existing methods except for LP(CE) and VP(Adv.) on the HatefulMemes and PCAM datasets. This is because HatefulMemes is a binary classification task for detecting hateful speech, which is a very different domain from ImageNet classification. Since both LP(CE) and VP(Adv.) adapt only a small number of parameters on ImageNet, the resulting models may overfit less to the image recognition task and essentially perform random guessing; note that the 54% accuracy is close to the 50% of random guessing. In addition, PCAM is a binary classification task for lymph nodes (medical images, https://github.com/basveeling/pcam), which is also a very different domain from ImageNet. Similar to HatefulMemes, adapting fewer parameters makes the model learn less and overfit less, and the 52.5% accuracy is close to the 50% of random guessing. Thus, both datasets remain a big challenge for all existing zero-shot robust classification methods.
A.5 DISCUSSION FOR TECOA LOSS
In the main paper, we interpret our loss through the image-text contrastive objective, which first computes a matrix multiplication between the image embeddings and the language embeddings, and then applies a cross-entropy loss to the output. Since the text embeddings are fixed, they can be treated as a linear classifier layer whose weights are given by the language embeddings. This image-text contrastive objective can thus also be interpreted as a cross-entropy loss on a fixed readout layer that is initialized with the right language knowledge. This further validates the importance of language information for zero-shot adversarial robustness.
A.6 ADAPTATION METHOD FORMULATION
Token-Level Visual Prompts. Token-level visual prompts adapt transformer-based models by appending tokens to the input token sequence. This is the most effective prompting method in our experiments, and we use it by default unless otherwise specified. Our visual prompts append additional tokens P_k to the input sequence x of the vision transformer:
x = [x;P0, P1, ..., Pk] (9)
The remaining transformer parameters and computations are kept the same as the original.
Image-Level Visual Prompts. Image-level visual prompts adapt transformer models by adding the prompt to the input pixels. This is a less effective way of adding prompts, as discussed in Figure 6a. Let the prompt be P and the input image be x; the visual prompt is added to the input image:

x = x + P    (10)

The remaining transformer parameters and computations are kept the same as in the original.
Finetuning. This is the standard way of adapting a model, where all parameters of the model are updated with a relatively small learning rate. In our experiments, we find that a learning rate of 1e-5 achieves the best performance. | 1. What is the focus of the paper regarding zero-shot visual recognition?
2. What are the strengths of the proposed approach, particularly its performance and ablation studies?
3. Are there any weaknesses or minor issues in the paper? If so, what are they?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
CLIP has shown remarkable performance on zero-shot visual recognition. However, adversarial examples still greatly affect CLIP's performance. This work proposes a text-guided contrastive adversarial training loss to adapt CLIP so that it attains adversarial robustness on datasets that are not seen during adversarial training. They show that naive adversarial training of CLIP on ImageNet achieves the best performance on ImageNet but fails on other classification datasets. Their approach, on the other hand, performs much better on zero-shot tasks, despite being slightly worse on ImageNet than the naive adversarial training. They evaluate their zero-shot performance on 15 image datasets, and perform comprehensive ablation studies of their method.
Strengths And Weaknesses
Strength
Zero-shot adversarial robustness is an important problem, given the fact that large-scale models are becoming some sort of infrastructure in practice.
Their approach empirically performs well compared to other baselines.
They even show that their approach doesn't require ground truth labels, and using pseudo-labels (via CLIP image-to-text retrieval) is enough to attain similar performance.
They have done comprehensive ablation study including:
The effect of text supervision in contrastive adversarial training.
The effect of visual prompt design (e.g. appending an additional token is better than adding a learnable noise to the raw input image.)
These ablation studies provide useful insights for future research in zero-shot adversarial robustness.
Weaknesses
No major weaknesses. See Clarify and Minor for small issues.
Clarity, Quality, Novelty And Reproducibility
Clarity
For contributions: "our analysis provides useful lessons to understand the problem of zero-shot adversarial robustness."
Their contributions would be easier to understand if the authors could list specific lessons instead (or highlight one or two most important lessons).
I assume y_ij = -1 when i is not equal to j, but this is not explicitly mentioned.
Novelty
The problem setting (e.g. zero-shot adversarial robustness) and their approach is novel.
Reproducibility
They have enough details (e.g. Implementation Details section) to reproduce their results.
Minor
"finetuning has higher gains than VPT as more parameters are tuned." -> "Finetuning has..."
Figure 7: "By change(-ing) the interpolation ratio for the adapted CLIP and the vanilla CLIP," |
ICLR | Title
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks
Abstract
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen tasks, based on small amounts of data. One of the most popular and elegant Few-Shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn the general weights of the meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the model’s main limitation lies in the fact that the update procedure is realized by gradient-based optimization. In consequence, MAML cannot always modify the weights to the required extent in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure, which is hard to train in practice and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML, where the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use a trainable Hypernetwork for this purpose. Consequently, in this framework, the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML outperforms MAML in most cases and performs comparably to other state-of-the-art techniques in a number of standard Few-Shot learning benchmarks.
1 INTRODUCTION
In the typical Few-Shot learning setting, the aim is to adapt to new tasks under the assumption that only a few examples are given. As we know, people typically learn new tasks easily by using only a few training examples. In contrast, a standard deep neural network must be trained on an extensive amount of data to obtain a similar accuracy. Thus, the aim of Few-Shot learning models is to bring neural networks closer to the human brain’s capabilities. The most famous and, in our opinion, the most elegant approach to Few-Shot learning is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), where the model is trained to adapt universal weights to new Few-Shot learning tasks quickly. The brain’s neural networks may adapt to new tasks in a similar way: during the process of evolution, some of its parts have developed universal weights which are easily adaptable to the typical tasks we encounter in real life. Thus, the idea behind MAML gives us a possible insight into the workings of the brain.
The fascinating factor of human intelligence is that the human learning process, although still not understood, is clearly not based on the gradient descent algorithm, as we cannot in general backpropagate the information (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Thus, from the biological point of view, the main limitation of MAML is the fact that it uses the gradient descent method for weight updates. The main research problem that we set for ourselves is whether one can modify MAML to be more biologically feasible, i.e. keep its ability to find universal weight but remove the necessity of using gradient-based update methods.
We solve this problem by constructing HyperMAML, a model which replaces the gradient optimization in the update of weights by trainable update procedure with the use of the Hypernetwork paradigm. Hypernetworks, introduced in (Ha et al., 2016) are defined as neural models that generate weights for a separate target network solving a specific task. In our model, HyperMAML, the Hypernetwork aggregates the information from the support set and produces an update to the main model. Thanks to such an approach, we can create various types of updates that are not limited to a
few gradient steps. Moreover, hypernetworks have previously been used as models for biological information processing, which suggests that such models are more biologically plausible than gradient-based techniques such as MAML (Segovia-Juarez & Conrad, 1999). In practice, MAML works when there exist universal weights that are close enough to the optimal solution for each task. To visualize such a situation, we present a simple 2D example, where a single gradient update fails to sufficiently adapt the model to a given task – see Fig. 3. We cannot effectively switch the weights in one gradient step. On the other hand, when we use many gradient steps, we obtain a complex optimization procedure that uses an inner and an outer loop. Such a procedure can be seen as second-order optimization, which is complex to train (Finn et al., 2017). Contrary to MAML, we do not need an inner loop in the optimization procedure, and consequently we avoid second-order optimization. We also reduce the number of hyperparameters used in the inner loop of the MAML approach, which would otherwise need to be tuned in a grid search. As a result, our algorithm obtains better results than the classical MAML algorithm and produces results comparable to other state-of-the-art algorithms.
The contributions of our work can be summarized as follows:
• We introduce HyperMAML, a novel approach to the Few-Shot learning problem by aggregating information from the support set and directly producing weights updates.
• In HyperMAML, we do not use loss calculation or gradient backpropagation for the update to the new task, thus making the model more biologically feasible and computationally efficient.
• We significantly increase the update ability compared to the classical MAML algorithm, as evidenced by the increased accuracy in numerous benchmarks we perform.
2 RELATED WORK
The problem of Meta-Learning and Few-Shot learning (Hospedales et al., 2020; Schmidhuber, 1992; Bengio et al., 1992) has received a growing amount of attention from the scientific community over the recent years, with the abundance of methods emerging as a result. The idea of HyperMAML is influenced by two kinds of such methods:
Model-based methods aim to adapt to novel tasks quickly by utilizing mechanisms such as memory (Ravi & Larochelle, 2017; Santoro et al., 2016; Mishra et al., 2018; Zhen et al., 2020), Gaussian Processes (Rasmussen, 2003; Patacchiola et al., 2020; Wang et al., 2021; Sendera et al., 2021), or generating fast weights based on the support set with set-to-set architectures (Qiao et al., 2017; Bauer et al., 2017; Ye et al., 2021; Zhmoginov et al., 2022). Other approaches combine weight generators with gradient-based optimizers by choosing target weights from a set of templates (Zhao et al., 2020) or optimizing low-dimensional embeddings which condition the target weight generator (Rusu et al., 2019). The fast weights approaches can be interpreted as using Hypernetworks (Ha et al., 2016) – models which learn to generate the parameters of neural networks performing the designated tasks.
Similarly, HyperMAML utilizes a Hypernetwork to generate weights updates for performing specific tasks. The key difference is that in HyperMAML, the Hypernetwork is not the sole source of model weights. Instead, following (Finn et al., 2017), HyperMAML maintains a set of universal weights and uses the hypernetwork to generate the updates to those weights for novel tasks.
Optimization-based methods, such as MetaOptNet (Lee et al., 2019), are based on the idea of an optimization process over the support set within the Meta-Learning framework. Arguably, the most popular of this family of methods is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which inspired a multitude of research and numerous extensions to the original algorithm. This includes various techniques for stabilizing its training and improving performance, such as Multi-Step Loss Optimization and scheduling the learning rate of the meta-optimizer (Antoniou et al., 2018), using the Bayesian variant of MAML (Yoon et al., 2018), or making MAML permutation-invariant (Ye & Chao, 2021).
Due to the need to calculate second-order derivatives when computing the gradient of the meta-training loss, training the classical MAML introduces a significant computational overhead. The authors show that in practice the second-order derivatives can be omitted at the cost of a small gradient estimation error and minimally reduced accuracy of the model (Finn et al., 2017; Nichol et al., 2018). Methods such as iMAML and Sign-MAML propose to solve this issue with implicit gradients or Sign-SGD optimization (Rajeswaran et al., 2019; Fan et al., 2021). The optimization process can also be improved by training not only the base initialization of the model but also the optimizer itself – namely, training a neural network that transforms gradients calculated w.r.t. the loss on the support set predictions into weight updates (Munkhdalai & Yu, 2017; Munkhdalai et al., 2018; Li et al., 2017; Rajasegaran et al., 2020).
HyperMAML shares a key characteristic with the optimization-based methods – namely, it also utilizes a base set of weights, which are updated to obtain a model fit for a given task. The key difference between HyperMAML and MAML is that while MAML adapts to novel tasks through multiple steps of gradient-based optimization, HyperMAML generates the updates in a single step using a Hypernetwork. This makes HyperMAML more similar to methods like (Li et al., 2017; Munkhdalai & Yu, 2017), which generate weight updates through trained meta-optimizers. However, contrary to those approaches, in HyperMAML the Hypernetwork predicts the weight updates based on (i) the latent representation of the support set, (ii) the predictions of the base model for the support set, and (iii) the ground-truth labels of the support examples (see Fig. 2). Thus HyperMAML does not require calculating either the loss function or its gradients when generating the task-specific weight updates, which makes it more computationally efficient.
3 HYPERMAML: HYPERNETWORK FOR FEW-SHOT LEARNING
In this section, we present our HyperMAML model for Few-Shot learning. First, we start by presenting background and notations for Few-Shot learning. Then we describe how the MAML algorithm works. Finally, we present HyperMAML, which can be understood as an extension of the classical MAML.
Algorithm 1 MAML - Model-Agnostic Meta-Learning (Finn et al., 2017)
Require: D = {Tn}Nn=1: set of training tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks B from D
4:   for each task Ti = {Si, Qi} from batch B do
5:     Evaluate ∇θLSi(fθ) with respect to examples from Si, with the loss given by equation 2
6:     Compute adapted parameters θ′i with gradient descent using equation 1
7:   end for
8:   Update the global parameters θ of the model using equation 4
9: end while
3.1 BACKGROUND
The terminology describing the Few-Shot learning setup is inconsistent due to colliding definitions used in the literature. Here, we use the nomenclature derived from the Meta-Learning literature, which is the most prevalent at the time of writing. Let S = {(xl, yl)}Ll=1 be a support set containing L input-output pairs, distributed equally among the classes. In the one-shot scenario, each class is represented by a single example and L = K, where K is the number of classes considered in the given task. In Few-Shot scenarios, each class usually has from 2 to 5 representatives in the support set S. Let Q = {(xm, ym)}Mm=1 be a query set (sometimes referred to in the literature as a target set), with M examples, where M is typically one order of magnitude greater than K. For clarity of notation, the support and query sets are grouped into a task T = {S, Q}. During the training stage, models for Few-Shot applications are fed randomly selected tasks from the training set D = {Tn}Nn=1, defined as a collection of such tasks.
During the inference stage, we consider task T∗ = {S∗,X∗}, where S∗ is a support set with the known class values for a given task, and X∗ is a set of query (unlabeled) inputs. The goal is to predict the class labels for query inputs x ∈ X∗, assuming support set S∗ and using the model trained on D.
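To make the episode structure concrete, the sampling of a single N-way K-shot task can be sketched in Python as follows; the images_by_class dictionary and the 5-way/1-shot/16-query sizes are illustrative assumptions rather than the exact data pipeline used in the paper.

import random

def sample_task(images_by_class, n_way=5, k_shot=1, m_query=16):
    """Sample one N-way K-shot task T = {S, Q}.

    images_by_class is an assumed mapping from class name to a list of images;
    it is not an API from the paper's codebase.
    """
    classes = random.sample(list(images_by_class.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(images_by_class[cls], k_shot + m_query)
        # the first k_shot examples form the support set S, the rest the query set Q
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query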
Model-Agnostic Meta-Learning (MAML) is one of the current standard algorithms for Few-Shot learning, which learns the parameters of a model so that it can adapt to a new task in a few gradient steps.
We consider a model represented by a function fθ with parameters θ. In the Few-Shot problems fθ models discriminative probabilities for the classes, fθ(x) = p(y|x, θ). The standard MAML model is trained with the procedure given by Algorithm 1. In each of the training iterations the batch of tasks B is sampled from D. Further, for each task Ti = {Si,Qi} from B, MAML adapts the model’s parameters θ′i that are specific for a given task. The actual values of θ ′ i are calculated using one or more gradient descent updates. In the simplest case of one gradient iteration, the parameters are updated as follows:
θ′i = θ − α∇θLSi(fθ), (1)
where α is the step size that may be fixed as a hyperparameter or meta-learned, and the loss function LZ for a set of observations Z in the few-shot scenario is the standard cross-entropy:

LZ(fθ) = ∑_{(xi,l, yi,l)∈Z} ∑_{k=1}^{K} −y^k_{i,l} log fθ,k(xi,l), (2)
where fθ,k(xi,l) denotes the k-th output of the model fθ for a given input xi,l, and yi,l is the corresponding class label in one-hot encoding. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. After calculating the task-specific updates θ′i, the general model parameters are trained by optimizing for the performance
of fθ′i with respect to θ across tasks from batch B. More concretely, the meta-objective used to train the general parameters of the models is as follows:
LMAML(fθ) = ∑_{Ti∈B} LQi(fθ′i) = ∑_{Ti∈B} LQi(fθ−α∇θLSi(fθ)), (3)
Note that the meta-optimization is performed over the model parameters θ, whereas the objective is computed using the updated model parameters θ′. In effect, MAML aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.
The meta-optimization across tasks is performed via stochastic gradient descent (SGD) such that the model parameters θ are updated as follows:
θ ← θ − β∇θLMAML(fθ), (4)
where β is the meta step size.
During the inference stage, in order to perform predictions for a newly observed task T∗ = {S∗, X∗}, the loss function LS∗(fθ) is calculated first using equation 2. Next, the parameters θ′∗ for task T∗ are calculated from equation 1. The final predictions for the query examples X∗ are performed by the model fθ′∗: for a selected query example xq ∈ X∗ we have p(y|xq, θ′∗) = fθ′∗(xq).
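For illustration, a minimal PyTorch sketch of one MAML meta-update (Algorithm 1 with a single inner step) is given below. It assumes PyTorch >= 2.0 for torch.func.functional_call, a model without buffers, and tasks given as (xs, ys, xq, yq) tensors; none of these choices is prescribed by the paper.

import torch
from torch.func import functional_call

def maml_meta_step(model, tasks, loss_fn, alpha=0.01, beta=0.001):
    # universal weights theta, used functionally so the adapted copies stay differentiable
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for xs, ys, xq, yq in tasks:
        # inner loop: one gradient step on the support set (eq. 1), keeping the graph
        support_loss = loss_fn(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(support_loss, tuple(params.values()), create_graph=True)
        adapted = {name: p - alpha * g for (name, p), g in zip(params.items(), grads)}
        # outer objective: query loss under the adapted parameters (eq. 3)
        meta_loss = meta_loss + loss_fn(functional_call(model, adapted, (xq,)), yq)
    # meta-update of theta (eq. 4)
    meta_grads = torch.autograd.grad(meta_loss, tuple(params.values()))
    with torch.no_grad():
        for p, g in zip(params.values(), meta_grads):
            p -= beta * g
    return float(meta_loss)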
The main limitation of the approach is that it produces general weights for all possible tasks, and the adjustment is performed via a gradient-based procedure on the support set. For some non-trivial, challenging tasks, the dedicated parameters θ′∗ may be located far from the base weights θ. Consequently, the adaptation procedure would require significantly more gradient steps, the training may be unstable, and the model will tend to overfit to the support set. To overcome this limitation, we propose to replace the gradient-based adaptation with the Hypernetwork approach, where the update step is returned by an additional deep model that extracts the information from the support set (see the toy example in Fig. 3).
3.2 HYPERMAML - OVERVIEW
We introduce our HyperMAML – a model that utilizes Hypernetworks for modeling weights updates in the classical MAML algorithm. The main idea of the proposed updating scenario is to use information extracted from support examples and predictions given by universal weights to find optimal updates dedicated to a given task. Thanks to this approach, we can switch the classifier’s parameters between completely different tasks based on the support set and existing prediction of the universal weights.
The architecture of HyperMAML is provided in Fig. 2. In this section we present the model for the one-shot scenario, and further discuss how to extend it to Few-Shot problems. We aim at predicting the class distribution p(y|xq, S), given a single query example xq and the set of support examples S. Following the idea of MAML, we consider a parameterized function fθ that models the discriminative distribution over the classes. In addition, our architecture contains a trainable encoding network E(·) that transforms the data into a low-dimensional representation. We calculate p(y|xq, θ′) = fθ′(eq), where eq is the query example xq transformed by the encoder E(·), and θ′ represents the updated parameters for the considered task, θ′ = θ + ∆θ. In contrast to the gradient-based adaptation step described by equation 1, we propose to predict ∆θ with a hypernetwork, directly from the support set.
Each of the inputs from the support set XS is transformed by the Encoder E(·) in order to obtain a low-dimensional matrix of embeddings ES = [eS,1, . . . , eS,K]T. Next, the corresponding class labels of the support examples, YS = [yS,1, . . . , yS,K]T, are concatenated to the corresponding embeddings stored in the rows of the matrix ES. In addition, we calculate the predictions for the support examples using the general model, fθ(ES) = ŶS, and concatenate them to ES as well. The matrix of transformed support inputs ES, together with the true support labels YS and the corresponding predictions ŶS returned by the general model, is delivered as input to the hypernetwork H(·), which returns the update ∆θ. The hypernetwork consists of fully-connected layers with ReLU activations – see Section E.1 in the Appendix for details. The parameters of the final target model are calculated with the following formula:
θ′ = θ +∆θ = θ +H(ES , ŶS ,YS). (5)
In practice, the Hypernetwork observes the support examples with their true labels and decides how the global parameters θ should be adjusted to the considered task. In addition, the predictions of the global model fθ are also delivered to the Hypernetwork in order to identify misclassifications and correct them during the update step.
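A minimal sketch of this update step (eq. 5) for a linear target classifier is given below; the layer sizes, the flattening of θ, and the single-hidden-layer hypernetwork are assumptions made for brevity, not the exact architecture from Appendix E.1.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperMAMLHead(nn.Module):
    def __init__(self, emb_dim=64, n_way=5, hidden=256):
        super().__init__()
        self.classifier = nn.Linear(emb_dim, n_way)        # universal weights theta
        n_theta = sum(p.numel() for p in self.classifier.parameters())
        # hypernetwork H(.) mapping the enhanced support embeddings to Delta theta
        self.hypernet = nn.Sequential(
            nn.Linear(n_way * (emb_dim + 2 * n_way), hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_theta),
        )

    def adapt(self, support_emb, support_onehot):
        # support_emb: (n_way, emb_dim) class-wise embeddings, support_onehot: (n_way, n_way)
        with torch.no_grad():
            preds = self.classifier(support_emb)           # predictions of the universal weights
        enhanced = torch.cat([support_emb, preds, support_onehot], dim=1)
        delta = self.hypernet(enhanced.flatten())          # Delta theta
        theta = torch.cat([p.flatten() for p in self.classifier.parameters()])
        return theta + delta                               # theta' = theta + Delta theta (eq. 5)

    def forward(self, query_emb, theta_prime):
        # classify query embeddings with the task-specific weights theta'
        n_way, emb_dim = self.classifier.out_features, self.classifier.in_features
        w = theta_prime[: n_way * emb_dim].view(n_way, emb_dim)
        b = theta_prime[n_way * emb_dim:]
        return F.linear(query_emb, w, b)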
3.3 HYPERMAML - TRAINING
For training the model we assume that the encoder E(·) is parametrized by γ, E := Eγ, and the hypernetwork H(·) by η, H := Hη. The training procedure is described in Algorithm 2. First, we sample a batch of tasks B from the given dataset D. Next, for each task Ti in batch B we calculate the update ∆θi using the support set Si, and obtain the updated parameters θ′i according to equation 5. Finally, the objective used to train the parameters of the system is calculated using the query sets of the tasks Ti in the batch:
LHyperMAML(fθ) = ∑_{Ti∈B} LQi(fθ′i) = ∑_{Ti∈B} LQi(fθ+∆θi), (6)
where LQi(fθ′i) is given by equation 2. The parameters of the encoder and the hypernetwork, together with the global parameters θ, constitute the meta-parameters of the system, and they are updated with stochastic gradient descent (SGD) by optimizing LHyperMAML(fθ).
Adaptation to the Few-Shot scenario. The proposed method can be easily extended for Few-Shot scenarios following the aggregation technique from (Sendera et al., 2022). For our approach, we aggregate the embedding values of the support examples from the same class using mean operation. In addition, the corresponding predictions within the class are also averaged and concatenated to the averaged per class embedding together with the true class label, and further processed via the hypernetwork.
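A sketch of this class-wise aggregation is shown below; the tensor shapes are illustrative and follow the one-prototype-per-class convention described above.

import torch
import torch.nn.functional as F

def aggregate_support(embeddings, predictions, labels, n_way):
    # embeddings: (L, D), predictions: (L, n_way), labels: (L,) integer class ids
    rows = []
    for c in range(n_way):
        mask = labels == c
        mean_emb = embeddings[mask].mean(dim=0)            # per-class mean embedding
        mean_pred = predictions[mask].mean(dim=0)          # per-class mean prediction
        one_hot = F.one_hot(torch.tensor(c), n_way).float()
        rows.append(torch.cat([mean_emb, mean_pred, one_hot]))
    return torch.stack(rows)                               # (n_way, D + 2 * n_way)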
Warming-up universal weights In practice, it is not trivial to initialize the universal weights of HyperMAML. With a classical initialization, the universal weights are not updated and only the Hypernetwork's parameters change. To solve this problem we use a smooth transition from the gradient-based update to the Hypernetwork update:
Algorithm 2 HyperMAML
Require: D = {Tn}Nn=1: set of training tasks
Require: β: step size hyperparameter
1: randomly initialize θ, γ, η
2: while not done do
3:   Sample batch of tasks B from D
4:   for each task Ti = {Si, Qi} from batch B do
5:     Compute adapted parameters θ′i from Si using equation 5
6:   end for
7:   Calculate the loss LHyperMAML(fθ) given by equation 6
8:   θ ← θ − β∇θLHyperMAML(fθ)   ▷ Update the global target parameters θ
9:   η ← η − β∇ηLHyperMAML(fθ)   ▷ Update the parameters of the hypernetwork Hη
10:  γ ← γ − β∇γLHyperMAML(fθ)   ▷ Update the parameters of the Encoder Eγ
11: end while
θ′ = θ + λ ·H(ES , ŶS ,YS)− (1− λ) · α∇θLSi(fθ) (7)
where λ changes from zero to one over a few initial training epochs.
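The schedule of λ is not specified beyond this description; a simple linear ramp, sketched below under assumed milestone epochs, would realize it.

def warmup_lambda(epoch, start_epoch=100, end_epoch=200):
    # lambda = 0 -> pure gradient (MAML-style) update, lambda = 1 -> pure hypernetwork update
    # start_epoch and end_epoch are illustrative; the actual milestones are hyperparameters
    if epoch <= start_epoch:
        return 0.0
    if epoch >= end_epoch:
        return 1.0
    return (epoch - start_epoch) / (end_epoch - start_epoch)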
4 EXPERIMENTS
In the typical Few-Shot learning setting, making a meaningful and fair comparison between models is often complicated because of significant differences in the architectures and implementations of known methods. In order to limit the influence of deeper backbone (feature extractor) architectures, we follow the unified procedure proposed by Chen et al. (2019)1.
In all of the reported experiments, the tasks consist of 5 classes (5-way) and 1 or 5 support examples per class (1- or 5-shot). Unless indicated otherwise, all compared models use a widely utilized backbone consisting of four convolutional layers (each consisting of a 2D convolution, a batch-norm layer, and a ReLU non-linearity, with 64 channels per layer) (Chen et al., 2019). The models are trained from scratch, except for the models trained on mini-ImageNet, where they are initialized with a pretrained backbone, following (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021). In all experiments, the query set of each task consists of 16 samples per class (80 in total). We split the datasets into the standard train, validation, and test class subsets used commonly in the literature (Ravi & Larochelle, 2017; Chen et al., 2019; Patacchiola et al., 2020). We provide additional training details in Section E of the Appendix.
4.1 CLASSIFICATION
First, we consider the classical Few-Shot learning scenario. We benchmark the performance of HyperMAML and other methods on two challenging and widely considered datasets: Caltech-UCSD Birds (CUB) (Wah et al., 2011) and mini-ImageNet (Ravi & Larochelle, 2017). In the case of the mini-ImageNet dataset we initialize the backbone with pretrained weights, following (Qiao et al., 2017; Ye et al., 2021). We compare HyperMAML to a number of MAML-related and Hypernetwork-based methods, as well as the current state-of-the-art algorithms, in the tasks of 1-shot and 5-shot classification, and report the results in Table 1. We report a comparison to a wider pool of few-shot learning methods, as well as the results of models utilizing a larger backbone, in Tables 6 and 7 in the Appendix.
In the 1-shot scenario, HyperMAML yields top-performing results (66.11%) on the CUB dataset, inferior only to FEAT (Ye et al., 2021) (68.87%). On the mini-ImageNet dataset, HyperMAML is among the five best methods, achieving an accuracy of 53.41%. In the 5-shot setting, HyperMAML is among the top-3 best-performing models on both the CUB and mini-ImageNet datasets,
1 An anonymized version of our code is available at https://anonymous.4open.science/r/few-shot-hypernets-public-DB4F. We shall release the code with our experiments after the end of the review period.
achieving 78.89% and 68.76% accuracy, with FEAT (Ye et al., 2021) and HyperShot (Sendera et al., 2022) outperforming it by a small margin.
The obtained results show that HyperMAML achieves performance better or comparable to a variety of Few-Shot learning methods, in particular MAML (Finn et al., 2017), as well as techniques which derive from it (Antoniou et al., 2018; Rajeswaran et al., 2019; Fan et al., 2021; Yoon et al., 2018; Ye & Chao, 2021).
4.2 CROSS-DOMAIN ADAPTATION
In the cross-domain adaptation setting, the model is evaluated on tasks coming from a different distribution than the one it had been trained on. Therefore, such a task is more challenging than standard classification and is a plausible indicator of a model's ability to generalize. In order to benchmark the performance of HyperMAML in cross-domain adaptation, we combine two datasets so that the training fold is drawn from one dataset, and the validation and testing folds from the other. We report the results in Table 2. In the task of 1-shot Omniglot→EMNIST classification, HyperMAML achieves the second-best result (79.84%), with HyperShot+finetuning (80.65%) being the top one. Compared to the other methods, we observe relatively smaller performance growth as more data becomes available. In the 5-shot Omniglot→EMNIST classification task, HyperMAML yields results (89.22%) comparable to HyperShot (Sendera et al., 2022) (90.81%) and DKT (Patacchiola et al., 2020) (90.30%), which are the state of the art in this setting. In the most challenging task of mini-ImageNet→CUB classification, our method performs comparably to baseline methods such as MAML, ProtoNet and Matching Net (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017), particularly in the 1-shot setting.
4.3 PARAMETER UPDATE MAGNITUDE
Next, we consider the ability of HyperMAML to produce the correct weight updates. One of the drawbacks of MAML is that, in practice, gradient updates change weights very slowly, especially when meta tasks require completely different weights (see Fig. 3). On the other hand, Hypernetworks can produce significant updates. To verify such behaviour on a real, non-trivial dataset, we calculate the norm of classifier weight updates in MAML and HyperMAML trained for Omniglot→EMNIST classification and report the results in Table 3.
As we can see, HyperMAML produces larger updates, which, combined with higher accuracy than MAML (see Table 2), suggests faster and more accurate convergence.
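The statistic behind this comparison can be computed as below; we assume the reported quantity is the Euclidean norm of the flattened classifier update, which is our reading of Table 3 rather than a stated definition.

import torch

def update_norm(theta, theta_prime):
    # L2 norm of the adaptation step ||theta' - theta|| for the classifier weights
    return torch.norm(theta_prime - theta).item()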
For classical MAML, there exist a few modifications of the updating procedure (inner loop), such as MAML++ (Antoniou et al., 2018) or Meta-SGD (Li et al., 2017). Such modifications are needed in classical MAML since a few gradient updates do not always guarantee convergence. In contrast, HyperMAML provides a possible solution to this problem through a novel update that is not generated directly by gradient descent, but rather by a forward pass of a Hypernetwork.
4.4 COMPUTATIONAL EFFICIENCY
Finally, we verify the hypothesis that HyperMAML offers increased computational efficiency compared to MAML. To this end, we measure the time needed to process the entire Omniglot→EMNIST test dataset (600 tasks in total) by MAML with different numbers of gradient steps and by HyperMAML, and report the results in Table 4, with results for other datasets in Table 8 in the Appendix.
We find that processing the test data with HyperMAML takes approximately the same time as using MAML with just 2 gradient updates. We also note that even given the budget of 100 gradient updates, MAML never matches the accuracy achieved by a single update generated by HyperMAML.
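A sketch of the timing protocol, assuming an adapt-and-evaluate callable per task and simple wall-clock measurement, is given below; the exact measurement code used for Table 4 is not specified in the paper.

import time

def time_test_set(adapt_and_eval, tasks):
    # wall-clock time to adapt to and evaluate every test task (e.g. the 600 tasks above)
    start = time.perf_counter()
    accuracies = [adapt_and_eval(task) for task in tasks]
    elapsed = time.perf_counter() - start
    return elapsed, sum(accuracies) / len(accuracies)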
5 CONCLUSIONS
In this work, we introduced HyperMAML – a novel Meta-Learning algorithm strongly motivated by MAML (Finn et al., 2017). The crucial difference between the two methods lies in the fact that in HyperMAML the update of the model is given not by gradient optimization, but by a trainable Hypernetwork. Consequently, HyperMAML is more computationally efficient than MAML, as during adaptation it performs only a single parameter update. Moreover, HyperMAML does not use backpropagation during adaptation, which makes it more biologically plausible; it also adapts more easily to different settings and has a smaller number of hyperparameters. Our experiments show that HyperMAML outperforms classical MAML in a number of standard Few-Shot learning benchmarks and achieves results better than or comparable to various other state-of-the-art methods in most cases.
Our results indicate that Few-Shot weight adaptation can be performed directly, without calculating the actual loss or gradients – a more biologically plausible learning method (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Moreover, HyperMAML adapts to new tasks quicker than MAML, which needs to be tuned with many gradient updates. Thus, HyperMAML is a step toward more efficient and environment-friendly Meta-Learning techniques.
B ABLATION STUDY OF MECHANISMS USED IN HYPERMAML
In this section, we present two mechanisms we utilize when training HyperMAML.
Switching mechanism is a smooth transition between training the convolutional encoder with the MAML objective and with the HyperMAML objective. We use a MAML warm-up as the starting point of the training loop and then smoothly move towards HyperMAML training. During training, we define two "milestone" epochs (see Section E.3), between which the transition occurs. During the transition, we continuously decrease the participation of the MAML objective in the training process. This is done by multiplying the MAML loss by p, ranging from 1.0 to 0.0 over a given number of epochs, and multiplying the HyperMAML loss by 1 − p. Our motivation for this mechanism is to train better universal weights by optimizing the MAML objective during the warm-up part of the training and then switch to the HyperMAML objective gradually.
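In code, the blended objective amounts to a convex combination of the two losses; the sketch below assumes p is annealed linearly between the milestone epochs.

def switching_loss(maml_loss, hypermaml_loss, p):
    # p = 1.0 at the start of the transition (pure MAML), p = 0.0 at its end (pure HyperMAML)
    return p * maml_loss + (1.0 - p) * hypermaml_loss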
Enhancement of the embeddings is another mechanism in our framework. When preparing the input to the Hypernetwork from the support set, we first obtain support embeddings from the encoder. Then, we forward those embeddings through the universal classifier and obtain its predictions for support examples. We concatenate those predictions to the support embeddings, as well as their respective ground-truth labels. The whole process is visualized in Figure 4. Our motivation to perform such an operation is to give the Hypernetwork the information about the current decision of the classifier, to better estimate the updates to its weights. We note that even though the Hypernetwork generates the weights of the classifier based on the enhanced support embeddings, the generated downstream classifier does not use enhancements when processing the query set. Instead, it only processes the raw embeddings obtained from the encoder (see Figure 2 from the main paper).
We perform an ablation study of both mechanisms on the task of 5-shot, 5-way Omniglot→EMNIST classification. The results, reported in Table 5, indicate that both mechanisms utilized individually improve the performance of HyperMAML, and the combination of the two yields the best results.
C FULL CLASSIFICATION RESULTS
We provide an expanded version of Table 1 from the main paper, with numerous additional baseline methods – see Table 6. We also report the performance of HyperMAML, as well as several baseline methods, trained with ResNet-12 (He et al., 2015) as the backbone on the mini-ImageNet dataset in Table 7.
D COMPUTATIONAL EFFICIENCY – RESULTS ON NATURAL IMAGE DATASETS
In this section, we perform experiments similar to those described in Section 4.4 of the main paper and measure the inference time of HyperMAML and MAML with different numbers of gradient steps on the CUB and mini-ImageNet datasets. We summarize the results in Table 8. Similarly to the benchmark performed on smaller images from the Omniglot→EMNIST dataset, the inference time of HyperMAML is comparable to MAML with two gradient steps. Likewise, MAML never achieves higher accuracy than HyperMAML.
As opposed to Section 4.4 of the main paper, we report the accuracies of MAML only up to seven gradient steps. This is due to an insufficient amount of GPU memory available for making more steps in the MAML implementation we used (Chen et al., 2019). We also note that the accuracies of MAML reported here are significantly higher than the ones reported in Table 1 of the main paper. The MAML accuracies from that table were previously reported in the literature (Patacchiola et al., 2020), whereas the results in Table 8 have been obtained by the MAML implementation in our codebase (Chen et al., 2019).
E TRAINING DETAILS
In this section, we present details of the training and architecture overview.
E.1 ARCHITECTURE OVERVIEW
The architecture of HyperMAML consists of the following parts (as outlined in the Figure 2 in the main body of the work):
Encoder For each experiment described in the main body of this work, we utilize a shallow convolutional encoder (feature extractor), commonly used in the literature (Finn et al., 2017; Chen et al., 2019; Patacchiola et al., 2020). This encoder consists of four convolutional layers, each
consisting of a convolution, batch normalization, and ReLU nonlinearity. Each of the convolutional layers has an input and output size of 64, except for the first layer, where the input size is equal to the number of image channels. We also apply max-pooling between each convolution, by which the resolution of the processed feature maps is decreased by half. The output of the encoder is flattened to process it in the next layers.
For the mini-ImageNet dataset we additionally test the performance of HyperMAML with a larger backbone – namely ResNet-12 (He et al., 2015).
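The Conv-4 backbone described above can be written compactly as follows; this is a standard sketch of the architecture, not the released implementation.

import torch.nn as nn

def conv4_encoder(in_channels=3, hidden=64):
    layers = []
    for i in range(4):
        layers += [
            nn.Conv2d(in_channels if i == 0 else hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # halves the spatial resolution after each block
        ]
    layers.append(nn.Flatten())
    return nn.Sequential(*layers)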
Hypernetwork The Hypernetwork transforms the enhanced embeddings of the support examples of each class in a task into the updates for the portion of classifier weights predicting that class. It consists of two or three fully-connected layers with ReLU activation function between each consecutive pair of layers. In the hypernetwork, we use a hidden size of 256 or 512.
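A sketch of this per-class structure is given below; mapping each class's enhanced embedding to the update of the corresponding classifier row is our reading of the description above, and the hidden size is one of the values it mentions.

import torch.nn as nn

class PerClassUpdateGenerator(nn.Module):
    def __init__(self, emb_dim=64, n_way=5, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + 2 * n_way, hidden),
            nn.ReLU(),
            nn.Linear(hidden, emb_dim + 1),  # update for one classifier row plus its bias
        )

    def forward(self, enhanced):             # enhanced: (n_way, emb_dim + 2 * n_way)
        out = self.net(enhanced)             # (n_way, emb_dim + 1)
        return out[:, :-1], out[:, -1]       # (delta W rows, delta b)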
Classifier The universal classifier is a single fully-connected layer with the input size equal to the encoder embedding size and the output size equal to the number of classes. When using the embedding-enhancement strategy, we freeze the classifier so that we only read out its current behavior; that is, we do not calculate the gradient for the classifier in this step of the forward pass. Instead, gradient calculation for the classifier takes place during the classification of the query data.
E.2 TRAINING DETAILS
In all of the experiments described in the main body of this work, we utilize the switch and the embedding enhancement mechanisms. During training, we use the Adam optimizer (Kingma & Ba, 2014) and the MultiStepLR learning rate scheduler with the decay of 0.3 and learning rate starting from 0.01 or 0.001. We train HyperMAML for 4000 epochs on all the datasets, save for the simpler Omniglot→ EMNIST classification task, where we train for 2048 epochs instead. For the mini-ImageNet experiments we follow a strategy of using a pre-trained backbone, suggested by (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021; Ye & Chao, 2021). More specifically, at the beginning of the training we initialize the Encoder of HyperMAML with weights of backbone pretrained for classification of all 64 classes from the mini-ImageNet training set. In practice, for consistency, we use the identical set of pretrained weights as Ye et al. (2021).
E.3 HYPERPARAMETERS
Below, we outline the hyperparameters of architecture and training procedures used in each experiment.
F IMPLEMENTATION DETAILS
We implement HyperMAML using the PyTorch framework (Paszke et al., 2019). We shall release the code publicly after the end of the review period. Each experiment described in this work was run on a single NVIDIA RTX 2080 GPU. | 1. What is the focus and contribution of the paper on Model-Agnostic Meta-Learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its architecture and limitations?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the motivation, experiment setup, and hyperparameter selection for HyperMAML? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes an algorithm for training a hypernetwork that predicts the update step in Model-Agnostic Meta-Learning (MAML). The key idea of HyperMAML is to explicitly construct the hypernetwork and directly output the updated parameter. Empirically, the proposed method achieves competitive performance on CUB and mini-ImageNet compared to the baseline methods.
Strengths And Weaknesses
The paper is clearly written and the contribution of the paper is relevant to the ICLR community. However, I have the following concerns:
There is not much discussion on how the hypernetworks were designed and the authors did not provide any theoretical motivation or justification for their proposed architecture.
The paper does not sufficiently discuss the limitation of HyperMAML. Depending on the base network, HyperMAML may require significant computation and memory overheads. Moreover, the authors do not discuss additional hyperparameters to HyperMAML (e.g., are we using the same learning rate for the encoder, hypernetwork, and the shared weight?).
The results are slightly worse than the current state-of-the-art. Moreover, the paper does not discuss the computation and memory overhead in training HyperMAML compared to other baseline methods.
Clarity, Quality, Novelty And Reproducibility
Originality
The hypernetwork approaches have been previously proposed in various other settings such as continual learning [1] and meta-learning [2].
The proposed method HyperMAML is an extension of these hypernetwork approaches in the context of MAML and I believe that technical novelty is limited.
Clarity
The paper is clear and well-written.
However, the motivation of HyperMAML in the introduction is not convincing and I elaborated on this in the additional comments below.
Reproducibility
The authors did not provide details of the experiment setup and I believe that it would be difficult to reproduce the experiments at the current state.
The authors also did not describe how hyperparameters for HyperMAML were selected.
Additional Comments
I felt that the initial motivation in the introduction (regarding neuroscience) is not very convincing. Arguing that hypernetworks are more biologically feasible as they can explicitly learn the update step is an overstretch. In fact, these hypernetworks themselves are trained with gradient-based optimizers, so I don’t understand how it becomes more biologically feasible. If the authors want to argue that HyperMAML is a more biologically feasible solution, I believe that there should be much more justification.
What happens to the encoder architecture when K is not fixed? Does it support flexible input dimensions?
When concatenating the embedding and the target, two vectors would have a different scale. Do you normalize this vector when training the hypernetwork?
I believe that the architecture of the hypernetwork is one of the core elements in HyperMAML and should be discussed in more detail (and in the main text). Furthermore, how large are the encoder and hypernetwork? If we have fully-connected layers in the base network, wouldn't the output dimension be extremely large and infeasible to compute?
Minor Comments
Figure 1 was difficult to read when printed on paper (due to low resolution).
I believe that Fig. 3 does not appear in the main text and it should reference the Appendix in Section 3.1.
[1] Von Oswald, J., Henning, C., Sacramento, J., & Grewe, B. F. (2019). Continual learning with hypernetworks. arXiv preprint arXiv:1906.00695.
[2] Zhao, D., von Oswald, J., Kobayashi, S., Sacramento, J., & Grewe, B. F. (2020). Meta-learning via hypernetworks. |
ICLR | Title
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks
Abstract
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen tasks, based on small amounts of data. One of the most popular and elegant Few-Shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn the general weights of the meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the model’s main limitation lies in the fact that the update procedure is realized by gradient-based optimisation. In consequence, MAML cannot always modify weights to the essential level in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure, which is hard to train in practice, and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML, where the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use for this purpose a trainable Hypernetwork. Consequently, in this framework, the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML outperforms MAML in most cases and performs comparably to other state-of-the-art techniques in a number of standard Few-Shot learning benchmarks.
1 INTRODUCTION
In the typical Few-Shot learning setting, the aim is to adapt to new tasks under the assumption that only a few examples are given. As we know, people typically learn new tasks easily by using only a few training examples. On the contrary, a standard deep neural network must be trained on an extensive amount of data to obtain a similar accuracy. Thus, the aim of Few-Shot learning models is to bring neural networks closer to the human brain’s capabilities. The most famous and, in our opinion, the most elegant approach to Few-Shot learning is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), where the model is trained to adapt universal weights to new Few-Shot learning tasks quickly. It seems that the brain’s neural networks can adapt to new tasks too, by applying the fact that during the process of evolution, some of its parts have developed universal weights which are easily adaptable to typical tasks we encounter in real life. Thus, the idea behind MAML gives us a possible insight into the working of the brain.
The fascinating factor of human intelligence is that the human learning process, although still not understood, is clearly not based on the gradient descent algorithm, as we cannot in general backpropagate the information (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Thus, from the biological point of view, the main limitation of MAML is the fact that it uses the gradient descent method for weight updates. The main research problem that we set for ourselves is whether one can modify MAML to be more biologically feasible, i.e. keep its ability to find universal weight but remove the necessity of using gradient-based update methods.
We solve this problem by constructing HyperMAML, a model which replaces the gradient optimization in the update of weights by trainable update procedure with the use of the Hypernetwork paradigm. Hypernetworks, introduced in (Ha et al., 2016) are defined as neural models that generate weights for a separate target network solving a specific task. In our model, HyperMAML, the Hypernetwork aggregates the information from the support set and produces an update to the main model. Thanks to such an approach, we can create various types of updates that are not limited to a
few gradient steps. Moreover, hypernetworks have previously been used as models for biological information processing, which suggests that such models are more biologically plausible than the gradient-based techniques such as MAML (Segovia-Juarez & Conrad, 1999). In practice, MAML works when there exist universal weights that are close enough to the optimal solution for each task. To visualize such a situation, we present a simple 2D example, where a single gradient update fails to sufficiently adapt the model to a given task – see Fig. 3. We cannot effectively switch weight in one gradient step. On the other hand, when we use many gradient steps, we obtain a complex optimization procedure that uses an inner and outer loop. Such a procedure can be seen as second-order optimization, which is complex to train (Finn et al., 2017). Contrary to MAML we do not need an inner loop in the optimization procedure, and consequently, we do not have second-order optimization. We also reduce the number of hyperparameters, which are used in the inner loop of MAML approach, which would need to be tuned in a grid search. As a result, our algorithm obtains better results than the classical MAML algorithm and produces results comparable to other state-of-the-art algorithms.
The contributions of our work can be summarized as follows:
• We introduce HyperMAML, a novel approach to the Few-Shot learning problem by aggregating information from the support set and directly producing weights updates.
• In HyperMAML, we do not use loss calculation or gradient backpropagation for the update to the new task, thus making the model more biologically feasible and computationally efficient.
• We significantly increase the update ability compared to the classical MAML algorithm, as evidenced by the increased accuracy in numerous benchmarks we perform.
2 RELATED WORK
The problem of Meta-Learning and Few-Shot learning (Hospedales et al., 2020; Schmidhuber, 1992; Bengio et al., 1992) has received a growing amount of attention from the scientific community over the recent years, with the abundance of methods emerging as a result. The idea of HyperMAML is influenced by two kinds of such methods:
Model-based methods aim to adapt to novel tasks quickly by utilizing mechanisms such as memory (Ravi & Larochelle, 2017; Santoro et al., 2016; Mishra et al., 2018; Zhen et al., 2020), Gaussian Processes (Rasmussen, 2003; Patacchiola et al., 2020; Wang et al., 2021; Sendera et al., 2021), or generating fast weights based on the support set with set-to-set architectures (Qiao et al., 2017; Bauer et al., 2017; Ye et al., 2021; Zhmoginov et al., 2022). Other approaches combine weight generators with gradient-based optimizers by choosing target weights from a set of templates (Zhao et al., 2020) or optimizing low-dimensional embeddings which condition the target weight generator (Rusu et al., 2019). The fast weights approaches can be interpreted as using Hypernetworks (Ha et al., 2016) – models which learn to generate the parameters of neural networks performing the designated tasks.
Similarly, HyperMAML utilizes a Hypernetwork to generate weights updates for performing specific tasks. The key difference is that in HyperMAML, the Hypernetwork is not the sole source of model weights. Instead, following (Finn et al., 2017), HyperMAML maintains a set of universal weights and uses the hypernetwork to generate the updates to those weights for novel tasks.
Optimization-based methods, such as MetaOptNet (Lee et al., 2019) are based on the idea of an optimization process over the support set within the Meta-Learning framework. Arguably, the most popular of this family of methods is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which inspired a multitude of research and numerous extensions to the original algorithm. This includes various techniques for stabilizing its training and improving performance, such as Multi-Step Loss Optimization, and scheduling the learning rate of the meta-optimizer (Antoniou et al., 2018) , using the Bayesian variant of MAML (Yoon et al., 2018), or making MAML permutation-invariant Ye & Chao (2021).
Due to a need for calculating second-order derivatives when computing the gradient of the metatraining loss, training the classical MAML introduces a significant computational overhead. The authors show that in practice the second-order derivatives can be omitted at the cost of small gradient estimation error and minimally reduced accuracy of the model (Finn et al., 2017; Nichol et al., 2018). Methods such as iMAML and Sign-MAML propose to solve this issue with implicit gradients or Sign-SGD optimization (Rajeswaran et al., 2019; Fan et al., 2021). The optimization process can also be improved by training not only the base initialization of the model but also the optimizer itself – namely, training a neural network that transforms gradients calculated w.r.t. loss of the support set predictions into weight updates (Munkhdalai & Yu, 2017; Munkhdalai et al., 2018; Li et al., 2017; Rajasegaran et al., 2020).
HyperMAML shares a key characteristic with the optimization-based methods – namely, it also utilizes a base set of weights, which are updated to obtain a model fit for a given task. The key difference between HyperMAML and MAML is that while MAML adapts to novel tasks through multiple steps of gradient-based optimization, HyperMAML generates the updates in a single step using a Hypernetwork. This makes HyperMAML more similar to methods like (Li et al., 2017; Munkhdalai & Yu, 2017), which generate weight updates through trained meta-optimizers. However, contrary to those approaches, in HyperMAML the Hypernetwork predicts the weight updates based on (i) latent representation of the support set, (ii) predictions of the base model for the support set, (iii) ground-truth labels of the support examples (see Fig. 2). Thus HyperMAML does not require calculating either the loss function or its gradients during generating of the task-specific weight updates, making it more computationally efficient.
3 HYPERMAML: HYPERNETWORK FOR FEW-SHOT LEARNING
In this section, we present our HyperMAML model for Few-Shot learning. First, we start by presenting background and notations for Few-Shot learning. Then we describe how the MAML algorithm works. Finally, we present HyperMAML, which can be understood as an extension of the classical MAML.
Algorithm 1 MAML - Model-Agnostic Meta-Learning (Finn et al., 2017) Require: D = {Tn}Nn=1: set of training tasks Require: α, β: step size hyper parameters
1: randomly initialize θ 2: while not done do 3: Sample batch of tasks B from D 4: for each task Ti = {Si,Qi} from batch B do 5: Evaluate ∇θLSi(fθ) with respect to examples from Si given loss by equation 2 6: Compute adapted parameters θ′i with gradient descent using formula given by eq. equa-
tion 1 7: end for 8: Update the global parameters of the model θ with formula given by eq. equation 4.
3.1 BACKGROUND
The terminology describing the Few-Shot learning setup is dispersive due to the colliding definitions used in the literature. Here, we use the nomenclature derived from the Meta-Learning literature, which is the most prevalent at the time of writing. Let S = {(xl,yl)}Ll=1 be a support-set containing input-output pairs, with L examples with the equal class distribution. In the one-shot scenario, each class is represented by a single example, and L = K, where K is the number of the considered classes in the given task. Whereas, for Few-Shot scenarios, each class usually has from 2 to 5 representatives in the support set S. Let Q = {(xm,ym)}Mm=1 be a query-set (sometimes referred to in the literature as a target-set), with M examples, where M is typically one order of magnitude greater than K. For clarity of notation, the support and query sets are grouped in a task T = {S,Q}. During the training stage, the models for Few-Shot applications are fed by randomly selected examples from training set D = {Tn}Nn=1, defined as a collection of such tasks.
During the inference stage, we consider task T∗ = {S∗,X∗}, where S∗ is a support set with the known class values for a given task, and X∗ is a set of query (unlabeled) inputs. The goal is to predict the class labels for query inputs x ∈ X∗, assuming support set S∗ and using the model trained on D.
Model-Agnostic Meta-Learning (MAML) is one of the current standard algorithms for Few-Shot learning, which learns the parameters of a model so that it can adapt to a new task in a few gradient steps.
We consider a model represented by a function fθ with parameters θ. In the Few-Shot problems fθ models discriminative probabilities for the classes, fθ(x) = p(y|x, θ). The standard MAML model is trained with the procedure given by Algorithm 1. In each of the training iterations the batch of tasks B is sampled from D. Further, for each task Ti = {Si,Qi} from B, MAML adapts the model’s parameters θ′i that are specific for a given task. The actual values of θ ′ i are calculated using one or more gradient descent updates. In the simplest case of one gradient iteration, the parameters are updated as follows:
θ′i = θ − α∇θLSi(fθ), (1)
where α is the step size that may be fixed as a hyperparameter or meta-learned, and the loss function for a set of observations Z is defined as LZ for the few shot scenario is represented as a simple cross-entropy:
LZ(fθ) = ∑
(xi,l,yi,l)∈Z
K∑ k=1 −yki,l log fθ,k(xi,j), (2)
where fθ,k(xi,j) denotes k-th output of the model fθ, for a given input xi,l, and yi,l is corresponding class in one-hot coding. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. After calculating the tasks-specific updates θ′i the general model parameters are trained by optimizing for the performance
of fθ′i with respect to θ across tasks from batch B. More concretely, the meta-objective used to train the general parameters of the models is as follows:
LMAML(fθ) = ∑ Ti∈B LQi(fθ′) = ∑ Ti∈B LQi(fθ−α∇θLSi (fθ)), (3)
Note that the meta-optimization is performed over the model parameters θ, whereas the objective is computed using the updated model parameters θ′. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.
The meta-optimization across tasks is performed via stochastic gradient descent (SGD) such that the model parameters θ are updated as follows:
θ ← θ − β∇θLMAML(fθ), (4)
where β is the meta step size.
During the inference stage, in order to perform predictions for newly observed task T∗ = {S∗,X∗} the loss function LS∗(fθ) is calculated first using eq. equation 2. Next, the parameters θ′∗ for task T∗ are calculated from eq. equation 1. The final predictions for query examples X∗ are performed by the model fθ′∗ , where for selected query example xq ∈ X∗ we have p(y|xq, θ ′ ∗) = fθ′∗(xq, θ ′ ∗).
The main limitation of the approach is that it produces the general weights for all possible tasks, and the adjustment is performed via a gradient-based approach performed on the support set. For some non-trivial challenging tasks, the dedicated parameters θ′∗ may be located far from the base weights, θ. Consequently, the adaptation procedure would require significantly more gradient steps, the training may be unstable, and the model will tend to overfit to support set. To overcome this limitation, we propose to replace the gradient-based adaption with the Hypernetwork approach, where the update step is returned by an additional deep model that extracts the information from the support set (see toy example in Fig. 3).
3.2 HYPERMAML - OVERVIEW
We introduce our HyperMAML – a model that utilizes Hypernetworks for modeling weights updates in the classical MAML algorithm. The main idea of the proposed updating scenario is to use information extracted from support examples and predictions given by universal weights to find optimal updates dedicated to a given task. Thanks to this approach, we can switch the classifier’s parameters between completely different tasks based on the support set and existing prediction of the universal weights.
The architecture of the HyperMAML is provided in Fig. 2. In this section we present the model for one-shot scenario, and further discuss how to extend it to Few-Shot problems. We aim at predicting the class distribution p(y|xq,S), assuming given single query example xq , and the set of support examples S. Following the idea from MAML we consider the parameterized function fθ, that models the discriminative distribution for the classes. In addition, in our architecture we distinguish the trainable encoding network E(·), that transforms data to low-dimensional representation. We postulate to calculate p(y|xq, θ′) = fθ′(eq), where eq is the query example xq transformed using encoder E(·), and θ′ represents the updated parameters for a considered task, θ′ = θ+∆θ. Compared to gradient-based adaptation step described by equation 1 used to calculate ∆θ we propose to predict the values using hypernetwork, directly from support set.
Each of the inputs from support set XS is transformed by Encoder E(·) in order to obtain lowdimensional matrix of embeddings ES = [eS,1, . . . , eS,K ]T. Next, the corresponding class labels for support examples, YS = [yS,1, . . . ,yS,K ]T are concatenated to the corresponding embeddings stored in the rows of matrix ES . In addition, we also calculate the predicted values for the examples from the support set using the general model fθ(ES) = ŶS , and also concatenate them to ES . The matrix transformed support inputs ES , together with true support labels YS , and corresponding predictions ŶS returned by general model are delivered as an input to the hypernetwork H(·) that returns the update ∆θ. The hypernetwork consists of fully-connected layers with ReLU activations – see section E.1 in the Appendix for details. The parameters for final target model are calculated with the following formula:
θ′ = θ +∆θ = θ +H(ES , ŶS ,YS). (5)
Practically, the Hypernetwork observes the support examples with the corresponding true values and decides how the global parameters θ should be adjusted to the considered task. In addition, the predictions from global model fθ are also delivered to the model in order to identify the misclassifications and try to correct them during the update state.
3.3 HYPERMAML - TRAINING
For training the model we assume that encoder E(·) is parametrized by γ, E := Eγ , and the hypernetwork H(·) by η, H := Hη. The training procedure is described in Algorithm 2. First, we sample the batch of tasks B from the given dataset D. Next, for each task Ti in batch B we calculate the update ∆θi using the support set Si, and provide the updated parameters θ′ according to the rule given by eq. equation 5. Finally, the objective to train the parameters of the system is calculated using the query sets from the batch tasks Ti:
LHyperMAML(fθ) = ∑ Ti∈B LQi(fθ′) = ∑ Ti∈B LQi(fθ+∆θ ), (6)
where LQi(fθ′) is given by eq. equation 2. The parameters of the encoder, hypernetwork, and global parameters θ represent the meta parameters of the system, and they are updated with stochastic gradient descent (SGD) by optimizing LHyperMAML(fθ).
Adaptation to the Few-Shot scenario. The proposed method can be easily extended for Few-Shot scenarios following the aggregation technique from (Sendera et al., 2022). For our approach, we aggregate the embedding values of the support examples from the same class using mean operation. In addition, the corresponding predictions within the class are also averaged and concatenated to the averaged per class embedding together with the true class label, and further processed via the hypernetwork.
Warming-up universal weights In practice, it is not trivial to initialize the universal weights of HyperMAML. Classical initialization does not allow to update the universal weights and only the Hypernetwork’s parameters are changing. To solve this problem we use a smooth transition from gradient to Hypernetwork update:
Algorithm 2 HyperMAML Require: D = {Tn}Nn=1: set of training tasks Require: β: step size hyper parameter
1: randomly initialize θ, γ, η 2: while not done do 3: Sample batch of tasks B from D 4: for each task Ti = {Si,Qi} from batch B do 5: Compute adapted parameters θ′i from Si using formula given by eq. equation 5 6: end for 7: Calculate the loss LHyperMAML(fθ) given by eq. equation 6. 8: θ ← θ − β∇θLHyperMAML(fθ) ▷ Update the global target parameters θ 9: η ← η − β∇ηLHyperMAML(fθ) ▷ Update parameters of the hypernetwork Hη 10: γ ← γ − β∇γLHyperMAML(fθ) ▷ Update the parameters of the Encoder Eγ
θ′ = θ + λ ·H(ES , ŶS ,YS)− (1− λ) · α∇θLSi(fθ) (7)
where λ is changing from zero to one in a few initial training epochs.
4 EXPERIMENTS
In the typical Few-Shot learning setting, making a valuable and fair comparison between proposed models is often complicated because of the existence of the significant differences in architectures and implementations of known methods. In order to limit the influence of the deeper backbone (feature extractor) architectures, we follow the unified procedure proposed by (Chen et al., 2019) 1.
In all of the reported experiments, the tasks consist of 5 classes (5-way) and 1 or 5 support examples (1 or 5-shot). Unless indicated otherwise, all compared models use a known and widely utilized backbone consisting of four convolutional layers (each consisting of a 2D convolution, a batch-norm layer, and a ReLU non-linearity; each layer consists of 64 channels (Chen et al., 2019). The models are trained from scratch, except for the models trained on mini-ImageNet where they are initialized with a pretrained backbone, following (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021). In all experiments, the query set of each task consists of 16 samples for each class (80 in total). We split the datasets into the standard train, validation, and test class subsets, used commonly in the literature (Ravi & Larochelle, 2017; Chen et al., 2019; Patacchiola et al., 2020). We provide the additional training details in Section E of the Appendix.
4.1 CLASSIFICATION
First, we consider the classical Few-Shot learning scenario. We benchmark the performance of the HyperMAML and other methods on two challenging and widely considered datasets: Caltech-USCD Birds (CUB) (Wah et al., 2011) and mini-ImageNet (Ravi & Larochelle, 2017). In case of the mini-ImageNet dataset we initialize the backbone with pretrained weights, following (Qiao et al., 2017; Ye et al., 2021). We compare HyperMAML to a number of MAML-related and Hypernetworkbased methods, as well as the current state-of-the-art algorithms in the tasks of 1-shot and 5-shot classification, and report the results in Table 1. We report a comparison to a wider pool of few-shot learning methods, as well as the results of models utilizing a larger backbone in the Tables 6 and 7 in the Appendix.
In the 1-shot scenario, HyperMAML yields top performing results (66.11%) on the CUB dataset, inferior only to FEAT (Ye et al., 2021) (68.87%). On the mini-ImageNet dataset, HyperMAML is among the five best methods, achieving the accuracy of 53.41%. In the 5-shot setting, HyperMAML is among the top-3 best performing models achieving both on the CUB and mini-ImageNet datasets,
1An anonymized version of our code is available at https://anonymous.4open.science/r/ few-shot-hypernets-public-DB4F. We shall release the code with our experiments after the end of the review period.
achieving 78.89% and 68.76% accuracy, with FEAT (Ye et al., 2021) and HyperShot (Sendera et al., 2022) outperforming it by a small margin.
The obtained results show that HyperMAML achieves performance better or comparable to a variety of Few-Shot learning methods, in particular MAML (Finn et al., 2017), as well as techniques which derive from it (Antoniou et al., 2018; Rajeswaran et al., 2019; Fan et al., 2021; Yoon et al., 2018; Ye & Chao, 2021).
4.2 CROSS-DOMAIN ADAPTATION
In the cross-domain adaptation setting, the model is evaluated on tasks coming from a different distribution than the one it had been trained on. Therefore, such a task is more challenging than standard classification and is a plausible indicator of a model’s ability to generalize. In order to benchmark the performance of HyperMAML in cross-domain adaptation, we combine two datasets so that the training fold is drawn from the first dataset and validation and the testing fold – from another one. We report the results in Table 2. In the task of 1-shot Omniglot→EMNIST classification, HyperMAML achieves the second-best result (79.84%), with HyperShot+finetuning (80.65%) being the top one. Compared to the other methods, we observe relatively smaller performance growth as more data becomes available. In the 5-shot Omniglot→EMNIST classification task HyperMAML yields comparable results (89.22%) to HyperShot (Sendera et al., 2022) (90.81%) and DKT (Patacchiola et al., 2020) (90.30%), which are the state-of-the-art in this setting. In the most challenging task of mini-ImageNet→CUB classification, our method performs comparably to baseline methods such as MAML, ProtoNet and Matching Net (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017), particularly in the 1-shot setting.
4.3 PARAMETER UPDATE MAGNITUDE
Next, we consider the ability of HyperMAML to produce the correct weight updates. One of the drawbacks of MAML is that, in practice, gradient updates change weights very slowly, especially when meta tasks require completely different weights (see Fig. 3). On the other hand, Hypernetworks can produce significant updates. To verify such behaviour on a real, non-trivial dataset, we calculate the norm of classifier weight updates in MAML and HyperMAML trained for Omniglot→EMNIST classification and report the results in Table 3.
As we can see, HyperMAML produces larger updates, which, combined with higher accuracy than MAML (see Table 2), suggests faster and more accurate convergence.
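As a rough illustration of the quantity reported in Table 3, the update norm can be computed as below; the function is our sketch and assumes that both adaptation procedures expose the universal and the task-adapted classifier parameters.

import torch

def classifier_update_norm(theta, theta_adapted):
    # theta / theta_adapted: lists of tensors with the universal and the task-adapted
    # classifier parameters (e.g. weight and bias of the final linear layer)
    diff = torch.cat([(a - b).flatten() for a, b in zip(theta_adapted, theta)])
    return diff.norm(p=2).item()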
In the case of classical MAML, there exist a few modifications of the update procedure (inner loop), such as MAML++ (Antoniou et al., 2018) or Meta-SGD (Li et al., 2017). However, such modifications are needed in classical MAML because a few gradient updates do not always guarantee convergence. In contrast, HyperMAML offers a possible solution to this problem, with an update that is not generated directly by gradient descent, but rather by a forward pass of a Hypernetwork.
4.4 COMPUTATIONAL EFFICIENCY
Finally, we verify the hypothesis that HyperMAML offers increased computational efficiency compared to MAML. To this end, we measure the time needed to process the entire Omniglot→EMNIST test dataset (600 tasks in total) by MAML with different numbers of gradient steps and by HyperMAML, and report the results in Table 4 (and, for the other datasets, in Table 8 in the Appendix).
We find that processing the test data with HyperMAML takes approximately the same time as using MAML with just 2 gradient updates. We also note that even given the budget of 100 gradient updates, MAML never matches the accuracy achieved by a single update generated by HyperMAML.
5 CONCLUSIONS
In this work, we introduced HyperMAML – a novel Meta-Learning algorithm strongly motivated by MAML (Finn et al., 2017). The crucial difference between the two methods lies in the fact that in HyperMAML the update of the model is given not by gradient optimization, but by a trainable Hypernetwork. Consequently, HyperMAML is more computationally efficient than MAML, as during adaptation it performs only a single parameter update. Moreover, HyperMAML is not only more biologically plausible, as it does not use backpropagation during adaptation, but it also adapts more easily to different settings and has fewer hyperparameters. Our experiments show that HyperMAML outperforms the classical MAML in a number of standard Few-Shot learning benchmarks and achieves results better than or comparable to various other state-of-the-art methods in most cases.
Our results indicate that Few-Shot weight adaptation can be performed directly, without calculating the actual loss or gradients – a more biologically plausible learning method (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Moreover, HyperMAML adapts to new tasks more quickly than MAML, which needs many gradient updates to adapt. Thus, HyperMAML is a step toward more efficient and environment-friendly Meta-Learning techniques.
B ABLATION STUDY OF MECHANISMS USED IN HYPERMAML
In this section, we present two mechanisms we utilize when training HyperMAML.
Switching mechanism is a smooth transition between training the convolutional encoder with the MAML objective and with the HyperMAML objective. We consider the MAML warm-up as the starting point of the training loop and then smoothly move towards HyperMAML training. During training, we define two "milestone" epochs (see Section E.3), between which the transition occurs. During the transition, we continuously decrease the participation of the MAML objective in the training process: the MAML loss is multiplied by p, which decreases from 1.0 to 0.0 over a given number of epochs, while the HyperMAML loss is multiplied by 1 − p. Our motivation for this mechanism is to train a better universal model by optimizing the MAML objective during the warm-up part of the training, and then to switch to the HyperMAML objective gradually.
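A minimal sketch of this schedule is given below; the milestone arguments are placeholders (the actual values are listed in Section E.3), and the exact loss-mixing code used in the experiments may differ.

def maml_loss_weight(epoch, switch_start, switch_end):
    # p decays linearly from 1.0 to 0.0 between the two milestone epochs
    if epoch <= switch_start:
        return 1.0
    if epoch >= switch_end:
        return 0.0
    return 1.0 - (epoch - switch_start) / float(switch_end - switch_start)

# inside the training loop:
# p = maml_loss_weight(epoch, switch_start, switch_end)
# total_loss = p * maml_loss + (1.0 - p) * hypermaml_loss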
Enhancement of the embeddings is another mechanism in our framework. When preparing the input to the Hypernetwork from the support set, we first obtain the support embeddings from the encoder. Then, we forward those embeddings through the universal classifier and obtain its predictions for the support examples. We concatenate those predictions, as well as the respective ground-truth labels, to the support embeddings. The whole process is visualized in Figure 4. Our motivation for performing this operation is to give the Hypernetwork information about the current decision of the classifier, so that it can better estimate the updates to its weights. We note that even though the Hypernetwork generates the weights of the classifier based on the enhanced support embeddings, the generated downstream classifier does not use the enhancements when processing the query set. Instead, it only processes the raw embeddings obtained from the encoder (see Figure 2 from the main paper).
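A sketch of how the enhanced input to the Hypernetwork could be assembled is shown below; the variable names are ours, and whether raw logits or softmax probabilities are concatenated is an assumption.

import torch
import torch.nn.functional as F

def enhance_support_embeddings(encoder, classifier, support_x, support_y, n_way):
    e_s = encoder(support_x)                      # (N*K, emb_dim); gradients still reach the encoder
    with torch.no_grad():                         # the frozen classifier is only queried here
        y_hat = classifier(e_s).softmax(dim=-1)   # (N*K, n_way) current predictions
    y_onehot = F.one_hot(support_y, num_classes=n_way).float()
    return torch.cat([e_s, y_hat, y_onehot], dim=-1)   # enhanced embeddings fed to the Hypernetwork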
We perform an ablation study of both mechanisms on the 5-shot, 5-way Omniglot→EMNIST classification task. The results, reported in Table 5, indicate that both mechanisms utilized individually improve the performance of HyperMAML, and that the combination of the two yields the best results.
C FULL CLASSIFICATION RESULTS
We provide an expanded version of Table 1 from the main paper, with numerous additional baseline methods – see Table 6. We also report the performance of HyperMAML, as well as several baseline methods, trained with ResNet-12 (He et al., 2015) as the backbone on the mini-ImageNet dataset in Table 7.
D COMPUTATIONAL EFFICIENCY – RESULTS ON NATURAL IMAGE DATASETS
In this section, we perform experiments similar to those described in Section 4.4 of the main paper and measure the inference time of HyperMAML and of MAML with different numbers of gradient steps on the CUB and mini-ImageNet datasets. We summarize the results in Table 8. Similarly to the benchmark performed on the smaller images of the Omniglot→EMNIST dataset, the inference time of HyperMAML is comparable to that of MAML with two gradient steps. Likewise, MAML never achieves higher accuracy than HyperMAML.
As opposed to Section 4.4 of the main paper, we report the accuracies of MAML only up to seven gradient steps. This is due to the amount of GPU memory required for making more steps in the MAML implementation we used (Chen et al., 2019). We also note that the accuracies of MAML reported here are significantly higher than the ones reported in Table 1 of the main paper. The MAML accuracies from that table were previously reported in the literature (Patacchiola et al., 2020), whereas the results in Table 8 have been obtained with the MAML implementation in our codebase (Chen et al., 2019).
E TRAINING DETAILS
In this section, we present details of the training and architecture overview.
E.1 ARCHITECTURE OVERVIEW
The architecture of HyperMAML consists of the following parts (as outlined in the Figure 2 in the main body of the work):
Encoder For each experiment described in the main body of this work, we utilize a shallow convolutional encoder (feature extractor), commonly used in the literature (Finn et al., 2017; Chen et al., 2019; Patacchiola et al., 2020). This encoder consists of four convolutional layers, each
consisting of a convolution, batch normalization, and a ReLU nonlinearity. Each of the convolutional layers has an input and output size of 64, except for the first layer, where the input size is equal to the number of image channels. We also apply max-pooling after each convolutional block, which halves the resolution of the processed feature maps. The output of the encoder is flattened before being processed by the next layers.
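A PyTorch sketch of this standard Conv-4 backbone, reconstructed from the description above (kernel size and padding are assumptions), is:

import torch.nn as nn

def conv_block(in_channels, out_channels=64):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                      # halves the spatial resolution
    )

class Conv4Encoder(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(in_channels), conv_block(64), conv_block(64), conv_block(64)
        )

    def forward(self, x):
        return self.blocks(x).flatten(start_dim=1)   # flattened feature vector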
For the mini-ImageNet dataset, we additionally test the performance of HyperMAML with a larger backbone – namely, ResNet-12 (He et al., 2015).
Hypernetwork The Hypernetwork transforms the enhanced embeddings of the support examples of each class in a task into the updates for the portion of classifier weights predicting that class. It consists of two or three fully-connected layers with ReLU activation function between each consecutive pair of layers. In the hypernetwork, we use a hidden size of 256 or 512.
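A simplified per-class Hypernetwork head consistent with this description might look as follows; the hidden size and the handling of the bias update are assumptions.

import torch.nn as nn

class HyperHead(nn.Module):
    # Maps one class's enhanced support embedding to the update of that class's classifier row.
    def __init__(self, in_dim, emb_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim + 1),   # update of one weight row plus its bias
        )

    def forward(self, z):                     # z: (n_way, in_dim), one enhanced embedding per class
        out = self.net(z)
        return out[:, :-1], out[:, -1]        # (n_way, emb_dim) weight updates, (n_way,) bias updates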
Classifier The universal classifier is a single fully-connected layer with the input size equal to the encoder embedding size and the output size equal to the number of classes. When using the embedding enhancement strategy, we freeze the classifier so that we only extract information about its current behavior; that is, we do not calculate gradients for the classifier in this step of the forward pass. Instead, gradient calculation for the classifier takes place during the classification of the query data.
E.2 TRAINING DETAILS
In all of the experiments described in the main body of this work, we utilize the switching and the embedding enhancement mechanisms. During training, we use the Adam optimizer (Kingma & Ba, 2014) and the MultiStepLR learning rate scheduler with a decay factor of 0.3 and a learning rate starting from 0.01 or 0.001. We train HyperMAML for 4000 epochs on all the datasets, save for the simpler Omniglot→EMNIST classification task, where we train for 2048 epochs instead. For the mini-ImageNet experiments, we follow the strategy of using a pre-trained backbone suggested by (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021; Ye & Chao, 2021). More specifically, at the beginning of the training we initialize the Encoder of HyperMAML with the weights of a backbone pretrained for classification of all 64 classes from the mini-ImageNet training set. In practice, for consistency, we use the identical set of pretrained weights as Ye et al. (2021).
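In PyTorch terms, the described optimizer setup corresponds roughly to the following sketch; the milestone epochs are placeholders for the values listed in Section E.3.

import torch

def build_optimizer(encoder, classifier, hypernet, lr=1e-3, milestones=(1000, 2000, 3000)):
    # Adam over all meta-parameters with a MultiStepLR decay of 0.3, as described above
    params = list(encoder.parameters()) + list(classifier.parameters()) + list(hypernet.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=list(milestones), gamma=0.3)
    return optimizer, scheduler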
E.3 HYPERPARAMETERS
Below, we outline the hyperparameters of architecture and training procedures used in each experiment.
F IMPLEMENTATION DETAILS
We implement HyperMAML using the PyTorch framework (Paszke et al., 2019). We shall release the code publicly after the end of the review period. Each experiment described in this work was run on a single NVIDIA RTX 2080 GPU. | 1. What is the focus and contribution of the paper regarding the MAML framework?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other works like BOIL, Vop, and Samovar?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
To resolve a weight-update limitation of the MAML framework, this work proposes HyperMAML, which generates an offset corresponding to a significant update. Unlike the traditional gradient descent in the inner loop, HyperMAML produces the weight offset used in the weight update.
Strengths And Weaknesses
Strength
The problem stated in this work is reasonable and well organized. Estimating the update offset from the encoded features, ground truth, and predicted outputs is interesting. The Hypernetwork can be updated by SGD when optimizing the global parameters in the outer loop.
Weaknesses
A one-step inner loop seems very promising. However, BOIL [1] also utilizes a one-step update with fast feature adaptation. In my opinion, how different is estimating the gradient (offset) from generating the model weights directly? VoP [2] and Samovar [3] directly generate the model with a hyper module. Are there any advantages in this work compared to [2, 3]?
What does “Classical initialization does not allow to update the universal weights and only the Hypernetwork’s parameters are changing” mean? I cannot understand the need for Eq. (7).
[1] Oh, Jaehoon, et al. "BOIL: Towards representation change for few-shot learning." ICLR 2021
[2] Kim, Jangho, et al. "Variational On-the-Fly Personalization." International Conference on Machine Learning. PMLR, 2022.
[3] Iakovleva, E., Verbeek, J., and Alahari, K. Meta-learning with shared amortized variational inference. In The 37th International Conference on Machine Learning, 2020
Clarity, Quality, Novelty And Reproducibility
The main idea is quite novel but it needs to be compared [2] [3] because these two methods also generate the weights. There is a code for reproducibility. |
ICLR | Title
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks
Abstract
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen tasks, based on small amounts of data. One of the most popular and elegant Few-Shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn the general weights of the meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the model’s main limitation lies in the fact that the update procedure is realized by gradient-based optimisation. In consequence, MAML cannot always modify weights to the essential level in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure, which is hard to train in practice, and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML, where the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use for this purpose a trainable Hypernetwork. Consequently, in this framework, the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML outperforms MAML in most cases and performs comparably to other state-of-the-art techniques in a number of standard Few-Shot learning benchmarks.
1 INTRODUCTION
In the typical Few-Shot learning setting, the aim is to adapt to new tasks under the assumption that only a few examples are given. As we know, people typically learn new tasks easily by using only a few training examples. On the contrary, a standard deep neural network must be trained on an extensive amount of data to obtain a similar accuracy. Thus, the aim of Few-Shot learning models is to bring neural networks closer to the human brain’s capabilities. The most famous and, in our opinion, the most elegant approach to Few-Shot learning is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), where the model is trained to adapt universal weights to new Few-Shot learning tasks quickly. It seems that the brain’s neural networks can adapt to new tasks too, by applying the fact that during the process of evolution, some of its parts have developed universal weights which are easily adaptable to typical tasks we encounter in real life. Thus, the idea behind MAML gives us a possible insight into the working of the brain.
The fascinating factor of human intelligence is that the human learning process, although still not understood, is clearly not based on the gradient descent algorithm, as we cannot in general backpropagate the information (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Thus, from the biological point of view, the main limitation of MAML is the fact that it uses the gradient descent method for weight updates. The main research problem that we set for ourselves is whether one can modify MAML to be more biologically feasible, i.e. keep its ability to find universal weight but remove the necessity of using gradient-based update methods.
We solve this problem by constructing HyperMAML, a model which replaces the gradient optimization in the update of weights by trainable update procedure with the use of the Hypernetwork paradigm. Hypernetworks, introduced in (Ha et al., 2016) are defined as neural models that generate weights for a separate target network solving a specific task. In our model, HyperMAML, the Hypernetwork aggregates the information from the support set and produces an update to the main model. Thanks to such an approach, we can create various types of updates that are not limited to a
few gradient steps. Moreover, hypernetworks have previously been used as models for biological information processing, which suggests that such models are more biologically plausible than the gradient-based techniques such as MAML (Segovia-Juarez & Conrad, 1999). In practice, MAML works when there exist universal weights that are close enough to the optimal solution for each task. To visualize such a situation, we present a simple 2D example, where a single gradient update fails to sufficiently adapt the model to a given task – see Fig. 3. We cannot effectively switch weight in one gradient step. On the other hand, when we use many gradient steps, we obtain a complex optimization procedure that uses an inner and outer loop. Such a procedure can be seen as second-order optimization, which is complex to train (Finn et al., 2017). Contrary to MAML we do not need an inner loop in the optimization procedure, and consequently, we do not have second-order optimization. We also reduce the number of hyperparameters, which are used in the inner loop of MAML approach, which would need to be tuned in a grid search. As a result, our algorithm obtains better results than the classical MAML algorithm and produces results comparable to other state-of-the-art algorithms.
The contributions of our work can be summarized as follows:
• We introduce HyperMAML, a novel approach to the Few-Shot learning problem by aggregating information from the support set and directly producing weights updates.
• In HyperMAML, we do not use loss calculation or gradient backpropagation for the update to the new task, thus making the model more biologically feasible and computationally efficient.
• We significantly increase the update ability compared to the classical MAML algorithm, as evidenced by the increased accuracy in numerous benchmarks we perform.
2 RELATED WORK
The problem of Meta-Learning and Few-Shot learning (Hospedales et al., 2020; Schmidhuber, 1992; Bengio et al., 1992) has received a growing amount of attention from the scientific community over the recent years, with the abundance of methods emerging as a result. The idea of HyperMAML is influenced by two kinds of such methods:
Model-based methods aim to adapt to novel tasks quickly by utilizing mechanisms such as memory (Ravi & Larochelle, 2017; Santoro et al., 2016; Mishra et al., 2018; Zhen et al., 2020), Gaussian Processes (Rasmussen, 2003; Patacchiola et al., 2020; Wang et al., 2021; Sendera et al., 2021), or generating fast weights based on the support set with set-to-set architectures (Qiao et al., 2017; Bauer et al., 2017; Ye et al., 2021; Zhmoginov et al., 2022). Other approaches combine weight generators with gradient-based optimizers by choosing target weights from a set of templates (Zhao et al., 2020) or optimizing low-dimensional embeddings which condition the target weight generator (Rusu et al., 2019). The fast weights approaches can be interpreted as using Hypernetworks (Ha et al., 2016) – models which learn to generate the parameters of neural networks performing the designated tasks.
Similarly, HyperMAML utilizes a Hypernetwork to generate weights updates for performing specific tasks. The key difference is that in HyperMAML, the Hypernetwork is not the sole source of model weights. Instead, following (Finn et al., 2017), HyperMAML maintains a set of universal weights and uses the hypernetwork to generate the updates to those weights for novel tasks.
Optimization-based methods, such as MetaOptNet (Lee et al., 2019) are based on the idea of an optimization process over the support set within the Meta-Learning framework. Arguably, the most popular of this family of methods is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which inspired a multitude of research and numerous extensions to the original algorithm. This includes various techniques for stabilizing its training and improving performance, such as Multi-Step Loss Optimization, and scheduling the learning rate of the meta-optimizer (Antoniou et al., 2018) , using the Bayesian variant of MAML (Yoon et al., 2018), or making MAML permutation-invariant Ye & Chao (2021).
Due to a need for calculating second-order derivatives when computing the gradient of the metatraining loss, training the classical MAML introduces a significant computational overhead. The authors show that in practice the second-order derivatives can be omitted at the cost of small gradient estimation error and minimally reduced accuracy of the model (Finn et al., 2017; Nichol et al., 2018). Methods such as iMAML and Sign-MAML propose to solve this issue with implicit gradients or Sign-SGD optimization (Rajeswaran et al., 2019; Fan et al., 2021). The optimization process can also be improved by training not only the base initialization of the model but also the optimizer itself – namely, training a neural network that transforms gradients calculated w.r.t. loss of the support set predictions into weight updates (Munkhdalai & Yu, 2017; Munkhdalai et al., 2018; Li et al., 2017; Rajasegaran et al., 2020).
HyperMAML shares a key characteristic with the optimization-based methods – namely, it also utilizes a base set of weights, which are updated to obtain a model fit for a given task. The key difference between HyperMAML and MAML is that while MAML adapts to novel tasks through multiple steps of gradient-based optimization, HyperMAML generates the updates in a single step using a Hypernetwork. This makes HyperMAML more similar to methods like (Li et al., 2017; Munkhdalai & Yu, 2017), which generate weight updates through trained meta-optimizers. However, contrary to those approaches, in HyperMAML the Hypernetwork predicts the weight updates based on (i) latent representation of the support set, (ii) predictions of the base model for the support set, (iii) ground-truth labels of the support examples (see Fig. 2). Thus HyperMAML does not require calculating either the loss function or its gradients during generating of the task-specific weight updates, making it more computationally efficient.
3 HYPERMAML: HYPERNETWORK FOR FEW-SHOT LEARNING
In this section, we present our HyperMAML model for Few-Shot learning. First, we start by presenting background and notations for Few-Shot learning. Then we describe how the MAML algorithm works. Finally, we present HyperMAML, which can be understood as an extension of the classical MAML.
Algorithm 1 MAML – Model-Agnostic Meta-Learning (Finn et al., 2017)
Require: D = {T_n}_{n=1}^{N}: set of training tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:    Sample a batch of tasks B from D
4:    for each task T_i = {S_i, Q_i} from batch B do
5:       Evaluate ∇_θ L_{S_i}(f_θ) with respect to the examples from S_i, with the loss given by Eq. 2
6:       Compute the adapted parameters θ'_i with gradient descent using Eq. 1
7:    end for
8:    Update the global parameters θ of the model using Eq. 4
3.1 BACKGROUND
The terminology describing the Few-Shot learning setup is inconsistent due to the colliding definitions used in the literature. Here, we use the nomenclature derived from the Meta-Learning literature, which is the most prevalent at the time of writing. Let S = {(x_l, y_l)}_{l=1}^{L} be a support set containing input-output pairs, with L examples distributed equally among the classes. In the one-shot scenario, each class is represented by a single example and L = K, where K is the number of classes considered in the given task. In Few-Shot scenarios, each class usually has from 2 to 5 representatives in the support set S. Let Q = {(x_m, y_m)}_{m=1}^{M} be a query set (sometimes referred to in the literature as a target set), with M examples, where M is typically one order of magnitude greater than K. For clarity of notation, the support and query sets are grouped into a task T = {S, Q}. During the training stage, the models for Few-Shot applications are fed with randomly selected examples from the training set D = {T_n}_{n=1}^{N}, defined as a collection of such tasks.
During the inference stage, we consider a task T_* = {S_*, X_*}, where S_* is a support set with known class labels for the given task, and X_* is a set of query (unlabeled) inputs. The goal is to predict the class labels for the query inputs x ∈ X_*, given the support set S_* and using the model trained on D.
Model-Agnostic Meta-Learning (MAML) is one of the current standard algorithms for Few-Shot learning, which learns the parameters of a model so that it can adapt to a new task in a few gradient steps.
We consider a model represented by a function f_θ with parameters θ. In Few-Shot problems, f_θ models the discriminative probabilities of the classes, f_θ(x) = p(y|x, θ). The standard MAML model is trained with the procedure given by Algorithm 1. In each training iteration, a batch of tasks B is sampled from D. Further, for each task T_i = {S_i, Q_i} from B, MAML adapts the model's parameters θ'_i that are specific to the given task. The actual values of θ'_i are calculated using one or more gradient descent updates. In the simplest case of one gradient iteration, the parameters are updated as follows:
θ'_i = θ − α∇_θ L_{S_i}(f_θ),   (1)
where α is the step size, which may be fixed as a hyperparameter or meta-learned, and the loss function L_Z for a set of observations Z is, in the few-shot scenario, a simple cross-entropy:
L_Z(f_θ) = Σ_{(x_{i,l}, y_{i,l}) ∈ Z} Σ_{k=1}^{K} −y^k_{i,l} log f_{θ,k}(x_{i,l}),   (2)
where f_{θ,k}(x_{i,l}) denotes the k-th output of the model f_θ for a given input x_{i,l}, and y_{i,l} is the corresponding class label in one-hot encoding. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. After calculating the task-specific updates θ'_i, the general model parameters are trained by optimizing for the performance
of f_{θ'_i} with respect to θ across the tasks from batch B. More concretely, the meta-objective used to train the general parameters of the model is as follows:
L_{MAML}(f_θ) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ'_i}) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ − α∇_θ L_{S_i}(f_θ)}),   (3)
Note that the meta-optimization is performed over the model parameters θ, whereas the objective is computed using the updated model parameters θ'. In effect, MAML aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.
The meta-optimization across tasks is performed via stochastic gradient descent (SGD) such that the model parameters θ are updated as follows:
θ ← θ − β∇_θ L_{MAML}(f_θ),   (4)
where β is the meta step size.
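For reference, one MAML meta-training step with a single inner update (Eqs. 1, 3 and 4) can be sketched functionally in PyTorch 2.x as below; this is an illustration, not the implementation used in the experiments, and it assumes meta_opt was built over model.parameters().

import torch
import torch.nn.functional as F
from torch.func import functional_call

def maml_step(model, meta_opt, task, inner_lr=0.01):
    support_x, support_y, query_x, query_y = task
    params = {name: p for name, p in model.named_parameters()}
    # inner loop (Eq. 1): adapt on the support set, keeping the graph for the second-order term
    inner_loss = F.cross_entropy(functional_call(model, params, (support_x,)), support_y)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}
    # outer loop (Eqs. 3 and 4): evaluate the adapted parameters on the query set
    outer_loss = F.cross_entropy(functional_call(model, adapted, (query_x,)), query_y)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
    return outer_loss.item()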
During the inference stage, in order to perform predictions for a newly observed task T_* = {S_*, X_*}, the loss function L_{S_*}(f_θ) is calculated first using Eq. 2. Next, the parameters θ'_* for task T_* are calculated from Eq. 1. The final predictions for the query examples X_* are made by the model f_{θ'_*}: for a selected query example x_q ∈ X_*, we have p(y|x_q, θ'_*) = f_{θ'_*}(x_q).
The main limitation of the approach is that it produces general weights for all possible tasks, while the adjustment is performed via a gradient-based procedure on the support set. For some non-trivial, challenging tasks, the dedicated parameters θ'_* may be located far from the base weights θ. Consequently, the adaptation procedure would require significantly more gradient steps, the training may be unstable, and the model will tend to overfit to the support set. To overcome this limitation, we propose to replace the gradient-based adaptation with the Hypernetwork approach, where the update step is produced by an additional deep model that extracts information from the support set (see the toy example in Fig. 3).
3.2 HYPERMAML - OVERVIEW
We introduce our HyperMAML – a model that utilizes Hypernetworks for modeling weight updates in the classical MAML algorithm. The main idea of the proposed updating scenario is to use information extracted from the support examples, together with the predictions given by the universal weights, to find optimal updates dedicated to a given task. Thanks to this approach, we can switch the classifier's parameters between completely different tasks based on the support set and the current predictions of the universal weights.
The architecture of HyperMAML is provided in Fig. 2. In this section, we present the model for the one-shot scenario and further discuss how to extend it to Few-Shot problems. We aim at predicting the class distribution p(y|x_q, S) for a given single query example x_q and the set of support examples S. Following the idea from MAML, we consider the parameterized function f_θ that models the discriminative distribution over the classes. In addition, our architecture distinguishes a trainable encoding network E(·) that transforms the data into a low-dimensional representation. We calculate p(y|x_q, θ') = f_{θ'}(e_q), where e_q is the query example x_q transformed using the encoder E(·), and θ' represents the updated parameters for the considered task, θ' = θ + Δθ. In contrast to the gradient-based adaptation step described by Eq. 1, we propose to predict Δθ with a hypernetwork, directly from the support set.
Each of the inputs from the support set X_S is transformed by the encoder E(·) in order to obtain a low-dimensional matrix of embeddings E_S = [e_{S,1}, ..., e_{S,K}]^T. Next, the corresponding class labels of the support examples, Y_S = [y_{S,1}, ..., y_{S,K}]^T, are concatenated to the corresponding embeddings stored in the rows of the matrix E_S. In addition, we also calculate the predictions of the general model for the support examples, f_θ(E_S) = Ŷ_S, and concatenate them to E_S as well. The matrix of transformed support inputs E_S, together with the true support labels Y_S and the corresponding predictions Ŷ_S returned by the general model, is delivered as input to the hypernetwork H(·), which returns the update Δθ. The hypernetwork consists of fully-connected layers with ReLU activations – see Section E.1 in the Appendix for details. The parameters of the final target model are calculated with the following formula:
θ' = θ + Δθ = θ + H(E_S, Ŷ_S, Y_S).   (5)
In practice, the Hypernetwork observes the support examples with their true labels and decides how the global parameters θ should be adjusted to the considered task. In addition, the predictions of the global model f_θ are also delivered to the Hypernetwork, so that it can identify misclassifications and try to correct them during the update step.
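Putting Eq. 5 together with the pipeline of Fig. 2, a single adaptation step could be sketched as follows. This is our paraphrase of the method for the one-shot case: it assumes a linear classifier, a hypernetwork returning per-class weight and bias updates, and softmax predictions; none of these names come from the authors' code.

import torch
import torch.nn.functional as F

def hypermaml_adapt(encoder, classifier, hypernet, support_x, support_y, n_way):
    # returns the task-adapted classifier parameters theta' = theta + H(E_S, Y_hat_S, Y_S)  (Eq. 5)
    e_s = encoder(support_x)
    with torch.no_grad():
        y_hat = classifier(e_s).softmax(dim=-1)          # predictions of the universal classifier
    y_onehot = F.one_hot(support_y, num_classes=n_way).float()
    z = torch.cat([e_s, y_hat, y_onehot], dim=-1)        # enhanced embeddings, one row per class (1-shot)
    delta_w, delta_b = hypernet(z)                       # per-class weight and bias updates
    return classifier.weight + delta_w, classifier.bias + delta_b

def hypermaml_predict(encoder, w_adapted, b_adapted, query_x):
    return F.linear(encoder(query_x), w_adapted, b_adapted)   # query logits from the adapted classifier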
3.3 HYPERMAML - TRAINING
For training the model, we assume that the encoder E(·) is parametrized by γ, E := E_γ, and the hypernetwork H(·) by η, H := H_η. The training procedure is described in Algorithm 2. First, we sample a batch of tasks B from the given dataset D. Next, for each task T_i in batch B, we calculate the update Δθ_i using the support set S_i and obtain the updated parameters θ' according to the rule given by Eq. 5. Finally, the objective used to train the parameters of the system is calculated using the query sets of the tasks in the batch:
L_{HyperMAML}(f_θ) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ'}) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ + Δθ}),   (6)
where L_{Q_i}(f_{θ'}) is given by Eq. 2. The parameters of the encoder and of the hypernetwork, together with the global parameters θ, constitute the meta-parameters of the system, and they are updated with stochastic gradient descent (SGD) by optimizing L_{HyperMAML}(f_θ).
Adaptation to the Few-Shot scenario. The proposed method can be easily extended to Few-Shot scenarios following the aggregation technique from (Sendera et al., 2022). In our approach, we aggregate the embeddings of the support examples from the same class using the mean operation. In addition, the corresponding predictions within the class are also averaged and concatenated to the averaged per-class embedding together with the true class label, and further processed by the hypernetwork.
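In code, this aggregation reduces the per-class groups of support embeddings and predictions to their means before they enter the Hypernetwork (illustrative sketch):

import torch

def aggregate_per_class(e_s, y_hat, support_y, n_way):
    # averages embeddings and predictions of support examples belonging to the same class
    e_mean = torch.stack([e_s[support_y == c].mean(dim=0) for c in range(n_way)])
    p_mean = torch.stack([y_hat[support_y == c].mean(dim=0) for c in range(n_way)])
    return e_mean, p_mean     # shapes: (n_way, emb_dim) and (n_way, n_way)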
Warming-up universal weights In practice, it is not trivial to initialize the universal weights of HyperMAML. With a classical random initialization, the universal weights are barely updated during training and only the Hypernetwork's parameters change. To solve this problem, we use a smooth transition from the gradient-based update to the Hypernetwork update:
Algorithm 2 HyperMAML
Require: D = {T_n}_{n=1}^{N}: set of training tasks
Require: β: step size hyperparameter
1: randomly initialize θ, γ, η
2: while not done do
3:    Sample a batch of tasks B from D
4:    for each task T_i = {S_i, Q_i} from batch B do
5:       Compute the adapted parameters θ'_i from S_i using Eq. 5
6:    end for
7:    Calculate the loss L_{HyperMAML}(f_θ) given by Eq. 6
8:    θ ← θ − β∇_θ L_{HyperMAML}(f_θ)    ▷ Update the global target parameters θ
9:    η ← η − β∇_η L_{HyperMAML}(f_θ)    ▷ Update the parameters of the hypernetwork H_η
10:   γ ← γ − β∇_γ L_{HyperMAML}(f_θ)    ▷ Update the parameters of the Encoder E_γ
θ' = θ + λ · H(E_S, Ŷ_S, Y_S) − (1 − λ) · α∇_θ L_{S_i}(f_θ),   (7)
where λ increases from zero to one over the first few training epochs.
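A sketch of the warm-up update of Eq. 7, applied per parameter tensor (illustrative; the λ schedule mirrors the switching mechanism described in Section B of the Appendix):

def warmup_update(theta, hyper_update, grad, lam, alpha=0.01):
    # theta' = theta + lam * H(...) - (1 - lam) * alpha * grad   (Eq. 7)
    return theta + lam * hyper_update - (1.0 - lam) * alpha * grad

# lam is increased from 0 to 1 over the first few epochs, e.g.
# lam = min(1.0, max(0.0, (epoch - warmup_start) / (warmup_end - warmup_start)))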
4 EXPERIMENTS
In the typical Few-Shot learning setting, making a valuable and fair comparison between proposed models is often complicated because of significant differences in the architectures and implementations of known methods. In order to limit the influence of deeper backbone (feature extractor) architectures, we follow the unified procedure proposed by Chen et al. (2019)¹.
| 1. What is the focus and contribution of the paper on few-shot learning?
2. What are the strengths and weaknesses of the proposed HyperMAML method compared to other baselines?
3. Do you have any concerns regarding the necessity and novelty of HyperMAML?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper or its contributions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes HyperMAML, a few-shot learning method that uses a hypernetwork to generate the model adaptation as an alternative to MAML’s gradient descent strategy. By generating the updates in a single forward pass, HyperMAML offers the promise in being more computationally efficient. Standard FSL experiments with Conv4 encoder on CUB and mini-ImageNet and cross-domain experiments on Omniglot -> EMNIST and mini-ImageNet -> CUB show that HyperMAML is comparable to many other FSL baselines.
Strengths And Weaknesses
Strengths
S1. As with other hypernetwork-based approaches to FSL, HyperMAML’s adaptation of the weights to a new task involves a forward pass, without needing any optimization. This makes HyperMAML faster at task adaptation compared to the original MAML formulation, which learns a weight initialization, but requires several steps of gradient updates for adaptation. This advantage is especially clear in Section 4.4 and Table 4.
S2. Experiments are set-up in a fairly standard way, with plenty of relevant (though slightly older) baselines. I would have liked to see more experiments with stronger backbones (which the authors explicitly avoided), but using the standard Conv4 at least makes it easy to have a sense of relative performance. HyperMAML does seem to outperform MAML, but unclear how much of this has to do with the encoder.
S3. The writing is generally easy to understand, though there a number of errors throughout the paper (see Miscellaneous for non-exhaustive list). A thorough round of edits is necessary. In particular, there is some imprecise terminology and notation being used.
Weaknesses
W1. Why is HyperMAML necessary? Instead of combining hypernetworks with MAML, why not just use the hypernetwork to directly generate the weights, without the weight prior? If the hypernetwork can learn to generate the update, it would seem like the hypernetwork could also learn a “bias” (i.e. the “universal weights”) while generating the new weights directly.
W2. Motivation: The authors criticize MAML’s second-order optimization scheme as complicated, but first-order versions of MAML (e.g. FOMAML [1], REPTILE [2]) have been previously explored and found to be effective. As a result, this paper’s criticism of MAML’s computational overhead already has a number of previously proposed mitigation strategies.
W3. If I understand correctly, HyperMAML doesn’t generate updates for the encoder E, likely due to difficulty of generating weight updates of such large dimensionality. As such, HyperMAML isn’t a drop-in replacement for MAML. Additionally, this again evokes questions of what HyperMAML adds over normal hypernetwork-based approaches (W1).
W4. Empirical results. The few-shot accuracy of HyperMAML in Tables 1 and 2 lags behind several of the baselines. While I’m not the type of reviewer who insists that a paper must have SotA results on everything, the results of HyperMAML aren’t very convincing, similar to other MAML-like methods, and not comparable with more recent SotA results, which aren’t included in these tables.
Miscellaneous:
Introduction/Conclusion: In general, I recommend avoiding over-indexing on drawing connections between neural networks and human intelligence. The analogies between neural network algorithms and neurons aren’t direct, and the methods proposed here are only tenuously connected to biological phenomena, which aren’t well-understood yet. Unless the goal is explicitly trying to understand biological intelligence, there is no need actively pursue “biological feasibility” in an ML paper.
Many citations missing the conference/journal
There are large spaces between the bold paragraph headers and the text.
Alg 1: “eq. equation 1”, “eq. equation 4”
Starting a sentence with “Whereas” as done in Section 3.1 technically results in a sentence fragment.
Sec 3.1 terminology: The definition of the meta-eval tasks T_* = {S_*, X_*} could be defined better, as S_* and X_* aren’t mathematically defined. Also, as a nitpick, we typically refer to tasks T and T_* as belong to meta-train vs meta-val/meta-test to avoid confusion with the training and inference that occurs within each task in both T and T_*. Someone less familiar with meta-learning may get confused by the terminology here.
“In the Few-Shot problems” => “In Few-Shot problems,” though “few-shot” is probably more appropriate than “Few-Shot” throughout this paper.
Between Eqs 1 + 2: “is” is used twice
Eq 2: What is x_i,j? How does that differ from x_i,l? Also y^k_i,l doesn’t match the description of y_i,l as a one-hot encoding.
“eq. equation 1” and “eq. equation 2” appear again in Section 3.1
Fig 2: The support set S isn’t script/mathcal mode, which is the way it was introduced in the text.
Table 3: I understand what the authors are trying to show, but without accuracy, update magnitude doesn’t necessarily mean much. MAML’s update magnitude can be arbitrarily scaled by adjusting the inner loop step size.
Table 4: This comparison is nice, but it is somewhat inhibited by the fact that MAML can’t match HyperMAML in performance, even with an unbounded number of steps. I suspect this might be due to MAML also updating the encoder? If so, a fairer comparison may be only using MAML for the final classifier. Moreover, the accuracy of MAML appears to have plateaued very early: one step already leads to an accuracy very close to the final value. At one step, MAML is actually faster than HyperMAML.
[1] Finn et al. Model-agnostic meta-learning for fast adaptation of deep networks. 2017.
[2] Nichol et al. On first-order meta-learning algorithms. 2018
Clarity, Quality, Novelty And Reproducibility
As mentioned above, I find the paper fairly easy to understand, though another thorough round of edits is required before the writing is publication quality.
As I expressed above, I’m not sold on HyperMAML’s novelty. While I’m not aware of any prior works explicitly seeking to utilize hypernetworks for reducing MAML’s update steps, in practice, it seems to me that HyperMAML reduces to many pre-existing works. In particular, HyperMAML seems equivalent to learning a bias term for a standard hypernetwork-based method for FSL. The implementation of HyperMAML is also very reminiscent of FEAT, in that it uses an encoder to embed the support set, then takes the mean, then uses another learned function (FEAT: various set-to-set functions; HyperMAML: the MLP hypernetwork) to produce the classifier. I find however that FEAT’s approach more general, as it’s use of set-to-set functions generalizes to arbitrary number of classes, while implementing HyperMAML’s hypernetwork with an MLP means it must have a fixed number of classes.
Anonymous link to code is provided for reproduction. |
ICLR | Title
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks
Abstract
The aim of Few-Shot learning methods is to train models which can easily adapt to previously unseen tasks, based on small amounts of data. One of the most popular and elegant Few-Shot learning approaches is Model-Agnostic Meta-Learning (MAML). The main idea behind this method is to learn the general weights of the meta-model, which are further adapted to specific problems in a small number of gradient steps. However, the model’s main limitation lies in the fact that the update procedure is realized by gradient-based optimisation. In consequence, MAML cannot always modify weights to the essential level in one or even a few gradient iterations. On the other hand, using many gradient steps results in a complex and time-consuming optimization procedure, which is hard to train in practice, and may lead to overfitting. In this paper, we propose HyperMAML, a novel generalization of MAML, where the training of the update procedure is also part of the model. Namely, in HyperMAML, instead of updating the weights with gradient descent, we use for this purpose a trainable Hypernetwork. Consequently, in this framework, the model can generate significant updates whose range is not limited to a fixed number of gradient steps. Experiments show that HyperMAML outperforms MAML in most cases and performs comparably to other state-of-the-art techniques in a number of standard Few-Shot learning benchmarks.
1 INTRODUCTION
In the typical Few-Shot learning setting, the aim is to adapt to new tasks under the assumption that only a few examples are given. As we know, people typically learn new tasks easily by using only a few training examples. On the contrary, a standard deep neural network must be trained on an extensive amount of data to obtain a similar accuracy. Thus, the aim of Few-Shot learning models is to bring neural networks closer to the human brain’s capabilities. The most famous and, in our opinion, the most elegant approach to Few-Shot learning is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), where the model is trained to adapt universal weights to new Few-Shot learning tasks quickly. It seems that the brain’s neural networks can adapt to new tasks too, by applying the fact that during the process of evolution, some of its parts have developed universal weights which are easily adaptable to typical tasks we encounter in real life. Thus, the idea behind MAML gives us a possible insight into the working of the brain.
A fascinating aspect of human intelligence is that the human learning process, although still not understood, is clearly not based on the gradient descent algorithm, as we cannot in general backpropagate the information (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Thus, from the biological point of view, the main limitation of MAML is the fact that it uses the gradient descent method for weight updates. The main research problem that we set for ourselves is whether one can modify MAML to be more biologically feasible, i.e. keep its ability to find universal weights but remove the necessity of using gradient-based update methods.
We solve this problem by constructing HyperMAML, a model which replaces the gradient-based weight update with a trainable update procedure based on the Hypernetwork paradigm. Hypernetworks, introduced in (Ha et al., 2016), are defined as neural models that generate weights for a separate target network solving a specific task. In our model, HyperMAML, the Hypernetwork aggregates the information from the support set and produces an update to the main model. Thanks to such an approach, we can create various types of updates that are not limited to a
few gradient steps. Moreover, hypernetworks have previously been used as models for biological information processing, which suggests that such models are more biologically plausible than gradient-based techniques such as MAML (Segovia-Juarez & Conrad, 1999). In practice, MAML works when there exist universal weights that are close enough to the optimal solution for each task. To visualize such a situation, we present a simple 2D example, where a single gradient update fails to sufficiently adapt the model to a given task – see Fig. 3. We cannot effectively switch the weights in one gradient step. On the other hand, when we use many gradient steps, we obtain a complex optimization procedure that uses an inner and an outer loop. Such a procedure can be seen as second-order optimization, which is complex to train (Finn et al., 2017). Contrary to MAML, we do not need an inner loop in the optimization procedure, and consequently, we do not have second-order optimization. We also reduce the number of hyperparameters, as those used in the inner loop of the MAML approach, which would otherwise need to be tuned in a grid search, are no longer necessary. As a result, our algorithm obtains better results than the classical MAML algorithm and produces results comparable to other state-of-the-art algorithms.
The contributions of our work can be summarized as follows:
• We introduce HyperMAML, a novel approach to the Few-Shot learning problem which aggregates information from the support set and directly produces weight updates.
• In HyperMAML, we do not use loss calculation or gradient backpropagation for the update to the new task, thus making the model more biologically feasible and computationally efficient.
• We significantly increase the update ability compared to the classical MAML algorithm, as evidenced by the increased accuracy in numerous benchmarks we perform.
2 RELATED WORK
The problem of Meta-Learning and Few-Shot learning (Hospedales et al., 2020; Schmidhuber, 1992; Bengio et al., 1992) has received a growing amount of attention from the scientific community over the recent years, with the abundance of methods emerging as a result. The idea of HyperMAML is influenced by two kinds of such methods:
Model-based methods aim to adapt to novel tasks quickly by utilizing mechanisms such as memory (Ravi & Larochelle, 2017; Santoro et al., 2016; Mishra et al., 2018; Zhen et al., 2020), Gaussian Processes (Rasmussen, 2003; Patacchiola et al., 2020; Wang et al., 2021; Sendera et al., 2021), or generating fast weights based on the support set with set-to-set architectures (Qiao et al., 2017; Bauer et al., 2017; Ye et al., 2021; Zhmoginov et al., 2022). Other approaches combine weight generators with gradient-based optimizers by choosing target weights from a set of templates (Zhao et al., 2020) or optimizing low-dimensional embeddings which condition the target weight generator (Rusu et al., 2019). The fast weights approaches can be interpreted as using Hypernetworks (Ha et al., 2016) – models which learn to generate the parameters of neural networks performing the designated tasks.
Similarly, HyperMAML utilizes a Hypernetwork to generate weights updates for performing specific tasks. The key difference is that in HyperMAML, the Hypernetwork is not the sole source of model weights. Instead, following (Finn et al., 2017), HyperMAML maintains a set of universal weights and uses the hypernetwork to generate the updates to those weights for novel tasks.
Optimization-based methods, such as MetaOptNet (Lee et al., 2019), are based on the idea of an optimization process over the support set within the Meta-Learning framework. Arguably, the most popular of this family of methods is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which inspired a multitude of research and numerous extensions to the original algorithm. This includes various techniques for stabilizing its training and improving performance, such as Multi-Step Loss Optimization and scheduling the learning rate of the meta-optimizer (Antoniou et al., 2018), using the Bayesian variant of MAML (Yoon et al., 2018), or making MAML permutation-invariant (Ye & Chao, 2021).
Due to the need for calculating second-order derivatives when computing the gradient of the meta-training loss, training the classical MAML introduces a significant computational overhead. The authors show that in practice the second-order derivatives can be omitted at the cost of a small gradient estimation error and minimally reduced accuracy of the model (Finn et al., 2017; Nichol et al., 2018). Methods such as iMAML and Sign-MAML propose to solve this issue with implicit gradients or Sign-SGD optimization (Rajeswaran et al., 2019; Fan et al., 2021). The optimization process can also be improved by training not only the base initialization of the model but also the optimizer itself – namely, training a neural network that transforms gradients calculated w.r.t. the loss of the support set predictions into weight updates (Munkhdalai & Yu, 2017; Munkhdalai et al., 2018; Li et al., 2017; Rajasegaran et al., 2020).
HyperMAML shares a key characteristic with the optimization-based methods – namely, it also utilizes a base set of weights, which are updated to obtain a model fit for a given task. The key difference between HyperMAML and MAML is that while MAML adapts to novel tasks through multiple steps of gradient-based optimization, HyperMAML generates the updates in a single step using a Hypernetwork. This makes HyperMAML more similar to methods like (Li et al., 2017; Munkhdalai & Yu, 2017), which generate weight updates through trained meta-optimizers. However, contrary to those approaches, in HyperMAML the Hypernetwork predicts the weight updates based on (i) the latent representation of the support set, (ii) the predictions of the base model for the support set, and (iii) the ground-truth labels of the support examples (see Fig. 2). Thus HyperMAML does not require calculating either the loss function or its gradients when generating the task-specific weight updates, making it more computationally efficient.
3 HYPERMAML: HYPERNETWORK FOR FEW-SHOT LEARNING
In this section, we present our HyperMAML model for Few-Shot learning. First, we start by presenting background and notations for Few-Shot learning. Then we describe how the MAML algorithm works. Finally, we present HyperMAML, which can be understood as an extension of the classical MAML.
Algorithm 1 MAML – Model-Agnostic Meta-Learning (Finn et al., 2017)
Require: D = {T_n}_{n=1}^N : set of training tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:    Sample batch of tasks B from D
4:    for each task T_i = {S_i, Q_i} from batch B do
5:        Evaluate ∇_θ L_{S_i}(f_θ) with respect to examples from S_i, with the loss given by equation 2
6:        Compute adapted parameters θ′_i with gradient descent using the formula given by equation 1
7:    end for
8:    Update the global parameters of the model θ with the formula given by equation 4
3.1 BACKGROUND
The terminology describing the Few-Shot learning setup is inconsistent due to conflicting definitions used in the literature. Here, we use the nomenclature derived from the Meta-Learning literature, which is the most prevalent at the time of writing. Let S = {(x_l, y_l)}_{l=1}^{L} be a support set containing input-output pairs, with L examples and an equal class distribution. In the one-shot scenario, each class is represented by a single example, and L = K, where K is the number of the considered classes in the given task. In Few-Shot scenarios, each class usually has from 2 to 5 representatives in the support set S. Let Q = {(x_m, y_m)}_{m=1}^{M} be a query set (sometimes referred to in the literature as a target set), with M examples, where M is typically one order of magnitude greater than K. For clarity of notation, the support and query sets are grouped in a task T = {S, Q}. During the training stage, the models for Few-Shot applications are fed by randomly selected examples from the training set D = {T_n}_{n=1}^{N}, defined as a collection of such tasks.
During the inference stage, we consider task T∗ = {S∗,X∗}, where S∗ is a support set with the known class values for a given task, and X∗ is a set of query (unlabeled) inputs. The goal is to predict the class labels for query inputs x ∈ X∗, assuming support set S∗ and using the model trained on D.
Model-Agnostic Meta-Learning (MAML) is one of the current standard algorithms for Few-Shot learning, which learns the parameters of a model so that it can adapt to a new task in a few gradient steps.
We consider a model represented by a function f_θ with parameters θ. In Few-Shot problems, f_θ models the discriminative probabilities for the classes, f_θ(x) = p(y|x, θ). The standard MAML model is trained with the procedure given by Algorithm 1. In each of the training iterations, the batch of tasks B is sampled from D. Further, for each task T_i = {S_i, Q_i} from B, MAML adapts the model’s parameters θ′_i that are specific for the given task. The actual values of θ′_i are calculated using one or more gradient descent updates. In the simplest case of one gradient iteration, the parameters are updated as follows:
θ′_i = θ − α∇_θ L_{S_i}(f_θ),    (1)
where α is the step size that may be fixed as a hyperparameter or meta-learned, and the loss function L_Z for a set of observations Z in the Few-Shot scenario is a simple cross-entropy:
L_Z(f_θ) = Σ_{(x_{i,l}, y_{i,l}) ∈ Z} Σ_{k=1}^{K} −y^k_{i,l} log f_{θ,k}(x_{i,l}),    (2)
where f_{θ,k}(x_{i,l}) denotes the k-th output of the model f_θ for a given input x_{i,l}, and y_{i,l} is the corresponding class label in one-hot encoding. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. After calculating the task-specific updates θ′_i, the general model parameters are trained by optimizing the performance of f_{θ′_i} with respect to θ across tasks from batch B. More concretely, the meta-objective used to train the general parameters of the model is as follows:
L_MAML(f_θ) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ′_i}) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ − α∇_θ L_{S_i}(f_θ)}),    (3)
Note that the meta-optimization is performed over the model parameters θ, whereas the objective is computed using the updated model parameters θ′. In effect, MAML aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task.
The meta-optimization across tasks is performed via stochastic gradient descent (SGD) such that the model parameters θ are updated as follows:
θ ← θ − β∇_θ L_MAML(f_θ),    (4)
where β is the meta step size.
During the inference stage, in order to perform predictions for a newly observed task T∗ = {S∗, X∗}, the loss function L_{S∗}(f_θ) is calculated first using equation 2. Next, the parameters θ′_∗ for task T∗ are calculated from equation 1. The final predictions for the query examples X∗ are performed by the model f_{θ′_∗}, where for a selected query example x_q ∈ X∗ we have p(y|x_q, θ′_∗) = f_{θ′_∗}(x_q).
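To make the two-level optimization concrete, the following is a minimal sketch of the MAML inner/outer loop for a single inner gradient step, written for a simple linear classifier so that the task-specific parameters can be handled explicitly; the task sampler, data shapes, and step sizes are illustrative assumptions rather than the actual pipeline used in the paper.

```python
# Minimal MAML sketch: one inner gradient step per task (eq. 1) and a meta-update
# over the query losses (eqs. 3-4).  The linear model and random task sampler are
# placeholders, not the Conv-4 backbone or benchmark data used in the paper.
import torch
import torch.nn.functional as F

feat_dim, n_way = 64, 5
alpha, beta = 0.01, 0.001                              # inner / outer step sizes (assumed values)

W = torch.zeros(feat_dim, n_way, requires_grad=True)   # universal weights theta
b = torch.zeros(n_way, requires_grad=True)
meta_opt = torch.optim.SGD([W, b], lr=beta)

def sample_task(shots=1, queries=16):
    """Hypothetical task sampler returning (support_x, support_y, query_x, query_y)."""
    sx = torch.randn(n_way * shots, feat_dim)
    sy = torch.arange(n_way).repeat_interleave(shots)
    qx = torch.randn(n_way * queries, feat_dim)
    qy = torch.arange(n_way).repeat_interleave(queries)
    return sx, sy, qx, qy

for iteration in range(100):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                                  # batch of tasks B
        sx, sy, qx, qy = sample_task()
        support_loss = F.cross_entropy(sx @ W + b, sy)  # L_{S_i}(f_theta), eq. 2
        gW, gb = torch.autograd.grad(support_loss, (W, b), create_graph=True)
        W_i, b_i = W - alpha * gW, b - alpha * gb       # adapted parameters theta'_i, eq. 1
        meta_loss = meta_loss + F.cross_entropy(qx @ W_i + b_i, qy)   # L_{Q_i}(f_theta'_i)
    meta_loss.backward()                                # second-order gradients w.r.t. theta
    meta_opt.step()                                     # meta-update, eq. 4
```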
The main limitation of the approach is that it produces general weights for all possible tasks, and the adjustment is performed via a gradient-based procedure on the support set. For some non-trivial, challenging tasks, the dedicated parameters θ′_∗ may be located far from the base weights θ. Consequently, the adaptation procedure would require significantly more gradient steps, the training may be unstable, and the model will tend to overfit to the support set. To overcome this limitation, we propose to replace the gradient-based adaptation with the Hypernetwork approach, where the update step is returned by an additional deep model that extracts the information from the support set (see the toy example in Fig. 3).
3.2 HYPERMAML - OVERVIEW
We introduce our HyperMAML – a model that utilizes Hypernetworks for modeling weight updates in the classical MAML algorithm. The main idea of the proposed updating scenario is to use the information extracted from the support examples and the predictions given by the universal weights to find optimal updates dedicated to a given task. Thanks to this approach, we can switch the classifier’s parameters between completely different tasks based on the support set and the existing predictions of the universal weights.
The architecture of HyperMAML is presented in Fig. 2. In this section we present the model for the one-shot scenario, and further discuss how to extend it to Few-Shot problems. We aim at predicting the class distribution p(y|x_q, S), given a single query example x_q and the set of support examples S. Following the idea from MAML, we consider the parameterized function f_θ that models the discriminative distribution over the classes. In addition, in our architecture we distinguish the trainable encoding network E(·), which transforms data to a low-dimensional representation. We postulate to calculate p(y|x_q, θ′) = f_{θ′}(e_q), where e_q is the query example x_q transformed using the encoder E(·), and θ′ represents the updated parameters for the considered task, θ′ = θ + Δθ. In contrast to the gradient-based adaptation step described by equation 1, we propose to predict the update Δθ with a hypernetwork, directly from the support set.
Each of the inputs from the support set X_S is transformed by the Encoder E(·) in order to obtain a low-dimensional matrix of embeddings E_S = [e_{S,1}, . . . , e_{S,K}]^T. Next, the corresponding class labels for the support examples, Y_S = [y_{S,1}, . . . , y_{S,K}]^T, are concatenated to the corresponding embeddings stored in the rows of the matrix E_S. In addition, we also calculate the predicted values for the examples from the support set using the general model, f_θ(E_S) = Ŷ_S, and concatenate them to E_S as well. The matrix of transformed support inputs E_S, together with the true support labels Y_S and the corresponding predictions Ŷ_S returned by the general model, is delivered as input to the hypernetwork H(·), which returns the update Δθ. The hypernetwork consists of fully-connected layers with ReLU activations – see section E.1 in the Appendix for details. The parameters of the final target model are calculated with the following formula:
θ′ = θ + Δθ = θ + H(E_S, Ŷ_S, Y_S).    (5)
Practically, the Hypernetwork observes the support examples with the corresponding true values and decides how the global parameters θ should be adjusted to the considered task. In addition, the predictions of the global model f_θ are also delivered to the Hypernetwork in order to identify the misclassifications and try to correct them during the update step.
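A minimal sketch of this adaptation step is given below. The dense encoder standing in for the Conv-4 backbone, the hypernetwork sizes, and the assumption that the 1-shot support examples arrive ordered by class (so that row k of the enhanced matrix generates the update for class k, as described in Section E.1) are all illustrative choices, not the released implementation.

```python
# Sketch of eq. 5: the hypernetwork maps the enhanced support embeddings
# [E_S, Y_hat_S, Y_S] to an update of the universal classifier weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim, n_way = 64, 5

encoder = nn.Sequential(nn.Linear(784, emb_dim), nn.ReLU())   # stand-in for the Conv-4 encoder E(.)
classifier = nn.Linear(emb_dim, n_way)                         # universal classifier f_theta
hypernet = nn.Sequential(                                      # H(.): per-class weight update
    nn.Linear(emb_dim + 2 * n_way, 256), nn.ReLU(),
    nn.Linear(256, emb_dim + 1),                               # one weight row + one bias per class
)

def adapt_and_predict(support_x, support_y, query_x):
    e_s = encoder(support_x)                                   # support embeddings E_S
    y_s = F.one_hot(support_y, n_way).float()                  # true labels Y_S
    with torch.no_grad():                                      # classifier is frozen in this pass
        y_hat = classifier(e_s).softmax(dim=-1)                # predictions of the universal weights
    enhanced = torch.cat([e_s, y_hat, y_s], dim=-1)            # enhanced embeddings, one row per class
    delta = hypernet(enhanced)                                 # [n_way, emb_dim + 1]
    W_new = classifier.weight + delta[:, :emb_dim]             # theta' = theta + delta_theta (eq. 5)
    b_new = classifier.bias + delta[:, emb_dim]
    return F.linear(encoder(query_x), W_new, b_new)            # query logits under theta'

# 1-shot toy usage with one support example per class, ordered by class
logits = adapt_and_predict(torch.randn(n_way, 784), torch.arange(n_way), torch.randn(16, 784))
print(logits.shape)                                            # torch.Size([16, 5])
```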
3.3 HYPERMAML - TRAINING
For training the model we assume that the encoder E(·) is parametrized by γ, E := E_γ, and the hypernetwork H(·) by η, H := H_η. The training procedure is described in Algorithm 2. First, we sample the batch of tasks B from the given dataset D. Next, for each task T_i in batch B we calculate the update Δθ_i using the support set S_i, and obtain the updated parameters θ′ according to the rule given by equation 5. Finally, the objective used to train the parameters of the system is calculated using the query sets of the tasks T_i in the batch:
L_HyperMAML(f_θ) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ′}) = Σ_{T_i ∈ B} L_{Q_i}(f_{θ + Δθ}),    (6)
where L_{Q_i}(f_{θ′}) is given by equation 2. The parameters of the encoder, the hypernetwork, and the global parameters θ represent the meta-parameters of the system, and they are updated with stochastic gradient descent (SGD) by optimizing L_HyperMAML(f_θ).
Adaptation to the Few-Shot scenario. The proposed method can be easily extended to Few-Shot scenarios following the aggregation technique from (Sendera et al., 2022). For our approach, we aggregate the embedding values of the support examples from the same class using the mean operation. In addition, the corresponding predictions within the class are also averaged and concatenated to the averaged per-class embedding together with the true class label, and further processed by the hypernetwork.
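A small sketch of this per-class aggregation (with assumed tensor shapes) is shown below: embeddings and predictions are averaged within each class before the class label is appended.

```python
# Per-class aggregation for the few-shot case: the enhanced embedding of class k is
# [mean embedding, mean prediction, one-hot label of class k].
import torch
import torch.nn.functional as F

def aggregate_support(e_s, y_hat, y_s, n_way):
    """e_s: [N, emb_dim] embeddings, y_hat: [N, n_way] predictions, y_s: [N] labels."""
    rows = []
    for k in range(n_way):
        mask = (y_s == k)
        rows.append(torch.cat([e_s[mask].mean(0),                     # mean embedding of class k
                               y_hat[mask].mean(0),                   # mean prediction for class k
                               F.one_hot(torch.tensor(k), n_way).float()]))
    return torch.stack(rows)                                          # [n_way, emb_dim + 2 * n_way]

agg = aggregate_support(torch.randn(25, 64),                          # 5-way, 5-shot dummy support set
                        torch.rand(25, 5).softmax(-1),
                        torch.arange(5).repeat_interleave(5), n_way=5)
print(agg.shape)                                                      # torch.Size([5, 74])
```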
Warming-up universal weights In practice, it is not trivial to initialize the universal weights of HyperMAML. With classical initialization, the universal weights are not updated and only the Hypernetwork’s parameters change. To solve this problem we use a smooth transition from the gradient update to the Hypernetwork update:
Algorithm 2 HyperMAML
Require: D = {T_n}_{n=1}^N : set of training tasks
Require: β: step size hyperparameter
1: randomly initialize θ, γ, η
2: while not done do
3:    Sample batch of tasks B from D
4:    for each task T_i = {S_i, Q_i} from batch B do
5:        Compute adapted parameters θ′_i from S_i using the formula given by equation 5
6:    end for
7:    Calculate the loss L_HyperMAML(f_θ) given by equation 6
8:    θ ← θ − β∇_θ L_HyperMAML(f_θ)    ▷ Update the global target parameters θ
9:    η ← η − β∇_η L_HyperMAML(f_θ)    ▷ Update the parameters of the hypernetwork H_η
10:   γ ← γ − β∇_γ L_HyperMAML(f_θ)    ▷ Update the parameters of the Encoder E_γ
θ′ = θ + λ · H(E_S, Ŷ_S, Y_S) − (1 − λ) · α∇_θ L_{S_i}(f_θ),    (7)
where λ increases from zero to one over the first few training epochs.
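The interpolation in equation 7 can be illustrated with the small sketch below; the linear schedule and its milestone epochs are assumptions, since the actual values are hyperparameters (see Section E.3).

```python
# Warm-up transition of eq. 7: early epochs behave like a MAML gradient step,
# late epochs like a pure hypernetwork update.
import torch

def blended_update(theta, hyper_delta, support_grad, alpha, epoch,
                   warmup_start=10, warmup_end=30):
    """theta' = theta + lambda * H(...) - (1 - lambda) * alpha * grad_theta L_S."""
    lam = min(max((epoch - warmup_start) / (warmup_end - warmup_start), 0.0), 1.0)
    return theta + lam * hyper_delta - (1.0 - lam) * alpha * support_grad

theta = torch.zeros(10)
print(blended_update(theta, torch.ones(10), torch.ones(10), alpha=0.01, epoch=0))    # pure gradient step
print(blended_update(theta, torch.ones(10), torch.ones(10), alpha=0.01, epoch=50))   # pure hypernetwork update
```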
4 EXPERIMENTS
In the typical Few-Shot learning setting, making a valuable and fair comparison between the proposed models is often complicated by significant differences in the architectures and implementations of known methods. In order to limit the influence of deeper backbone (feature extractor) architectures, we follow the unified procedure proposed by (Chen et al., 2019)1.
In all of the reported experiments, the tasks consist of 5 classes (5-way) and 1 or 5 support examples (1- or 5-shot). Unless indicated otherwise, all compared models use a known and widely utilized backbone consisting of four convolutional layers (each consisting of a 2D convolution, a batch-norm layer, and a ReLU non-linearity; each layer has 64 channels) (Chen et al., 2019). The models are trained from scratch, except for the models trained on mini-ImageNet, where they are initialized with a pretrained backbone, following (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021). In all experiments, the query set of each task consists of 16 samples for each class (80 in total). We split the datasets into the standard train, validation, and test class subsets, used commonly in the literature (Ravi & Larochelle, 2017; Chen et al., 2019; Patacchiola et al., 2020). We provide the additional training details in Section E of the Appendix.
4.1 CLASSIFICATION
First, we consider the classical Few-Shot learning scenario. We benchmark the performance of HyperMAML and other methods on two challenging and widely considered datasets: Caltech-UCSD Birds (CUB) (Wah et al., 2011) and mini-ImageNet (Ravi & Larochelle, 2017). In the case of the mini-ImageNet dataset we initialize the backbone with pretrained weights, following (Qiao et al., 2017; Ye et al., 2021). We compare HyperMAML to a number of MAML-related and Hypernetwork-based methods, as well as the current state-of-the-art algorithms in the tasks of 1-shot and 5-shot classification, and report the results in Table 1. We report a comparison to a wider pool of few-shot learning methods, as well as the results of models utilizing a larger backbone, in Tables 6 and 7 in the Appendix.
In the 1-shot scenario, HyperMAML yields top performing results (66.11%) on the CUB dataset, inferior only to FEAT (Ye et al., 2021) (68.87%). On the mini-ImageNet dataset, HyperMAML is among the five best methods, achieving the accuracy of 53.41%. In the 5-shot setting, HyperMAML is among the top-3 best performing models on both the CUB and mini-ImageNet datasets, achieving 78.89% and 68.76% accuracy, with FEAT (Ye et al., 2021) and HyperShot (Sendera et al., 2022) outperforming it by a small margin.
1An anonymized version of our code is available at https://anonymous.4open.science/r/few-shot-hypernets-public-DB4F. We shall release the code with our experiments after the end of the review period.
The obtained results show that HyperMAML achieves performance better or comparable to a variety of Few-Shot learning methods, in particular MAML (Finn et al., 2017), as well as techniques which derive from it (Antoniou et al., 2018; Rajeswaran et al., 2019; Fan et al., 2021; Yoon et al., 2018; Ye & Chao, 2021).
4.2 CROSS-DOMAIN ADAPTATION
In the cross-domain adaptation setting, the model is evaluated on tasks coming from a different distribution than the one it had been trained on. Therefore, such a task is more challenging than standard classification and is a plausible indicator of a model’s ability to generalize. In order to benchmark the performance of HyperMAML in cross-domain adaptation, we combine two datasets so that the training fold is drawn from the first dataset and the validation and testing folds from the other one. We report the results in Table 2. In the task of 1-shot Omniglot→EMNIST classification, HyperMAML achieves the second-best result (79.84%), with HyperShot+finetuning (80.65%) being the top one. Compared to the other methods, we observe relatively smaller performance growth as more data becomes available. In the 5-shot Omniglot→EMNIST classification task, HyperMAML yields results (89.22%) comparable to HyperShot (Sendera et al., 2022) (90.81%) and DKT (Patacchiola et al., 2020) (90.30%), which are the state-of-the-art in this setting. In the most challenging task of mini-ImageNet→CUB classification, our method performs comparably to baseline methods such as MAML, ProtoNet and Matching Net (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017), particularly in the 1-shot setting.
4.3 PARAMETER UPDATE MAGNITUDE
Next, we consider the ability of HyperMAML to produce the correct weight updates. One of the drawbacks of MAML is that, in practice, gradient updates change weights very slowly, especially when meta tasks require completely different weights (see Fig. 3). On the other hand, Hypernetworks can produce significant updates. To verify such behaviour on a real, non-trivial dataset, we calculate the norm of classifier weight updates in MAML and HyperMAML trained for Omniglot→EMNIST classification and report the results in Table 3.
As we can see, HyperMAML produces larger updates, which, combined with higher accuracy than MAML (see Table 2), suggests faster and more accurate convergence.
In the case of classical MAML, there exist a few modifications of the update procedure (the inner loop), such as MAML++ (Antoniou et al., 2018) or Meta-SGD (Li et al., 2017). Such modifications are required in classical MAML, since a few gradient updates do not always guarantee convergence. In contrast, HyperMAML provides a possible solution to this problem through a novel update that is not generated directly by gradient descent, but rather by a forward pass of a Hypernetwork.
4.4 COMPUTATIONAL EFFICIENCY
Finally, we verify the hypothesis that HyperMAML offers increased computational efficiency compared to MAML. To this end, we measure the time needed to process the entire Omniglot→EMNIST test dataset (600 tasks in total) by MAML with different numbers of gradient steps and by HyperMAML, and report the results in Table 4, and for other datasets in Table 8 in the Appendix.
We find that processing the test data with HyperMAML takes approximately the same time as using MAML with just 2 gradient updates. We also note that even given the budget of 100 gradient updates, MAML never matches the accuracy achieved by a single update generated by HyperMAML.
5 CONCLUSIONS
In this work, we introduced HyperMAML – a novel Meta-Learning algorithm strongly motivated by MAML (Finn et al., 2017). The crucial difference between the two methods lies in the fact that in HyperMAML the update of the model is given not by gradient optimization, but by a trainable Hypernetwork. Consequently, HyperMAML is more computationally efficient than MAML, as during adaptation it performs only a single parameter update. Moreover, HyperMAML is not only more biologically feasible, as it does not use backpropagation during adaptation, but it also adapts more easily to different settings and has a smaller number of hyperparameters. Our experiments show that HyperMAML outperforms the classical MAML in a number of standard Few-Shot learning benchmarks and achieves results better than or comparable to various other state-of-the-art methods in most cases.
Our results indicate that Few-Shot weight adaptation can be performed directly, without calculating the actual loss or gradients – a more biologically plausible learning method (Lillicrap et al., 2020; Song et al., 2020; Whittington & Bogacz, 2019). Moreover, HyperMAML adapts to new tasks more quickly than MAML, which needs to be tuned with many gradient updates. Thus, HyperMAML is a step toward more efficient and environment-friendly Meta-Learning techniques.
B ABLATION STUDY OF MECHANISMS USED IN HYPERMAML
In this section, we present two mechanisms we utilize when training HyperMAML.
Switching mechanism is a smooth transition between training the convolutional encoder through the MAML objective and the HyperMAML objective. We consider the MAML warm-up as the starting point of the training loop and then smoothly move towards HyperMAML training. During training, we define two "milestone" epochs (see Section E.3), between which the transition occurs. During the transition, we continuously decrease the participation of the MAML objective in the training process. It is done by multiplying the MAML loss by p, ranging from 1.0 to 0.0 over a given number of epochs, and multiplying the HyperMAML loss by 1 − p. Our motivation for this mechanism is to train better universal weights by optimizing the MAML objective during the warm-up part of the training and then switch to the HyperMAML objective gradually.
Enhancement of the embeddings is another mechanism in our framework. When preparing the input to the Hypernetwork from the support set, we first obtain support embeddings from the encoder. Then, we forward those embeddings through the universal classifier and obtain its predictions for support examples. We concatenate those predictions to the support embeddings, as well as their respective ground-truth labels. The whole process is visualized in Figure 4. Our motivation to perform such an operation is to give the Hypernetwork the information about the current decision of the classifier, to better estimate the updates to its weights. We note that even though the Hypernetwork generates the weights of the classifier based on the enhanced support embeddings, the generated downstream classifier does not use enhancements when processing the query set. Instead, it only processes the raw embeddings obtained from the encoder (see Figure 2 from the main paper).
We perform an ablation study of both mechanisms on the 5-way, 5-shot Omniglot→EMNIST classification task. The results, reported in Table 5, indicate that both mechanisms utilized individually improve the performance of HyperMAML, and the combination of the two yields the best results.
C FULL CLASSIFICATION RESULTS
We provide an expanded version of Table 1 from the main paper, with numerous additional baseline methods – see Table 6. We also report the performance of HyperMAML, as well as several baseline methods, trained with ResNet-12 (He et al., 2015) as the backbone on the mini-ImageNet dataset in Table 7.
D COMPUTATIONAL EFFICIENCY – RESULTS ON NATURAL IMAGE DATASETS
In this section, we perform similar experiments to those described in Section 4.4 of the main paper and measure the inference time of HyperMAML and MAML with different numbers of gradient steps on the CUB and mini-ImageNet datasets. We summarize the results in Table 8. Similarly to the benchmark performed on smaller images from the Omniglot→EMNIST dataset, the inference time of HyperMAML is comparable to MAML with two gradient steps. Likewise, MAML never achieves accuracy higher than HyperMAML.
As opposed to Section 4.4 of the main paper, we report the accuracies of MAML only up to seven gradient steps. This is due to an insufficient amount of GPU memory available for making more steps in the MAML implementation we used (Chen et al., 2019). We also note that the accuracies of MAML reported here are significantly higher than the ones reported in Table 1 of the main paper. The MAML accuracies from that table were previously reported in the literature (Patacchiola et al., 2020), whereas the results reported here have been obtained by the MAML implementation in our codebase (Chen et al., 2019).
E TRAINING DETAILS
In this section, we present details of the training and architecture overview.
E.1 ARCHITECTURE OVERVIEW
The architecture of HyperMAML consists of the following parts (as outlined in the Figure 2 in the main body of the work):
Encoder For each experiment described in the main body of this work, we utilize a shallow convolutional encoder (feature extractor), commonly used in the literature (Finn et al., 2017; Chen et al., 2019; Patacchiola et al., 2020). This encoder consists of four convolutional layers, each
consisting of a convolution, batch normalization, and ReLU nonlinearity. Each of the convolutional layers has an input and output size of 64, except for the first layer, where the input size is equal to the number of image channels. We also apply max-pooling between each convolution, by which the resolution of the processed feature maps is decreased by half. The output of the encoder is flattened to process it in the next layers.
For the mini-ImageNet dataset we additionally test the performance of HyperMAML with a larger backbone – namely ResNet-12 He et al. (2015).
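A sketch of this four-layer convolutional feature extractor is shown below; the input channel count and image resolution are assumptions used only to illustrate the output embedding size.

```python
# Conv-4 backbone: four blocks of (Conv2d -> BatchNorm -> ReLU -> MaxPool), 64 channels each.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),                                 # halves the spatial resolution
    )

class Conv4Encoder(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(in_channels, 64),
            conv_block(64, 64),
            conv_block(64, 64),
            conv_block(64, 64),
        )

    def forward(self, x):
        return self.blocks(x).flatten(start_dim=1)       # flattened embedding

emb = Conv4Encoder()(torch.randn(2, 3, 84, 84))          # e.g. mini-ImageNet-sized input (assumed)
print(emb.shape)                                          # torch.Size([2, 1600])
```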
Hypernetwork The Hypernetwork transforms the enhanced embeddings of the support examples of each class in a task into the updates for the portion of classifier weights predicting that class. It consists of two or three fully-connected layers with ReLU activation function between each consecutive pair of layers. In the hypernetwork, we use a hidden size of 256 or 512.
Classifier The universal classifier is a single fully-connected layer with the input size equal to the encoder embedding size and the output size equal to the number of classes. When using the strategy with embedding enhancement, we freeze the classifier to get only the information about its behavior, i.e., we do not calculate the gradient for the classifier in this step of the forward pass. Instead, gradient calculation for the classifier takes place during the classification of the query data.
E.2 TRAINING DETAILS
In all of the experiments described in the main body of this work, we utilize the switch and the embedding enhancement mechanisms. During training, we use the Adam optimizer (Kingma & Ba, 2014) and the MultiStepLR learning rate scheduler with the decay of 0.3 and learning rate starting from 0.01 or 0.001. We train HyperMAML for 4000 epochs on all the datasets, save for the simpler Omniglot→ EMNIST classification task, where we train for 2048 epochs instead. For the mini-ImageNet experiments we follow a strategy of using a pre-trained backbone, suggested by (Qiao et al., 2017; Rusu et al., 2019; Ye et al., 2021; Ye & Chao, 2021). More specifically, at the beginning of the training we initialize the Encoder of HyperMAML with weights of backbone pretrained for classification of all 64 classes from the mini-ImageNet training set. In practice, for consistency, we use the identical set of pretrained weights as Ye et al. (2021).
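As a rough illustration of this setup, the snippet below configures Adam with a MultiStepLR scheduler and a decay of 0.3; the milestone epochs and the stand-in model are placeholders, since the actual values are given per experiment in Section E.3.

```python
# Optimizer / scheduler configuration sketch for HyperMAML training.
import torch
import torch.nn as nn

model = nn.Linear(64, 5)                   # stand-in for encoder + hypernetwork + classifier parameters
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[1000, 2000, 3000], gamma=0.3)   # decay of 0.3 at assumed milestones

for epoch in range(4000):
    # ... one epoch of HyperMAML training would go here ...
    scheduler.step()
```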
E.3 HYPERPARAMETERS
Below, we outline the hyperparameters of architecture and training procedures used in each experiment.
F IMPLEMENTATION DETAILS
We implement HyperMAML using the PyTorch framework (Paszke et al., 2019). We shall release the code publicly after the end of the review period. Each experiment described in this work was run on a single NVIDIA RTX 2080 GPU. | 1. What is the main contribution of the paper, and how does it compare to previous works in the field?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its computational efficiency and empirical advantages?
3. Do you have any concerns or questions regarding the technical novelty and applicability of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a hypernetwork-based parameter update scheme that replaces the gradient optimization procedure of MAML. This approach demonstrates decent empirical advantages over MAML in a generally more compute-efficient manner, comparable to the state-of-the-art on several selected benchmarks.
Strengths And Weaknesses
Strength:
This paper proposes a general and simple framework for MAML which can be applied to various MAML variants. The proposed method is simple and clear. They replace the gradient update of MAML with a trainable hypernetwork, which can reduce the computational burden of MAML caused by multiple optimization steps.
Presented experimental results show the benefit of HyperMAML on several benchmarks.
HYPERMAML is gradient-free in the inner loop.
Weakness
I tend to prefer simple solutions, but the technical novelty of the proposed framework seems a little lacking. The idea of hypernetwork has been widely used in various domains, and HyperMAML looks like a rather straightforward application of such a widely-used concept in meta-learning.
The algorithm does not solve the proposed problem. The authors argue that MAML does not match the human learning process, because the brain does not use the gradient descent algorithm while classical MAML and its variants do. Setting aside whether MAML needs to reflect the workings of the human brain at all, HYPERMAML also exploits the gradient descent algorithm in the outer loop; from that perspective, HYPERMAML is likewise not similar to the human learning process. Also, in the inner loop, HYPERMAML only updates the universal classifier f_θ, which is a single fully-connected neural net (E.1). Does updating a single fully-connected neural network with a hypernetwork reflect the learning process of humanity? I think more explanation is needed to show that HYPERMAML mimics human learning. And I am curious whether HYPERMAML really avoids second-order optimization, since it involves the derivative ∇_θ f_{θ′}. Isn't that a second-order derivative?
The empirical advantage of HyperMAML is obscure. Looking at Tab.1, the performance gap between HyperMAML and MAML++ is very narrow, in my opinion within the error range. Results in Tab.7 also support my skepticism towards the effectiveness of HyperMAML (compared to vanilla MAML).
The performance of the algorithm does not match the authors' arguments. Fig. 1 says that if a task varies a lot from the universal parameter θ, it is hard to adapt to the given task within a single step. If the hypernetwork solved this problem, the performance improvement margin should be larger in the cross-domain environment (Table 3). However, its performance does not improve much compared to the other experimental settings. For example, HYPERMAML improves performance by only about 1% in the 5-shot mini-ImageNet to CUB setting, whereas the performance grows by about 5% in other 5-shot settings. If its performance enhancement is based on the flexibility of adaptation, it should show better performance in the cross-domain section. Also, the comparison in Table 3 is unfair, since MAML not only updates the universal classifier but also the encoder. Also, using the gradient norm as the distance metric between functions is quite unconvincing.
The experiment setting is unfair. Since HYPERMAML adds an extra hypernetwork model to the MAML baseline, it uses many more parameters and computations.
Clarity, Quality, Novelty And Reproducibility
The paper is written in clear language and is mostly easy to follow. Their idea is very simple (incorporating a hypernetwork in meta-learning for the universal parameter update), but I personally believe the novelty of this paper is relatively small. They have provided an anonymized codebase, which is definitely desirable in terms of reproducibility.
ICLR | Title
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Abstract
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, RewardRandomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found in our website: https://sites.google. com/view/staghuntrpg.
1 INTRODUCTION
Games have been a long-standing benchmark for artificial intelligence, which prompts persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon’s initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002), to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also become critical for developing new learning algorithms for even harder challenges.
Most recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, Nash Equilibrium (NE) (Nash, 1951), where no player could benefit from altering its strategy unilaterally, provides a general solution concept and serves as a goal for policy learning and has attracted increasingly significant interests from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works studied how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting.
Despite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice, not only because different NEs can produce drastically different payoffs but also because when facing unknown players who are trained to play an NE strategy, we can gain an advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and Starcraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.
Consider an extremely simple example, a 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure strategy NEs exist: a “risky” cooperative equilibrium with the highest payoff
for both agents and a “safe” non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails to discover the high-payoff “risky” NE with high probability. The intuition is that the neighborhood from which policies converge to the “risky” NE can be substantially small compared to the entire policy space. Therefore, an exponentially large number of exploration steps is needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR),
which can help PG discover the “risky” cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game
(Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG).
To further illustrate the effectiveness of RPG, we introduce three Markov games – two gridworld games and a real-world online game Agar.io. All these games have multiple NEs including both “risky” cooperation strategies and “safe” non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the “risky” cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This “diversityseeking” property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, w.r.t. its test-time opponent’s behavior.
We summarize our contributions as follows:
• We studied a collection of challenging multi-agent games, where the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.
• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.
• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.
• A new multi-agent environment Agar.io, which allows complex multi-agent strategic behavior. We released the environment to the community as a novel testbed for MARL research.
2 A MOTIVATING EXAMPLE: STAG HUNT
            Stag      Hare
  Stag      a, a      c, b
  Hare      b, c      d, d
Table 1: The stag-hunt game, a > b ≥ d > c.
We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau’s work, “A discourse on inequality” (Rousseau, 1984): a group of hunters are silently tracking a big stag; when a hare shows up, each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1
with two actions for each agent, Stag (S) and Hare (H). There are two pure strategy NEs: the Stag NE, where both agents choose S and receive a high payoff a (e.g., a = 4), and the Hare NE, where both agents choose H and receive a lower payoff d (e.g., d = 1). The Stag NE is “risky” because if one agent defects, it still receives a decent reward b (e.g., b = 3) for eating the hare alone, while the other agent, with an S action, may suffer a big loss c for being hungry (e.g., c = −10). Formally, let A = {S, H} denote the action space, π_i(θ_i) denote the policy for agent i (i ∈ {1, 2}) parameterized by θ_i, i.e., P[π_i(θ_i) = S] = θ_i and P[π_i(θ_i) = H] = 1 − θ_i, and R(a_1, a_2; i) denote the payoff for agent i when agent 1 takes action a_1 and agent 2 takes action a_2. Each agent i optimizes its expected utility U_i(π_1, π_2) = E_{a_1∼π_1, a_2∼π_2}[R(a_1, a_2; i)]. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until
convergence¹: (1) estimate the gradient ∇_i = ∇U_i(π_1, π_2) via self-play; (2) update the policies by θ_i ← θ_i + α∇_i with learning rate α. Although PG is widely used in practice, the following theorem shows that in certain scenarios, unfortunately, the probability that PG converges to the Stag NE is low.
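As a small numerical illustration of this procedure (using exact utility gradients rather than the PPO estimator used in the experiments), the sketch below runs the two-step update on the payoff matrix of Table 1 from random initializations and counts how often it reaches the Stag NE.

```python
# Exact-gradient PG on the 2x2 Stag-Hunt game: theta_i is each agent's probability of playing Stag.
import numpy as np

a, b, c, d = 4.0, 3.0, -10.0, 1.0                       # example payoffs from Table 1

def run_pg(theta1, theta2, lr=0.05, steps=2000):
    for _ in range(steps):
        g1 = theta2 * a + (1 - theta2) * c - theta2 * b - (1 - theta2) * d   # dU1/dtheta1
        g2 = theta1 * a + (1 - theta1) * c - theta1 * b - (1 - theta1) * d   # dU2/dtheta2 (symmetric game)
        theta1 = np.clip(theta1 + lr * g1, 0.0, 1.0)
        theta2 = np.clip(theta2 + lr * g2, 0.0, 1.0)
    return theta1, theta2

rng = np.random.default_rng(0)
stag_runs = sum(run_pg(rng.uniform(), rng.uniform())[0] > 0.5 for _ in range(100))
print(f"runs converging to the Stag NE: {stag_runs}/100")   # rarely the Stag NE when c is very negative
```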
Theorem 1. Suppose a − b = ε(d − c) for some 0 < ε < 1 and initialize θ_1, θ_2 ∼ Unif[0, 1]. Then the probability that PG discovers the high-payoff NE is upper bounded by (ε² + 2ε)/(1 + 2ε + ε²).
Theorem 1 shows that when the risk is high (i.e., c is low), the probability of finding the Stag NE via PG is very low. Note this theorem applies to random initialization, which is standard in RL.
Remark: One needs at least N = Ω(1/ε) restarts to ensure a constant success probability.
Fig. 2 shows empirical studies: we select 4 value assignments, i.e., c ∈ {−5, −20, −50, −100} with a = 4, b = 3, d = 1, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and, as c becomes smaller, the probability of finding the Stag NE significantly decreases. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm, whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to R(a_1, a_2; 1) + R(a_1, a_2; 2). Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.
2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME
Thm. 1 suggests that the utility function R highly influences what strategy PG might learn. Taking one step further, even if a strategy is difficult to learn with a particular R, it might be easier with some other function R′. Hence, if we can define an appropriate space R over different utility functions and draw samples from R, we may possibly discover desired novel strategies by running PG on some sampled utility function R′ and evaluating the obtained policy profile on the original game with R. We call this procedure Reward Randomization (RR).
Concretely, in the stag-hunt game, R is parameterized by 4 variables (a_R, b_R, c_R, d_R). We can define a distribution over R^4, draw a tuple R′ = (a_{R′}, b_{R′}, c_{R′}, d_{R′}) from this distribution, and run PG on R′. Denote the original stag-hunt game where the Stag NE is hard to discover as R_0. Reward randomization draws N perturbed tuples R_1, . . . , R_N, runs PG on each R_i, and evaluates each of the obtained strategies on R_0. The theorem below shows it is highly likely that the population of the N policy profiles obtained from the perturbed games contains the Stag NE strategy.
Theorem 2. For any Stag-Hunt game, suppose in the i-th run of RR we randomly generate a_{R_i}, b_{R_i}, c_{R_i}, d_{R_i} ∼ Unif[−1, 1] and initialize θ_1, θ_2 ∼ Unif[0, 1]; then with probability at least 1 − 0.6^N = 1 − exp(−Ω(N)), the aforementioned RR procedure discovers the high-payoff NE.
Here we use the uniform distribution as an example. Other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves standard PG w.r.t. success probability.
Remark 1: For the scenario studied in Thm. 1, to achieve a (1 − δ) success probability for some 0 < δ < 1, PG requires at least N = Ω((1/ε) log(1/δ)) random restarts. For the same scenario, RR only requires at most N = O(log(1/δ)) repetitions, which is independent of ε. When ε is small, this is a huge improvement.
Remark 2: Thm. 2 suggests that, compared with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can hardly be reached in the original game.
Note that although in Stag Hunt we particularly focus on the Stag NE, which has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games using a payoff evaluation function E(π_1, π_2). For example, we can set E(π_1, π_2) = U_1(π_1, π_2) + U_2(π_1, π_2) for a prosocial NE, or look for Pareto-optimal NEs by setting E(π_1, π_2) = βU_1(π_1, π_2) + (1 − β)U_2(π_1, π_2) with 0 ≤ β ≤ 1.
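The toy sketch below (again with exact gradients instead of PPO) illustrates the full RR loop on the matrix game: PG is run on randomly drawn payoff tuples and the resulting policy pairs are re-evaluated on the original game with a prosocial evaluation function; the payoff values and population size are illustrative.

```python
# Reward randomization on the matrix Stag-Hunt: perturb the payoffs, run PG, evaluate on the original game.
import numpy as np

orig = (4.0, 3.0, -10.0, 1.0)                            # (a, b, c, d) of the original game

def pg(payoff, t1, t2, lr=0.05, steps=2000):
    a, b, c, d = payoff
    for _ in range(steps):
        g1 = t2 * a + (1 - t2) * c - t2 * b - (1 - t2) * d
        g2 = t1 * a + (1 - t1) * c - t1 * b - (1 - t1) * d
        t1 = np.clip(t1 + lr * g1, 0.0, 1.0)
        t2 = np.clip(t2 + lr * g2, 0.0, 1.0)
    return t1, t2

def evaluate_on_original(t1, t2):
    a, b, c, d = orig
    u1 = t1 * t2 * a + t1 * (1 - t2) * c + (1 - t1) * t2 * b + (1 - t1) * (1 - t2) * d
    u2 = t1 * t2 * a + t1 * (1 - t2) * b + (1 - t1) * t2 * c + (1 - t1) * (1 - t2) * d
    return u1 + u2                                       # prosocial evaluation function E

rng = np.random.default_rng(0)
candidates = []
for _ in range(10):                                      # population of N = 10 perturbed games
    perturbed = tuple(rng.uniform(-1, 1, size=4))        # a', b', c', d' ~ Unif[-1, 1]
    profile = pg(perturbed, rng.uniform(), rng.uniform())
    candidates.append((evaluate_on_original(*profile), profile))
best_value, best_profile = max(candidates)
print(best_value, best_profile)                          # the best profile is typically the Stag NE
```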
1In general matrix games beyond stag hunt, the procedure can be cyclic as well (Singh et al., 2000).
Algorithm 1: RPG: Reward-Randomized Policy Gradient
Input: original game M, search space R, evaluation function E, population size N
1: draw samples {R^(1), . . . , R^(N)} from R
2: {π_1^(i), π_2^(i)} ← PG on the induced games {M(R^(i))}_i in parallel                 // RR phase
3: select the best candidate π_1^(k), π_2^(k) by k = argmax_i E(π_1^(i), π_2^(i))         // evaluation phase
4: π_1^*, π_2^* ← fine-tune π_1^(k), π_2^(k) on M via PG (if necessary)                   // fine-tuning phase
5: return π_1^*, π_2^*
3 RPG: REWARD-RANDOMIZED POLICY GRADIENT
Herein, we extend Reward Randomization to general multi-agent Markov games. We now utilize RL terminology and consider the 2-player setting for simplicity. Extension to more agents is straightforward (Appx. B.3).
Consider a 2-agent Markov game M defined by (S, O, A, R, P), where S is the state space; O = {o_i : s ∈ S, o_i = O(s, i), i ∈ {1, 2}} is the observation space, where agent i receives its own observation o_i = O(s; i) (in the fully observable setting, O(s, i) = s); A is the action space for each agent; R(s, a_1, a_2; i) is the reward function for agent i; and P(s′|s, a_1, a_2) is the transition probability from state s to state s′ when agent i takes action a_i. Each agent has a policy π_i(o_i; θ_i) which produces a (stochastic) action and is parameterized by θ_i. In the decentralized RL framework, each agent i optimizes its expected cumulative reward U_i(θ_i) = E_{a_1∼π_1, a_2∼π_2}[Σ_t γ^t R(s_t, a_1^t, a_2^t; i)] with some discount factor γ.
Consider running decentralized RL on a particular Markov game M with the derived policy profile (π_1(θ_1), π_2(θ_2)). The desired result is that the expected reward U_i(θ_i) for each agent i is maximized. We formally write this equilibrium evaluation objective as an evaluation function E(π_1, π_2), and therefore the goal is to find the optimal policy profile (π_1^*, π_2^*) w.r.t. E. Particularly for the games we consider in this paper, since every (approximate) equilibrium we ever discovered has a symmetric payoff, we focus on the empirical performance while assuming a much simplified equilibrium selection problem here: it is equivalent to define E(π_1, π_2) by E(π_1, π_2) = βU_1(θ_1) + (1 − β)U_2(θ_2) for any 0 ≤ β ≤ 1. Further discussions on the general equilibrium selection problem can be found in Sec. 6. The challenge is that although running decentralized PG is a popular learning approach for complex Markov games, the derived policy profile (π_1, π_2) is often sub-optimal, i.e., there exists (π_1^*, π_2^*) such that E(π_1^*, π_2^*) > E(π_1, π_2). It will be shown in Sec. 5 that even using state-of-the-art exploration techniques, the optimal policies (π_1^*, π_2^*) can hardly be achieved.
Following the insights from Sec. 2, reward randomization can be applied to a Markov game M similarly: if the reward function in M poses difficulties for PG to discover some particular strategy, it might be easier to reach this desired strategy with a perturbed reward function. Hence, we can define a reward function space R, train a population of policy profiles in parallel with reward functions sampled from R, and select the desired strategy by evaluating the obtained policy profiles in the original game M. Formally, instead of purely learning in the original game M = (S, O, A, R, P), we define a proper subspace R over possible reward functions R : S × A × A → R and use M(R′) = (S, O, A, R′, P) to denote the induced Markov game obtained by replacing the original reward function R with another R′ ∈ R. To apply reward randomization, we draw N samples R^(1), . . . , R^(N) from R, run PG to learn (π_1^(i), π_2^(i)) on each induced game M(R^(i)), and pick the desired policy profile (π_1^(k), π_2^(k)) by calculating E in the original game M. Lastly, we can fine-tune the policies π_1^(k), π_2^(k) in M to further boost the practical performance (see the discussion below). We call this learning procedure Reward-Randomized Policy Gradient (RPG), which is summarized in Algo. 1.
Reward-function space: In general, the possible space of valid reward functions is intractably huge. However, in practice, almost all games designed by humans have low-dimensional reward structures based on objects or events, so that we can (almost) always formulate the reward function in a linear form R(s, a_1, a_2; i) = φ(s, a_1, a_2; i)^T w, where φ(s, a_1, a_2; i) is a low-dimensional feature vector and w is some weight.
A simple and general design principle for R is to fix the feature vector φ while only randomizing the weight w, i.e., R = {R_w : R_w(s, a_1, a_2; i) = φ(s, a_1, a_2; i)^T w, ‖w‖_∞ ≤ C_max}. Hence, the overall search space retains a structure similar to that of the original game M but contains a diverse range of preferences over the different feature dimensions. Notably, since the optimal strategy is invariant to the scale of the reward function R, theoretically any C_max > 0 results in the same search space.
However, in practice, the scale of the reward may significantly influence MARL training stability, so we typically ensure that the chosen C_max is compatible with the PG algorithm in use.
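A sketch of such a feature-based search space is given below; the 2-dimensional navigation-style feature map, the state layout, and the population size are purely illustrative assumptions.

```python
# Feature-based reward randomization: phi is fixed, only the weight vector w is sampled.
import numpy as np

def phi(state, a1, a2, agent_id):
    """Hypothetical 2-d feature vector: negative distance to the goal and a success indicator."""
    pos, goal = np.asarray(state[agent_id]), np.asarray(state["goal"])
    dist = np.linalg.norm(pos - goal)
    return np.array([-dist, float(dist == 0.0)])

def sample_reward_fn(rng, dim=2, c_max=1.0):
    w = rng.uniform(-c_max, c_max, size=dim)             # ||w||_inf <= C_max
    return lambda state, a1, a2, agent_id: float(phi(state, a1, a2, agent_id) @ w)

rng = np.random.default_rng(0)
population = [sample_reward_fn(rng) for _ in range(8)]   # one reward function per parallel PG run
state = {0: (0.0, 0.0), 1: (2.0, 1.0), "goal": (3.0, 3.0)}
print([round(R(state, None, None, 0), 3) for R in population])
```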
Note that a feature-based reward function is a standard assumption in the literature of inverse RL (Ng et al., 2000; Ziebart et al., 2008; Hadfield-Menell et al., 2017). In addition, such a reward structure is also common in many popular RL application domains. For example, in navigation games (Mirowski et al., 2016; Lowe et al., 2017; Wu et al., 2018), the reward is typically set to the negative distance from the target location LT to the agent’s location LA plus a success bonus, so the feature vector φ(s, a) can be written as a 2-dimensional vector [‖LT − LA‖2, I(LT = LA)]; in real-time strategy games (Wu & Tian, 2016; Vinyals et al., 2017; OpenAI et al., 2019), φ is typically related to the bonus points for destroying each type of units; in robotics manipulation (Levine et al., 2016; Li et al., 2020; Yu et al., 2019), φ is often about the distance between the robot/object and its target position; in general multi-agent games (Lowe et al., 2017; Leibo et al., 2017; Baker et al., 2020), φ could contain each agent’s individual reward as well as the joint reward over each team, which also enables the representation of different prosociality levels for the agents by varying the weight w.
Fine-tuning: There are two benefits: (1) the policies found in the perturbed game may not remain an equilibrium in the original game, so fine-tuning ensures convergence; (2) in practice, fine-tuning can further help escape a suboptimal mode via the noise in PG (Ge et al., 2015; Kleinberg et al., 2018). We remark that a practical issue for fine-tuning is that when the PG algorithm adopts the actor-critic framework (e.g., PPO), we need an additional critic warm-start phase, which only trains the value function while keeping the policy unchanged, before the fine-tuning phase starts. This warm-start phase significantly stabilizes policy learning by ensuring the value function is fully functional for variance reduction w.r.t. the reward function R in the original game M when estimating policy gradients.
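A minimal sketch of this two-stage fine-tuning; collect_rollouts, update_critic, and update_actor_critic are hypothetical helpers denoting rollout collection in the original game M, a value-only update, and the usual actor-critic (e.g., PPO) update.

def finetune_with_warm_start(game, policy, value_fn, warmup_iters, finetune_iters):
    for _ in range(warmup_iters):                 # critic warm-start: policy stays frozen
        batch = collect_rollouts(game, policy)
        update_critic(value_fn, batch)            # fit the value function to returns under R
    for _ in range(finetune_iters):               # then fine-tune policy and value jointly
        batch = collect_rollouts(game, policy)
        update_actor_critic(policy, value_fn, batch)
    return policy, value_fn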
3.1 LEARNING TO ADAPT WITH DIVERSE OPPONENTS
Algorithm 2: Learning to Adapt
Input: game M, policy set Π2, initial π_1^a
repeat
    draw a policy π′_2 from Π2
    evaluate π_1^a and π′_2 on M and collect data
    update θ^a via PG if enough data collected
until enough iterations
return π_1^a(θ^a)
In addition to the final policies π_1^⋆, π_2^⋆, another benefit of RPG is that the population of N policy profiles contains diverse strategies (more in Sec. 5). With a diverse set of strategies, we can build an adaptive agent by training against a random opponent policy sampled from the set in each episode, so that the agent is forced to behave differently based on its opponent's behavior. For simplicity, we consider learning an adaptive policy π_1^a(θ^a) for agent 1; the procedure remains the same for agent 2. Suppose a policy population P = {π_2^(1), . . . , π_2^(N)} is obtained during the RR phase. We first construct a diverse strategy set Π2 ⊆ P that contains all the discovered behaviors from P. Then we construct a mixed strategy by randomly sampling a policy π′_2 from Π2 in every training episode and run PG to learn π_1^a by competing against this constructed mixed strategy. The procedure is summarized in Algo. 2. Note that setting Π2 = P appears to be a simple and natural choice. However, in practice, since P typically contains just a few strategic behaviors, it is unnecessary for Π2 to include every individual policy from P; instead, it is sufficient to ensure that Π2 contains at least one policy from each equilibrium in P (more details in Sec. 5.3). Additionally, this method does not apply to the one-shot game setting (i.e., horizon 1), because the adaptive agent has no prior knowledge about its opponent's identity before the game starts.
Implementation: We train an RNN policy for π_1^a(θ^a). It is critical that the policy input does not directly reveal the opponent's identity, so that the policy is forced to identify the opponent's strategy through what it has observed. By contrast, when adopting an actor-critic PG framework (Lowe et al., 2017), it is extremely beneficial to include the identity information in the critic input, which makes critic learning substantially easier and significantly stabilizes training. We also utilize a multi-head architecture adapted from the multi-task learning literature (Yu et al., 2019), i.e., we use a separate value head for each training opponent, which empirically results in the best training performance.
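The sketch below combines Algo. 2 with the multi-head value trick described above; play_episode and pg_update are hypothetical helpers, and critic_heads holds one value head per training opponent, indexed by an opponent id that only the critic sees.

import random

def train_adaptive_agent(game, opponents, policy, critic_heads, num_updates, episodes_per_update):
    for _ in range(num_updates):
        buffer = []
        for _ in range(episodes_per_update):
            opp_id = random.randrange(len(opponents))             # resample the opponent every episode
            traj = play_episode(game, policy, opponents[opp_id])  # the policy input hides opp_id
            buffer.append((traj, opp_id))                         # opp_id selects the critic head
        pg_update(policy, critic_heads, buffer)
    return policy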
4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS
We introduce three 2-player Markov games as testbeds for RPG. All these games have a diverse range of NE strategies including both “risky” cooperative NEs with high payoffs but hard to discover and “safe” non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are in a high level to highlight the game dynamics. More details are in Sec. 5 and App. B.
Gridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.
Monster-Hunt contains a monster and two apples. Apples are static while the monster keeps moving towards its closest agent. If a single agent meets the monster, it loses a penalty of 2; if two agents catch the monster together, they both earn a bonus of 5. Eating an apple always raises a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity will respawn randomly. The optimal payoff can only be achieved when both agents precisely catch the monster simultaneously.
Escalation contains a lit grid. When two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid will be lit up in the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the number of consecutive cooperation steps until that timestep, and the lit grid will respawn randomly. Agents need to stay together on the lit grid to achieve the maximum payoff despite the growing penalty. There are multiple NEs: for each L, the strategy profile in which both agents cooperate for L steps and then jointly leave the lit grid forms an NE.
Agar.io is a popular multiplayer online game. Players control cells in a Petri dish to gain as much mass as possible by eating smaller cells while avoiding being eaten by larger ones. Larger cells move slower. Each player starts with one cell but can split a sufficiently large cell into two, allowing them to control multiple cells (Wikipedia, 2020). We consider a simplified scenario (Fig. 5) with 2 players (agents) and tiny script cells, which automatically runs away when an agent comes by. There is a low-risk non-cooperative strategy, i.e., two agents stay away from each other and hunt script cells independently. Since the script cells move faster, it is challenging for a single agent to hunt them. By contrast, two agents can cooperate to encircle the script cells to accelerate hunting. However, cooperation is extremely risky for the agent with less mass: two agents need to stay close to cooperate but the larger agent may defect by eating the smaller one and gaining an immediate big bonus.
5 EXPERIMENT RESULTS
In this section, we present empirical results showing that in all the introduced testbeds, including the real-world game Agar.io, RPG always discovers diverse strategic behaviors and achieves an equilibrium with substantially higher rewards than standard multi-agent PG methods. We use PPO (Schulman et al., 2017) for PG training. Training episodes for RPG are accumulated over all the perturbed games. Evaluation results are averaged over 100 episodes in gridworlds and 1000 episodes in Agar.io. We repeat all the experiments with 3 seeds and use X (Y ) to denote mean X with standard deviation Y in all tables. Since all our discovered (approximate) NEs are symmetric for both players, we simply take E(π1, π2) = U1(π1, π2) as our evaluation function and only measure the reward of agent 1 in all experiments for simplicity. More details can be found in appendix.
5.1 GRIDWORLD GAMES
Monster-Hunt: Each agent’s reward is determined by three features per timestep: (1) whether two agents catch the monster together; (2) whether the agent steps on an apple; (3) whether the agent meets the monster alone. Hence, we write φ(s, a1, a2; i) as a 3-dimensional 0/1 vector with one dimension for one feature. The original game corresponds to w = [5, 2,−2]. We set Cmax = 5 for sampling w. We compare RPG with a collection of baselines, including standard PG (PG), PG with shared reward (PG+SR), population-based training (PBT), which trains the same amount of parallel PG policies as RPG, as
well as popular exploration methods, i.e., count-based exploration (PG+CNT) (Tang et al., 2017) and MAVEN (Mahajan et al., 2019). We also consider an additional baseline, DIAYN (Eysenbach et al., 2019), which discovers diverse skills using a trajectory-based diversity reward. For a fair comparison, we use DIAYN to first pretrain diverse policies (conceptually similar to the RR phase), then evaluate the rewards for every pair of obtained policies to select the best policy pair (i.e., evaluation phase, shown with the dashed line in Fig. 6), and finally fine-tune the selected policies until convergence (i.e., fine-tuning phase). The results of RPG and the 6 baselines are summarized in Fig. 6, where RPG consistently discovers a strategy with a significantly higher payoff. Note that the strategy with the optimal payoff may not always directly emerge in the RR phase, nor is there a particular value of w that is consistently the best candidate: e.g., in the RR phase, w = [5, 0, 2] frequently produces a sub-optimal cooperative strategy (Fig. 7(a)) with a reward lower than other w values, but it can also occasionally lead to the optimal strategy (Fig. 7(b)). With the fine-tuning phase, however, the overall RPG procedure always produces the optimal solution. We visualize both emergent cooperative strategies in Fig. 7: in the sub-optimal one (Fig. 7(a)), two agents simply move to grid (1,1) together, stay still and wait for the monster, while in the optimal one (Fig. 7(b)), two agents meet each other first and then actively move towards the monster jointly, which further improves hunting efficiency.
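For concreteness, a sketch of the per-timestep randomized reward in Monster-Hunt built from the 3-dimensional feature above; the event flags are assumed to be provided by the environment.

import numpy as np

def monster_hunt_reward(joint_catch, ate_apple, met_monster_alone, w):
    # phi = [both agents catch the monster, agent eats an apple, agent meets the monster alone]
    phi = np.array([float(joint_catch), float(ate_apple), float(met_monster_alone)])
    return float(phi @ w)

w_original = np.array([5.0, 2.0, -2.0])    # the original Monster-Hunt reward
w_perturbed = np.array([5.0, 0.0, 5.0])    # one sampled w with ||w||_inf <= C_max = 5
r = monster_hunt_reward(True, False, False, w_perturbed)   # -> 5.0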
Escalation: We can represent φ(s, a1, a2; i) as 2-dimensional vector containing (1) whether two agents are both in the lit grid and (2) the total consecutive cooperation steps. The original game corresponds to w = [1,−0.9]. We set Cmax = 5 and show the total number of cooperation steps per episode for several selected w values throughout training in Fig. 8, where RR is able to discover different NE strategies. Note that w = [1, 0] has already produced the strategy with the optimal payoff in this game, so the fine-tuning phase is no longer needed.
5.2 2-PLAYER GAMES IN Agar.io
There are two different settings of Agar.io: (1) the standard setting, i.e., an agent gets a penalty of −x for losing a mass x, and (2) the more challenging aggressive setting, i.e., no penalty for mass loss. Note in both settings: (1) when an agent eats a mass x, it always gets a bonus of x; (2) if an agent loses all the mass, it immediately dies while the other agent can still play in the game. The aggressive setting promotes agent interactions and typically leads to more diverse strategies in practice. Since both settings strictly define the penalty function for mass loss, we do not randomize this reward term. Instead, we consider two other factors: (1) the bonus for eating the other agent; (2) the prosocial level of both agents. We use a 2-dimensional vector w = [w0, w1], where 0 ≤ w0, w1 ≤ 1, to denote a particular reward function such that (1) when eating a cell of mass x from the other agent, the bonus is w0 × x, and (2) the final reward is a linear interpolation between R(·; i) and 0.5(R(·; 0) +R(·; 1)) w.r.t. w1, i.e., when w1 = 0, each agent optimizes its individual reward while when w1 = 1, two agents have a shared reward. The original game in both Agar.io settings corresponds to w = [1, 0].
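A sketch of this two-dimensional reward parameterization; eaten_from_other and other_gains are hypothetical per-step quantities that the environment is assumed to expose.

def agario_rewards(eaten_from_other, other_gains, w0, w1):
    # eaten_from_other[i]: mass agent i ate from the other agent this step (bonus scaled by w0)
    # other_gains[i]: agent i's remaining reward terms (script-cell bonuses, mass-loss penalties)
    raw = [w0 * eaten_from_other[i] + other_gains[i] for i in range(2)]
    shared = 0.5 * (raw[0] + raw[1])
    # w1 interpolates between the individual reward and the fully shared reward
    return [(1.0 - w1) * raw[i] + w1 * shared for i in range(2)]

rewards = agario_rewards(eaten_from_other=[0.0, 3.0], other_gains=[1.0, -0.5], w0=0.5, w1=1.0)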
Standard setting: PG in the original game (w = [1, 0]) leads to a typical trust-dilemma dynamics: the two agents first learn to hunt and occasionally Cooperate (Fig. 9(a)), i.e., eat a script cell with the other agent close by; then accidentally one agent Attacks the other agent (Fig. 9(b)), which yields a big
            PBT        w=[0.5, 1]  w=[0, 1]   w=[0, 0]   RPG        RND
Rew.        3.3(0.2)   4.8(0.6)    5.1(0.4)   6.0(0.5)   8.9(0.3)   3.2(0.2)
#Attack     0.4(0.0)   0.7(0.2)    0.3(0.1)   0.5(0.1)   0.9(0.1)   0.4(0.0)
#Coop.      0.0(0.0)   0.6(0.6)    2.3(0.3)   1.6(0.1)   2.0(0.2)   0.0(0.0)
#Hunt       0.7(0.1)   0.6(0.3)    0.3(0.0)   0.7(0.0)   0.9(0.1)   0.7(0.0)
Table 3: Results in the aggressive setting of Agar.io: PBT: population training of parallel PG policies; RR: w=[0, 0] is the best candidate via RR; RPG: fine-tuned policy; RND: PG with RND bonus.
immediate bonus and makes the policy aggressive; finally, the policies converge to the non-cooperative equilibrium where both agents keep apart and hunt alone. The quantitative results are shown in Tab. 2. Baselines include population-based training (PBT) and a state-of-the-art exploration method for high-dimensional states, Random Network Distillation (RND) (Burda et al., 2019). RND and PBT occasionally learn cooperative strategies, while RR stably discovers a cooperative equilibrium with w = [1, 1], and the full RPG further improves the rewards. Interestingly, the best strategy obtained in the RR phase even has a higher Cooperate frequency than the full RPG: fine-tuning transforms the strongly cooperative strategy into a more efficient one, which strikes a better balance between Cooperate and selfish Hunt and produces a higher average reward.
Aggressive setting: Similarly, we apply RPG in the aggressive setting and show results in Tab. 3. Neither PBT nor RND was able to find any cooperative strategies in the aggressive game while RPG stably discovers a cooperative equilibrium with a significantly higher reward. We also observe a diverse set of complex strategies in addition to normal Cooperate and Attack. Fig. 10 visualizes the Sacrifice strategy derived with w = [1, 1]: the smaller agent rarely hunts script cells; instead, it waits in the corner for being eaten by the larger agent to contribute all its mass to its partner. Fig. 11 shows another surprisingly novel emergent strategy by w = [0.5, 1]: each agent first hunts individually to gain enough mass; then one agent splits into smaller cells while the other agent carefully eats a portion of the split agent; later on, when the agent who previously lost mass gains sufficient mass, the larger agent similarly splits itself to contribute to the other one, which completes the (ideally) never-ending loop of partial sacrifice. We name this strategy Perpetual for its conceptual similarity to the perpetual motion machine. Lastly, the best strategy is produced by w = [0, 0] with a balance between Cooperate and Perpetual: they cooperate to hunt script cells to gain mass efficiently and quickly perform mutual sacrifice as long as their mass is sufficiently large for split-and-eat. Hence, although the RPG policy has relatively lower Cooperate frequency than the policy by w = [0, 1], it yields a significantly higher reward thanks to a much higher Attack (i.e., Sacrifice) frequency.
5.3 LEARNING ADAPTIVE POLICIES
Monster-Hunt: We select policies trained by 8 different w values in the RR phase and use half of them for training the adaptive policy and the remaining half as hidden opponents for evaluation. We also make sure that both training and evaluation policies cover the following 4 strategy modes: (1) M(onster): the agent always moves towards the monster; (2) M(onster)-Alone: the agent moves towards the monster but also tries to keep apart from the other agent; (3) M(onster)-Coop.: the agent seeks to hunt the monster together with the other agent; (4) Apple: the agent only eats apples. The evaluation results are shown in Tab. 4, where the adaptive policy successfully exploits all the test-time opponents, including M(onster)-Alone, which was trained to actively avoid the other agent.
Agent                                    Adapt.     Coop.      Comp.
Opponent: Cooperative → Competitive
  #Attack                                0.2(0.0)   0.3(0.0)   0.1(0.1)
  Rew.                                   0.7(0.7)   -0.2(0.6)  0.8(0.5)
Opponent: Competitive → Cooperative
  #Coop.                                 1.0(0.3)   1.4(0.4)   0.3(0.4)
  Rew.                                   2.5(0.7)   3.6(1.2)   1.1(0.7)
Table 5: Adaptation test in Agar.io. Opponent type is switched half-way per episode. #Attack, #Coop.: episode statistics; Rew.: agent reward. Adaptive agents’ rewards are close to oracles.
Agar.io: We show that the trained agent can choose to cooperate or compete adaptively in the standard setting. We pick 2 cooperative policies (i.e., Cooperate preferred, w=[1, 0]) and 2 competitive policies (i.e., Attack preferred, w=[1, 1]) and use half of them for training and the other half for testing. For a hard challenge at test time, we switch the opponent within an episode, i.e., we use a cooperative opponent in the first half and then immediately switch to a competitive one, and vice versa, so a desired policy should adapt quickly at halftime. Tab. 5 compares the second-half behavior of the adaptive agent with the oracle pure-competitive/cooperative agents. The rewards of the adaptive agent are close to the oracles': even with half-way switches, the trained policy is able to exploit the cooperative opponent while avoiding being exploited by the competitive one.
6 RELATED WORK AND DISCUSSIONS
Our core idea is reward perturbation. In game theory, this is aligned with the quantal response equilibrium (McKelvey & Palfrey, 1995), a smoothed version of NE obtained when payoffs are perturbed by a Gumbel noise. In RL, reward shaping is popular for learning desired behavior in various domains (Ng et al., 1999; Babes et al., 2008; Devlin & Kudenko, 2011), which inspires our idea for finding diverse strategic behavior. By contrast, state-space exploration methods (Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Sharma et al., 2020) only learn low-level primitives without strategy-level diversity (Baker et al., 2020).
RR trains a set of policies, which is aligned with population-based training in MARL (Jaderberg et al., 2017; 2019; Vinyals et al., 2019; Long et al., 2020; Forestier et al., 2017). RR is conceptually related to domain randomization (Tobin et al., 2017), with the difference that we train separate policies instead of a single universal one, which suffers from mode collapse (see appendix D.2.3). RPG is also inspired by the map-elites algorithm (Cully et al., 2015) from the evolutionary learning community, which optimizes multiple objectives simultaneously for sufficiently diverse policies. Our work is also related to Forestier et al. (2017), which learns a set of policies w.r.t. different fitness functions in the single-agent setting. However, they only consider a restricted fitness function class, i.e., the distance to each object in the environment, which can be viewed as a special case of our setting. Besides, RPG helps train adaptive policies against a set of opponents, which is related to Bayesian games (Dekel et al., 2004; Hartline et al., 2015). In RL, there are works on learning when to cooperate/compete (Littman, 2001; Peysakhovich & Lerer, 2018a; Kleiman-Weiner et al., 2016; Woodward et al., 2019; McKee et al., 2020), which is a special case of ours, or learning robust policies (Li et al., 2019; Shen & How, 2019; Hu et al., 2020), which complements our method.
Although we choose decentralized PG in this paper, RR can be combined with any other multi-agent learning algorithm for games, such as fictitious play (Robinson, 1951; Monderer & Shapley, 1996; Heinrich & Silver, 2016; Kamra et al., 2019; Han & Hu, 2019), double oracle (McMahan et al., 2003; Lanctot et al., 2017; Wang et al., 2019; Balduzzi et al., 2019) and regularized self-play (Foerster et al., 2018; Perolat et al., 2020; Bai & Jin, 2020). Many of these works have theoretical guarantees to find an (approximate) NE, but there is little work on which NE strategy these algorithms converge to when multiple NEs exist, e.g., the stag-hunt game and its variants, for which many learning dynamics fail to converge to a prevalence of the pure strategy Stag (Kandori et al., 1993; Ellison, 1993; Fang et al., 2002; Skyrms & Pemantle, 2009; Golman & Page, 2010).
In this paper, we primarily focus on how reward randomization empirically helps MARL discover better strategies in practice and therefore only consider stag hunt as a particularly challenging example where an “optimal” NE with a high payoff for every agent exists. In general cases, we can select a desired strategy w.r.t. an evaluation function. This is related to the problem of equilibrium refinement (or equilibrium selection) (Selten, 1965; 1975; Myerson, 1978), which aims to find a subset of equilibria satisfying desirable properties, e.g., admissibility (Banks & Sobel, 1987), subgame perfection (Selten, 1965), Pareto efficiency (Bernheim et al., 1987) or robustness against opponent’s deviation from best response in security-related applications (Fang et al., 2013; An et al., 2011).
ACKNOWLEDGMENTS
This work is supported by National Key R&D Program of China (2018YFB0105000). Co-author Fang is supported, in part, by a research grant from Lockheed Martin. Co-author Wang is supported, in part, by gifts from Qualcomm and TuSimple. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. The authors would like to thank Zhuo Jiang and Jiayu Chen for their support and input during this project. Finally, we particularly thank Bowen Baker for initial discussions and suggesting the Stag Hunt game as our research testbed, which eventually leads to this paper.
A PROOFS
Proof of Theorem 1. We apply self-play policy gradient to optimize θ1 and θ2. Here we consider a projected version, i.e., if at some time t, θ1 or θ2 ∉ [0, 1], we project it to [0, 1] to ensure it is a valid distribution.
We first compute the utility given a pair (θ1, θ2)
U1(θ1, θ2) = aθ1θ2 + cθ1(1 − θ2) + b(1 − θ1)θ2 + d(1 − θ1)(1 − θ2),
U2(θ1, θ2) = aθ1θ2 + bθ1(1 − θ2) + c(1 − θ1)θ2 + d(1 − θ1)(1 − θ2).
We can compute the policy gradient
∇_{θ1}U1(θ1, θ2) = aθ2 + c(1 − θ2) − bθ2 − d(1 − θ2) = (a + d − b − c)θ2 + c − d,
∇_{θ2}U2(θ1, θ2) = aθ1 − bθ1 + c(1 − θ1) − d(1 − θ1) = (a + d − b − c)θ1 + c − d.
Recall that in order to find the optimal solution, both θ1 and θ2 need to increase. Also note that the initial θ1 and θ2 determine the final solution. In particular, only if θ1 and θ2 are increasing at the beginning will they converge to the desired solution.
To make either θ1 or θ2 increase, we need to have
(a + d − b − c)θ1 + c − d > 0 or (a + d − b − c)θ2 + c − d > 0. (1)
Consider the scenario a − b = ε(d − c). In order for Inequality (1) to hold, we need at least one of θ1, θ2 ≥ 1/(1 + ε).
If we initialize θ1 ∼ Unif[0, 1] and θ2 ∼ Unif[0, 1], the probability that either θ1 ≥ 1/(1 + ε) or θ2 ≥ 1/(1 + ε) is 1 − (1/(1 + ε))^2 = (ε^2 + 2ε)/(1 + 2ε + ε^2) = O(ε).
Proof of Theorem 2. Using a similar observation as in Theorem 1, we know a necessary condition to make PG converge to a sub-optimal NE is
(a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0.
Based on our generating scheme for a, b, c, d and the initialization scheme for θ1, θ2, we can verify that each of the two events above occurs with probability at most 0.3. Therefore, via a union bound, we know
P ((a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0) ≤ 0.6. (2)
Since each round is independent, the probability that PG fails in all N rounds is upper bounded by 0.6^N. Therefore, the success probability is lower bounded by 1 − 0.6^N = 1 − exp(−Ω(N)).
B ENVIRONMENT DETAILS
B.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, two agents play 10 rounds, i.e., both PPO's trajectory length and the episode length are 10. The action of each agent is a 1-dimensional vector a_i = {t_i}, i ∈ {0, 1}, where t_i = 0 denotes taking the Stag action and t_i = 1 denotes taking the Hare action. The observation of each agent consists of the actions taken by itself and its opponent in the last round, i.e., o_i^r = {a_i^{r−1}, a_{1−i}^{r−1}}, i ∈ {0, 1}, where r denotes the current round. Since neither agent has taken an action before the first round, the initial observation is o_i = {−1, −1}.
B.2 Monster-Hunt
In Monster-Hunt, two agents can move one step in any of the four cardinal directions (Up, Down, Left, Right) at each timestep. Let a_i = {t_i}, i ∈ {0, 1}, denote the action of agent i, where t_i is a discrete 4-dimensional one-hot vector. The position of each agent cannot exceed the border of the 5-by-5 grid; actions that would cross the border are invalid. One monster and two apples respawn in different grids at initialization. If an agent eats an apple (i.e., moves over it in the grid world), it gains 2 points. If two agents try to eat the same apple simultaneously, the points are randomly assigned to only one of them. Catching the monster alone causes an agent to lose 2 points, but if two agents catch the monster simultaneously, each agent gains 5 points. At each timestep, the monster and apples respawn randomly elsewhere in the grid world if they have been consumed. In addition, the monster chases the agent closest to it at each timestep. The monster may move over an apple during the chase; in this case, an agent that catches the monster exactly on that grid gains the sum of both points. Each agent's observation o_i is a 10-dimensional vector formed by concatenating its own position p_i, the other agent's position p_{1−i}, the monster's position p_monster, and the sorted apple positions p_apple0, p_apple1, i.e., o_i = {p_i, p_{1−i}, p_monster, p_apple0, p_apple1}, i ∈ {0, 1}, where p = (u, v) denotes 2-dimensional coordinates in the gridworld.
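A minimal sketch of assembling this 10-dimensional observation; the lexicographic sorting of apple positions is an assumption about the ordering convention.

import numpy as np

def build_observation(agent_id, agent_pos, monster_pos, apple_pos):
    # o_i = [own position, other agent's position, monster position, two sorted apple positions]
    other_id = 1 - agent_id
    apples = sorted(apple_pos)                      # canonical (lexicographic) apple order
    parts = [agent_pos[agent_id], agent_pos[other_id], monster_pos] + list(apples)
    return np.concatenate([np.asarray(p, dtype=np.float32) for p in parts])   # shape (10,)

obs = build_observation(0, agent_pos=[(0, 0), (4, 4)], monster_pos=(2, 2),
                        apple_pos=[(1, 3), (3, 1)])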
B.3 Monster-Hunt WITH MORE THAN 2 AGENTS
Here we consider extending RPG to the general setting of N agents. In most multi-agent games, the reward function is fully symmetric for agents of the same type. Hence, as long as we can formulate the reward function in a linear form over a feature vector and a shared weight, i.e., R(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w, we can directly apply RPG without any modification by setting R = {R_w : R_w(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w}. Note that the dimension of the feature vector φ(·) typically remains fixed w.r.t. the number of agents N. For example, in the Agar.io game, no matter how many players are in the game, the rule of how to obtain reward bonuses and penalties remains the same.
Here, we experiment RPG in Monster-Hunt with 3 agents. The results are shown in Fig. 12. We consider baselines including the standard PG (PG) and population-based training (PBT). RPG reliably discovers a strong cooperation strategy with a substantially higher reward than the baselines.
B.4 Escalation
In Escalation, two agents appear at random positions and one grid lights up at initialization. If two agents step on the lit grid simultaneously, each agent gains 1 point, the lit grid goes out, and an adjacent grid lights up. Both agents gain 1 point again if they step on the next lit grid together. But if one agent steps off the path, the other agent loses 0.9L points, where L is the current length of stepping together, and the game is over. Another option is that the two agents step off the path simultaneously; in this case neither agent is punished and the game continues. As the length L of stepping together increases, the cost of betrayal increases linearly. a_i = {t_i}, i ∈ {0, 1}, denotes the action of agent i, where t_i is a discrete 4-dimensional one-hot vector. The observation o_i of agent i is composed of its own position p_i, the other agent's position p_{1−i}, and the lit grid's position p_lit, i.e., o_i = {p_i, p_{1−i}, p_lit}, i ∈ {0, 1}, where p = (u, v) denotes 2-dimensional coordinates in the gridworld. Moreover, we utilize a GRU to encode the length L implicitly instead of observing it explicitly.
B.5 Agar.io
In the original online game Agar.io, multiple players are confined to a circular petri dish. Each player controls one or more balls using only a cursor and the two keyboard keys "space" and "w". All balls belonging to the player move towards the position the cursor points at. Balls larger than a threshold split into 2 smaller balls and rush ahead when the player presses "space". Balls larger than another threshold emit tiny motionless food-like balls when the player presses "w". Agar.io has several play modes, such as the "Free-For-All" mode (all players fight on their own and can eat each other) and the "Team" mode (players are divided into two groups; they should cooperate with players in the same group and eat players from the other group).
We simplified the settings of the original game: agents do not need to emit tiny motionless balls, and all of them fight with each other (FFA mode). The action space of the game is target × {split, no_split}. target ∈ [0, 1]^2 is the target position that all balls belonging to the agent move towards. The binary action split or no_split indicates whether the player chooses to split, which causes all balls larger than a threshold to split into 2 smaller ones and rush ahead for a short while. These split balls re-merge after some time, and then the agent can split again. When one agent's ball meets another agent's ball and the former is at least 1.2 times larger than the latter, the latter is eaten and the former gets all its mass. The reward is defined as the increment of the balls' mass, so every agent's goal is to grow larger by eating others while avoiding being eaten. However, larger balls move more slowly, so it is hard to catch smaller balls by simply chasing them. Splitting helps, but it requires rushing in an accurate direction. In our experiments, there were 7 agents interacting with each other: 2 agents were learned by our algorithm and would quit the game if all their balls were eaten; 5 agents were controlled by a script and would respawn at a random place if all their balls were eaten. Learn-based agents were initialized larger than script-based agents, so it was basically one-way catching. In this setting, cooperation was the most efficient behavior for the learn-based agents to gain positive rewards: they coordinate to surround script-based agents and catch them.
Observation space: We denote partial observation of agent i as oi, which includes global information of the agent (denoted as oi,global) and descriptions of all balls around the agent (including balls owned by the agent, denoted as oi,balls. and oi,balls = {oi,ball,1, oi,ball,2, ..., oi,ball,m}, where oi,ball,j denotes the j-th ball around the agent and there are m observed balls in all). oi,global = {li,obs, wi,obs, pi,center, vi, si,alive, ni,own, ni,script, ni,other, ai,last, ri,max, ri,min,mi} where li,obs, wi,obs (they are both 1D filled with a real number, from here the form like (1D, real) will be used as the abbreviation) are the length and width of the agent’s observation scope, pi,center (2D, real) is its center position, vi (2D, real) is the speed of its center, si,alive(1D, binary) is whether the other learn-based agent is killed, ni,own, ni,script, ni,other(1D, real) are numbers of each type of balls nearby (3 types: belonging to me, or belonging to a script agent, or belonging to another learn-based agent), ai,last(3D, real) is the agent’s last action, ri,max, ri,min(1D, real) are maximal and minimal radius of all balls belonging to the agent. for any j = 1, 2, ...,m, oi,ball,j = {pi,j,relative, pi,j,absolute, vi,j , vi,j,rush, ri,j , log(ri,j), di,j , ei,j,max, ei,j,min, si,j,rem, ti,j}, where pi,j,relative, pi,j,absolute(2D, real) are the ball’s relative and absolute position, vi,j is its speed, vi,j,rush is the ball’s additional rushing speed(when a ball splits to 2 smaller balls, these 2 balls will get additional speed and it’s called vi,j,rush, otherwise vi,j,rush = 0), ri,j(1D, real) is its radius, di,j is the distance between the ball and the center of the agent, ei,j,max, ei,j,min(1D, binary) are whether the ball can be eaten by the maximal or minimal balls of the observing agent, si,j,rem(1D, binary) is whether the ball is able to remerge at present. ti,j(3D, one hot) is the type of the ball.
The script-based agent automatically chases after and splits towards smaller agents. When facing extreme danger (we define "extreme danger" as a larger learn-based agent being very close to it), it uses a 3-step depth-first search to plan the best escape route. More details of the script can be found in our code. We played against the script-based agent ourselves many times: we could never catch it with a single ball and only rarely caught it by splitting.
C TRAINING DETAILS
C.1 GRIDWORLD GAMES
In Monster-Hunt and Escalation, the agents' networks follow an actor-critic (policy-value) architecture. We consider N = 2 agents with a policy profile π = {π_0, π_1} parameterized by θ = {θ_0, θ_1}. The policy network π_i takes observation o_i as input, passes it through two hidden layers with 64 units each, and then outputs action a_i. The value network takes the observations of both agents, o = {o_0, o_1}, as input and outputs the V-value of agent i; similarly, two hidden layers with 64 units each precede the output.
In Escalation, we also place an additional GRU module before the output in the policy network and the value network, respectively, to infer the opponent's intentions from historical information. Note that the 64-dimensional GRU hidden state h changes when the policy network is updated. In order to both keep forward information and use backward information to compute the generalized advantage estimate (GAE) with enough trajectories, we split buffer data into small chunks, e.g., 10 consecutive timesteps per chunk. The initial hidden state h_init, i.e., the first hidden state h_0, is kept for each data chunk, but we do another forward pass to re-compute {h_1, ..., h_{M−1}}, where M is the length of one data chunk, and we keep buffer reuse low, e.g., 4 in practice.
Agents in Monster-Hunt and Escalation are trained by PPO with independent parameters. The Adam optimizer is used to update network parameters, and each experiment is run 3 times with different random seeds. More optimization hyper-parameter settings are in Tab. 6. In addition, Monster-Hunt also utilizes GRU modules to infer the opponent's identity during adaptation training, and the number of parallel threads is set to 64.
Count-based exploration: We add the count-based intrinsic reward r_int to the environment reward during training. When the agent's observation is o, r_int = α/n_o, where α is a properly tuned hyperparameter (0.3 in Monster-Hunt and 1 in Escalation) and n_o is the number of times the agent has observed o.
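A minimal sketch of this count-based bonus for discrete gridworld observations:

from collections import defaultdict

class CountBonus:
    # intrinsic reward r_int = alpha / n_o, where n_o counts visits to observation o
    def __init__(self, alpha):
        self.alpha = alpha
        self.counts = defaultdict(int)

    def __call__(self, obs):
        key = tuple(obs)                      # gridworld observations are small discrete vectors
        self.counts[key] += 1
        return self.alpha / self.counts[key]

bonus = CountBonus(alpha=0.3)                 # 0.3 in Monster-Hunt, 1 in Escalation
r_total = -2.0 + bonus((0, 0, 4, 4))          # environment reward plus intrinsic bonus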
DIAYN: In Monster-Hunt, we use DIAYN to train 10 diverse policies in the first 140k episodes (DIAYN's discriminator has 3 FC layers with 256, 128, and 10 units, respectively) and choose the policy with the best performance under Monster-Hunt's reward setting to fine-tune in the next 280k episodes. Note that DIAYN does not have a warm-start phase before fine-tuning in its original paper, so we did not add one either. Also note that in the first, unsupervised learning phase, DIAYN does not optimize for any specific reward function; hence, we did not plot the reward curve for DIAYN in Fig. 7 for this phase. Instead, we simply put a dashed line showing the reward of the best selected pair of policies from DIAYN pretraining.
MAVEN: We use the open-sourced implementation of MAVEN from https://github.com/ AnujMahajanOxf/MAVEN.
Population-based training: In each PBT trial, we directly train the same number of parallel PG policies as RPG, with different random seeds, for each problem and choose the one with the best performance as the final policy. Note that the final training curve is averaged over 3 PBT trials.
C.2 Agar.io
In Agar.io, we used PPO as our algorithm, and the agents' networks also follow an actor-critic (policy-value) architecture with a GRU unit (i.e., PPO-GRU). We consider N = 2 agents with a policy profile π = {π_0, π_1} sharing parameters θ. The policy network π_i takes observation o_i as input. Following Baker et al. (2019), o_{i,balls} is first separated into 3 groups according to the balls' types: o_{i,ownballs}, o_{i,scriptballs} and o_{i,otherballs}. Three different multi-head attention models, each with 4 heads and 64 units for the transformations of keys, queries and values, are used to embed the information of the 3 types of balls, taking the corresponding part of o_{i,balls} as values and queries and o_{i,global} as keys. Their outputs are then concatenated and transformed by an FC layer with 128 units before being sent to a GRU block with 128 units. After that, the hidden state is copied to 2 heads for the policy and value outputs. The policy head starts with 2 FC layers, both with 128 units, and ends with 2 heads that generate the discrete (split or no_split) and continuous (target) actions. The value head has 3 FC layers with 128, 128, and 1 units, respectively, and outputs a real number.
PPO-GRU was trained with 128 parallel environment threads. Agar.io's episode length was uniformly randomly sampled between 300 and 400 during both training and evaluation. Buffer data were split into small chunks of length 32 in order to diversify training data and stabilize the training process, and the buffer was reused 4 times to increase data efficiency. Hidden states of each chunk, except the initial one, were re-computed after each reuse to preserve PPO's "on-policy" property as much as possible. Actions were repeated 5 times in the environment whenever the policy was executed, and only the observation after the last action repeat was sent to the policy. Each training process started with curriculum learning in the first 1.5e7 steps: the speed of script agents was multiplied by x, where x is uniformly random-sampled between max{0, (n − 1e7)/5e6} and min{1, max{0, (n − 5e6)/5e6}} at the beginning of each episode, and n is the number of training steps. After the curriculum learning, the speed was fixed to the standard value. Each experiment was run 3 times with different random seeds. The Adam optimizer was used to update network parameters. More optimization hyper-parameter settings are in Tab. 7.
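A sketch of the script-agent speed curriculum described above, returning the per-episode multiplier x as a function of the training step count n:

import random

def curriculum_speed_factor(n, rng=random):
    # after 1.5e7 steps the curriculum ends and the standard speed is used
    if n >= 1.5e7:
        return 1.0
    lo = max(0.0, (n - 1e7) / 5e6)
    hi = min(1.0, max(0.0, (n - 5e6) / 5e6))
    return rng.uniform(lo, hi)

x = curriculum_speed_factor(1.2e7)    # sampled between 0.4 and 1.0 at this point in training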
D ADDITIONAL EXPERIMENT RESULTS
D.1 Monster-Hunt
In Monster-Hunt, we set Cmax = 5 for sampling w. Fig. 13 illustrates the policies discovered by several selected w values, where different strategic modalities can be clearly observed: e.g., with w = [0, 5, 0], agents always avoid the monster and only eat apples. In Fig. 14, it is worth noting that w = [5, 0, 2] can yield the best policy profile (i.e., two agents move together to hunt the monster) and, with some seeds, does not even require further fine-tuning. However, the performance of w = [5, 0, 2] is highly unstable, and with other seeds it may converge to another NE (i.e., two agents move to a corner and wait for the monster). Hence w = [5, 0, 5], which yields stable, strong cooperation strategies across seeds, is chosen in the RR phase when w = [5, 0, 2] performs poorly. We show the rewards obtained by different policies in Fig. 14, where the policies learned by RPG produce the highest rewards.
D.2 Agar.io
D.2.1 STANDARD SETTING
We sampled 4 different w values, which varied in their degree of cooperation. We also ran experiments using baseline PG alone or PG with the intrinsic reward generated by Random Network Distillation (RND) to compare with RPG. RR lasted for 40M steps, and only the best reward parameter found in RR (w = [1, 1]) was warmed up for 3M steps and fine-tuned for 17M more steps. PG and RND were also trained for 60M steps in order to compare with RPG fairly. In Fig. 15, we can see that PG and RND produced very low rewards because they all converged to non-cooperative policies; w = [1, 1] produced the highest rewards after RR, and rewards increased further after fine-tuning.
D.2.2 AGGRESSIVE SETTING
We sampled 5 different w values, and their behaviors were much more diverse; the other training settings were the same as in the standard setting. In Fig. 16, note that simply sharing the reward (w = [1, 1]) did not obtain a very high reward: since attacking each other also benefits each other, the two agents just learned to sacrifice. Again, Fig. 16 illustrates that the rewards of RPG were far ahead of the other policies, while both PG and PG+RND failed to learn cooperative strategies.
We also list all results of the standard and aggressive settings in Tab. 8 for a clearer comparison.
D.2.3 UNIVERSAL REWARD-CONDITIONED POLICY
We also tried to train a universal policy conditioned on w by randomly sampling a different w at the beginning of each episode during training, rather than fixing each w and training a separate policy. However, as Fig. 17 illustrates, the learning process was very unstable and the model performed almost identically under different w, due to the intrinsic disadvantage of an on-policy algorithm handling multiple tasks: the learning algorithm may spend more effort on w values where higher rewards are easier to obtain while ignoring the performance on other w, which makes it very hard to obtain diverse behaviors.
D.3 LEARN ADAPTIVE POLICY
In this section, we add the opponent's identity ψ to the input of the value network to stabilize the training process and boost the performance of the adaptive agent. ψ is a C-dimensional one-hot vector, where C denotes the number of opponents.
D.3.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, we randomize the payoff matrix, which is a 4-dimensional vector, and set Cmax = 4 for sampling w. The number of parallel threads is 512 and the episode length is 10. Other training hyper-parameter settings are the same as in Tab. 6. Fig. 18 shows that different w = [a, b, c, d] (i.e., [4, 0, 0, 0], [0, 0, 0, 4], [0, 4, 4, 0], [4, 1, 4, 0]) yield different policy profiles; e.g., with w = [0, 0, 0, 4], both agents tend to eat the hare. The original game corresponds to w = [4, 3, −50, 1]. Tab. 9 shows that w = [4, 0, 0, 0] yields the highest reward and reaches the optimal NE without further fine-tuning.
Using the 4 different strategies obtained in the RR phase as opponents, we can train an adaptive policy that makes proper decisions according to the opponent's identity. Fig. 19 shows the adaptation training curve; we can see that the policy yields adaptive actions stably after 5e4 episodes. At the evaluation stage, we introduce 4 hand-designed opponents to test the performance of the adaptive policy, including the Stag opponent (always hunt the stag), the Hare opponent (always eat the hare), the Tit-for-Tat (TFT) opponent (hunt the stag at the first step, then copy the action executed by the other agent in the previous step), and the Random opponent (randomly choose to hunt the stag or eat the hare at each step). Tab. 10 shows that the adaptive policy exploits all hand-designed strategies, including the Tit-for-Tat opponent, which differs significantly from the training opponents.
D.3.2 Monster-Hunt
We use the policy population Π2 trained by 4 w values (i.e., w = [5, 1, −5], w = [4, 2, −2], w = [0, 5, 0], w = [5, 0, 5]) in the RR phase as opponents for training the adaptive policy. In addition, we sample 4 other w values (i.e., w = [5, 0, 0], w = [−5, 5, −5], w = [−5, 0, 5], w = [5, −5, 5]) with Cmax = 5 to train new opponents for evaluation. Fig. 20 shows the adaptation training curve of the Monster-Hunt game, where the adaptive policy takes actions stably according to the opponent's identity.
D.3.3 Agar.io
In Agar.io, we used 2 types of policies from RR, w = [1, 0] (i.e., cooperative) and w = [0, 1] (i.e., competitive), as opponents, and trained an adaptive policy facing each opponent with probability 50% in the standard setting, while only its value head could observe the opponent's type directly. We then expected the policy to cooperate or compete appropriately with the corresponding opponent. As Fig. 21 illustrates, the adaptive policy learns to cooperate with cooperative partners, avoid being exploited by competitive partners, and exploit both types of partners.
More details about the training and evaluation process: Oracle pure-cooperative policies are learned against a competitive policy for 4e7 steps, and so are oracle pure-competitive policies. The adaptive policy is trained for 6e7 steps. The length of each episode is 350 steps (half is 175 steps). During evaluation, the policy playing against the opponent was the adaptive policy for the first 175 steps, whether we were testing adaptive or oracle policies. When we tested adaptive policies, the policy kept playing for another 175 steps while the opponent changed to the other type and its hidden state was reset to zero. When we tested oracle policies, the policy was switched to the corresponding oracle policy and the opponent also changed its type, while both of their hidden states were reset. | 1. What is the focus of the paper regarding two-player games and Nash equilibria?
2. What are the strengths of the proposed algorithm, particularly in addressing the problem of finding desirable Nash equilibria?
3. Do you have any concerns about the algorithm's effectiveness in specific situations?
4. How does the reviewer assess the clarity and presentation of the paper's content?
5. Are there any suggestions for improving the experimental setup or graph representation? | Review | Review
This paper considers the problem of finding a Nash equilibrium in two-player games where each player runs an RL algorithm. In this paper they ask the question -- which Nash equilibrium does the dynamics converge to in this two-player game (where each player optimizes based on a policy gradient algorithm)? They construct two-player games with multiple Nash equilibria; one is a favorable Nash equilibrium where both players get high rewards while the other is a less favorable Nash equilibrium where both players only get medium rewards. In such games they first show that, in general, simply running policy gradient on the natural reward function, i.e., the observed payoff, will not lead to the desirable Nash equilibrium. The goal of this paper is to ameliorate this by considering perturbations in the reward space. At a high level, the algorithm learns multiple policies on a class of games generated by sampling multiple reward functions from a family and training one policy per sampled reward function using PG. Then, using an evaluation function, the best policy is picked by evaluating each of the learnt policies on the original game.
My comments on this paper are as follows. First, I think the question studied in this paper is well-motivated. In general, controlling for which nash equilibrium a two player (or N-player in general) dynamics should converge to is an important problem. This problem has been extensively considered in the game theory and online learning literature. So the importance naturally extends to the multi-agent RL world. The proposed algorithm here is reminiscent of follow-the-perturbed-leader, where perturbations in the reward space (as opposed to the policy space) leads to improved algorithms. The strengths of this paper are as follows.
The initial parts of the paper are well-written. They consider simple toy examples to show that PG can indeed lead to bad equilibria. Then, using a stylized analysis, they show that sampling from the reward space can indeed overcome this problem (in games where the family of reward functions for which PG converges to the desired Nash equilibrium is large, yet for the specific game at hand PG leads to a bad equilibrium).
Extensive experimental evaluation. The paper considers a total of four testbed games and shows the benefit of using reward randomization in these settings. In the supplementary material there are extensive simulation results on these games. In particular, they fix a particular "true" payoff matrix and then run their algorithm against a few baselines on this game.
The problem considered is important, not just in the artificial game setting but also in many practical applications which can be modeled as a game that ends up having multiple NE.
Specification of the hyper-parameters of the algorithm used. Also, evaluating the algorithms using multiple seeds (which is an important criterion for RL algorithms).
Having said that, here are some of the weaknesses of this paper.
The considered algorithm "works" probably because the settings are easy (As defined by convergence to the correct NE by PG algorithms) for an average instance from the space of all instances. In particular, consider the following thought exercise. Let us sample a random payoff matrix from the space of payoffs and run PG on this game. What is the prob. that PG will get stuck at the bad NE? If the answer is that this prob. is high, then it is also likely that the proposed algorithm will not work (since this is essentially the idea being exploited in the algorithm). On the other hand, if the prob. is low then it shows that in general the considered game is easy and this algorithm only optimizes for the "outlier" scenarios. In general, that is the biggest weakness I see in this paper; it makes a pretty strong implicit structural assumption on the games and their corresponding payoff landscape. Am I missing something here?
I also think this paper could improve its presentation. First, the experimental setup is pretty unclear. What is the reward function in each of the games. I do not see a formal definition. Likewise, the algorithm description itself is pretty informal. It takes a bit of leap of faith in assuming that there exists a well-defined evaluation function (that is computable). It would greatly help the paper in readability if things are written in a formal manner. Moreover, for each of the games (possibly in supplementary) taking a detailed approach to the setup of the game, the reward function and the evaluation function would help in reproducibility and understanding of this paper. Finally, I also think some of the graphs could use better color-schemes or other differentiating factors apart from color. Some of the colors are pretty close to each other and in general harder to read.
My overall evaluation is based on the first point in my weakness section and considering the paper in totality. If the authors can sufficiently answer that question and show convincing experiments and/or explanation I am willing to change my score. |
ICLR | Title
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Abstract
We propose a simple, general and effective technique, Reward Randomization, for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game, Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player, even when using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found on our website: https://sites.google.com/view/staghuntrpg.
1 INTRODUCTION
Games have been a long-standing benchmark for artificial intelligence, which prompts persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon’s initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002), to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also become critical for developing new learning algorithms for even harder challenges.
Most recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, Nash Equilibrium (NE) (Nash, 1951), where no player could benefit from altering its strategy unilaterally, provides a general solution concept and serves as a goal for policy learning and has attracted increasingly significant interests from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works studied how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting.
Despite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice, not only because different NEs can produce drastically different payoffs but also because, when facing unknown players who are trained to play an NE strategy, we can gain an advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and Starcraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.
Consider an extremely simple example, a 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure strategy NEs exist: a “risky” cooperative equilibrium with the highest payoff
for both agents and a “safe” non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails to discover the high-payoff “risky” NE with high probability. The intuition is that the neighborhood that makes policies converge to the “risky” NE can be substantially small comparing to the entire policy space. Therefore, an exponentially large number of exploration steps are needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR),
which can help PG discover the “risky” cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game
(Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG).
To further illustrate the effectiveness of RPG, we introduce three Markov games – two gridworld games and a real-world online game Agar.io. All these games have multiple NEs including both “risky” cooperation strategies and “safe” non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the “risky” cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This “diversityseeking” property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, w.r.t. its test-time opponent’s behavior.
We summarize our contributions as follows:
• We studied a collection of challenging multi-agent games, where the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.
• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.
• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.
• A new multi-agent environment Agar.io, which allows complex multi-agent strategic behavior. We released the environment to the community as a novel testbed for MARL research.
2 A MOTIVATING EXAMPLE: STAG HUNT
          Stag    Hare
Stag      a, a    c, b
Hare      b, c    d, d

Table 1: The stag-hunt game, a > b ≥ d > c.
We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau’s work, “A discourse on inequality” (Rousseau, 1984): a group of hunters are silently tracking a big stag; now a hare shows up, and each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1
with two actions for each agent, Stag (S) and Hare (H). There are two pure strategy NEs: the Stag NE, where both agents choose S and receive a high payoff a (e.g., a = 4), and the Hare NE, where both agents choose H and receive a lower payoff d (e.g., d = 1). The Stag NE is “risky” because if one agent defects, it still receives a decent reward b (e.g., b = 3) for eating the hare alone, while the other agent with an S action may suffer a big loss c for being hungry (e.g., c = −10). Formally, let A = {S, H} denote the action space, πi(θi) denote the policy for agent i (i ∈ {1, 2}) parameterized by θi, i.e., P[πi(θi) = S] = θi and P[πi(θi) = H] = 1 − θi, and R(a1, a2; i) denote the payoff for agent i when agent 1 takes action a1 and agent 2 takes action a2. Each agent i optimizes its expected utility Ui(π1, π2) = E_{a1∼π1, a2∼π2}[R(a1, a2; i)]. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until
convergence [1]: (1) estimate the gradient ∇i = ∇θi Ui(π1, π2) via self-play; (2) update the policies by θi ← θi + α∇i with learning rate α. Although PG is widely used in practice, the following theorem shows that in certain scenarios, unfortunately, the probability that PG converges to the Stag NE is low.
Theorem 1. Suppose a − b = ε(d − c) for some 0 < ε < 1 and initialize θ1, θ2 ∼ Unif[0, 1]. Then the probability that PG discovers the high-payoff NE is upper bounded by (2ε + ε²) / (1 + 2ε + ε²).
Theorem 1 shows that when the risk is high (i.e., c is very low and hence ε is small), the probability of finding the Stag NE via PG is very low. Note this theorem applies to random initialization, which is standard in RL.
Remark: One needs at least N = Ω(1/ε) restarts to ensure a constant success probability.
Fig. 2 shows empirical studies: we select 4 value assignments, i.e., c ∈ {−5, −20, −50, −100} with a = 4, b = 3, d = 1, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and, as c becomes smaller, the probability of finding the Stag NE significantly decreases. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm, whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to R(a1, a2; 1) + R(a1, a2; 2). Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.
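As a concrete illustration of these dynamics (separate from the PPO experiments in Fig. 2), the exact-gradient PG update on the matrix game can be simulated in a few lines. The sketch below is ours; the closed-form gradient comes from the proof of Thm. 1 in App. A, and all function and variable names are illustrative only.

```python
import numpy as np

def pg_stag_hunt(a=4.0, b=3.0, c=-10.0, d=1.0, lr=0.1, steps=2000, seed=0):
    """Projected policy gradient on the matrix-form stag hunt.
    theta[i] = P[agent i plays Stag]; both agents ascend their own utility."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 1.0, size=2)            # random initialization
    for _ in range(steps):
        # grad_i = (a + d - b - c) * theta_{-i} + (c - d)  (see App. A)
        grad = (a + d - b - c) * theta[::-1] + (c - d)
        theta = np.clip(theta + lr * grad, 0.0, 1.0)  # projection onto [0, 1]
    return theta

# Estimate how often PG reaches the Stag NE (theta_1 = theta_2 = 1) under
# random restarts; the fraction shrinks as c becomes more negative (Thm. 1).
hits = sum(np.all(pg_stag_hunt(seed=s) > 0.99) for s in range(500))
print("estimated P(Stag NE) ~", hits / 500)
```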
2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME
Thm. 1 suggests that the utility function R highly influences what strategy PG might learn. Taking one step further, even if a strategy is difficult to learn with a particular R, it might be easier with some other function R′. Hence, if we can define an appropriate space R over different utility functions and draw samples from R, we may possibly discover desired novel strategies by running PG on some sampled utility function R′ and evaluating the obtained policy profile on the original game with R. We call this procedure Reward Randomization (RR).
Concretely, in the stag-hunt game, R is parameterized by 4 variables (aR, bR, cR, dR). We can define a distribution over R4, draw a tuple R′ = (aR′ , bR′ , cR′ , dR′) from this distribution, and run PG on R′. Denote the original stag-hunt game where the Stag NE is hard to discover as R0. Reward randomization draws N perturbed tuples R1, . . . , RN , runs PG on each Ri, and evaluates each of the obtained strategies on R0. The theorem below shows it is highly likely that the population of the N policy profiles obtained from the perturbed games contains the Stag NE strategy.
Theorem 2. For any Stag-Hunt game, suppose in the i-th run of RR we randomly generate aRi, bRi, cRi, dRi ∼ Unif[−1, 1] and initialize θ1, θ2 ∼ Unif[0, 1]; then with probability at least 1 − 0.6^N = 1 − exp(−Ω(N)), the aforementioned RR procedure discovers the high-payoff NE.
Here we use the uniform distribution as an example. Other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves standard PG w.r.t. success probability.
Remark 1: For the scenario studied in Thm. 1, to achieve a (1 − δ) success probability for some 0 < δ < 1, PG requires at least N = Ω((1/ε) log(1/δ)) random restarts. For the same scenario, RR only requires at most N = O(log(1/δ)) repetitions, which is independent of ε. When ε is small, this is a huge improvement.
Remark 2: Thm. 2 suggests that compared with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can hardly be reached in the original game.
Note that although in Stag Hunt we particularly focus on the Stag NE, which has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games using a payoff evaluation function E(π1, π2). For example, we can set E(π1, π2) = U1(π1, π2) + U2(π1, π2) for a prosocial NE, or look for Pareto-optimal NEs by setting E(π1, π2) = βU1(π1, π2) + (1 − β)U2(π1, π2) with 0 ≤ β ≤ 1.
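To make the procedure of this subsection concrete, reward randomization on the matrix game can be sketched as follows. This is a minimal illustration written by us (not the released code), reusing the same projected-PG update as in the previous snippet; all names are ours.

```python
import numpy as np

def pg_matrix(R, lr=0.1, steps=2000, rng=None):
    """Projected PG (as in the previous sketch) on a payoff tuple R = (a, b, c, d)."""
    a, b, c, d = R
    theta = rng.uniform(0.0, 1.0, size=2)
    for _ in range(steps):
        grad = (a + d - b - c) * theta[::-1] + (c - d)
        theta = np.clip(theta + lr * grad, 0.0, 1.0)
    return theta

def utility(theta, R, i):
    """Expected payoff of agent i under mixed strategies theta on payoff tuple R."""
    a, b, c, d = R
    t1, t2 = theta
    if i == 0:
        return a*t1*t2 + c*t1*(1 - t2) + b*(1 - t1)*t2 + d*(1 - t1)*(1 - t2)
    return a*t1*t2 + b*t1*(1 - t2) + c*(1 - t1)*t2 + d*(1 - t1)*(1 - t2)

def reward_randomization(R0=(4.0, 3.0, -10.0, 1.0), N=20, seed=0):
    rng = np.random.default_rng(seed)
    population = []
    for _ in range(N):
        R_pert = tuple(rng.uniform(-1.0, 1.0, size=4))  # perturbed payoff tuple (Thm. 2)
        population.append(pg_matrix(R_pert, rng=rng))   # RR phase: train on the perturbed game
    # evaluation phase: score every trained profile on the ORIGINAL game R0
    return max(population, key=lambda th: utility(th, R0, 0) + utility(th, R0, 1))

print(reward_randomization())   # with high probability close to (1, 1), the Stag NE
```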
[1] In general matrix games beyond stag hunt, the learning procedure can be cyclic as well (Singh et al., 2000).
Algorithm 1: RPG: Reward-Randomized Policy Gradient
Input: original game M, search space R, evaluation function E, population size N
  draw samples {R^(1), ..., R^(N)} from R;
  {π1^(i), π2^(i)} ← PG on the induced games {M(R^(i))}_i in parallel;              // RR phase
  select the best candidate (π1^(k), π2^(k)) by k = argmax_i E(π1^(i), π2^(i));     // evaluation phase
  (π1*, π2*) ← fine-tune (π1^(k), π2^(k)) on M via PG (if necessary);               // fine-tuning phase
  return (π1*, π2*);
3 RPG: REWARD-RANDOMIZED POLICY GRADIENT
Herein, we extend Reward Randomization to general multi-agent Markov games. We now adopt RL terminology and consider the 2-player setting for simplicity. Extension to more agents is straightforward (Appx. B.3).
Consider a 2-agent Markov game M defined by (S, O, A, R, P), where S is the state space; O = {oi : s ∈ S, oi = O(s, i), i ∈ {1, 2}} is the observation space, where agent i receives its own observation oi = O(s; i) (in the fully observable setting, O(s, i) = s); A is the action space for each agent; R(s, a1, a2; i) is the reward function for agent i; and P(s′|s, a1, a2) is the transition probability from state s to state s′ when the agents take actions a1 and a2. Each agent has a policy πi(oi; θi), which produces a (stochastic) action and is parameterized by θi. In the decentralized RL framework, each agent i optimizes its expected accumulated reward Ui(θi) = E_{a1∼π1, a2∼π2}[Σ_t γ^t R(s_t, a1^t, a2^t; i)] with some discount factor γ.
Suppose we run decentralized RL on a particular Markov game M and the derived policy profile is (π1(θ1), π2(θ2)). The desired result is that the expected reward Ui(θi) for each agent i is maximized. We formally write this equilibrium evaluation objective as an evaluation function E(π1, π2), and the goal is therefore to find the optimal policy profile (π1*, π2*) w.r.t. E. Particularly for the games we consider in this paper, since every (approximate) equilibrium we ever discovered has a symmetric payoff, we focus on the empirical performance while assuming a much simplified equilibrium selection problem here: it is equivalent to define E(π1, π2) by E(π1, π2) = βU1(θ1) + (1 − β)U2(θ2) for any 0 ≤ β ≤ 1. Further discussions on the general equilibrium selection problem can be found in Sec. 6. The challenge is that although running decentralized PG is a popular learning approach for complex Markov games, the derived policy profile (π1, π2) is often sub-optimal, i.e., there exists (π1*, π2*) such that E(π1*, π2*) > E(π1, π2). It will be shown in Sec. 5 that even using state-of-the-art exploration techniques, the optimal policies (π1*, π2*) can hardly be reached.
Following the insights from Sec. 2, reward randomization can be applied to a Markov game M similarly: if the reward function in M poses difficulties for PG to discover some particular strategy, it might be easier to reach this desired strategy with a perturbed reward function. Hence, we can define a reward function space R, train a population of policy profiles in parallel with reward functions sampled from R, and select the desired strategy by evaluating the obtained policy profiles in the original game M. Formally, instead of purely learning in the original game M = (S, O, A, R, P), we define a proper subspace R over possible reward functions R : S × A × A → R and use M(R′) = (S, O, A, R′, P) to denote the induced Markov game obtained by replacing the original reward function R with another R′ ∈ R. To apply reward randomization, we draw N samples R^(1), ..., R^(N) from R, run PG to learn (π1^(i), π2^(i)) on each induced game M(R^(i)), and pick the desired policy profile (π1^(k), π2^(k)) by calculating E in the original game M. Lastly, we can fine-tune the policies π1^(k), π2^(k) in M to further boost the practical performance (see discussion below). We call this learning procedure Reward-Randomized Policy Gradient (RPG), which is summarized in Algo. 1.
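A minimal Python skeleton of this procedure (mirroring Algo. 1) is sketched below. The helpers `sample_reward_fn`, `train_pg`, and `evaluate`, as well as the `with_reward` method, are hypothetical placeholders for a reward sampler, a PPO-style trainer, and rollout-based evaluation; only the control flow is the point.

```python
import numpy as np

def rpg(base_game, sample_reward_fn, train_pg, evaluate,
        n_population=10, rr_steps=int(4e7), ft_steps=int(2e7), seed=0):
    """Sketch of Reward-Randomized Policy Gradient (Algo. 1).

    base_game        : the original Markov game M
    sample_reward_fn : rng -> a reward function R' drawn from the search space R
    train_pg         : (game, steps, init=None) -> policy profile (pi_1, pi_2)
    evaluate         : (policy_profile, game) -> scalar E(pi_1, pi_2)
    """
    rng = np.random.default_rng(seed)

    # RR phase: one policy profile per perturbed reward (trained in parallel in practice)
    population = [
        train_pg(base_game.with_reward(sample_reward_fn(rng)), steps=rr_steps)
        for _ in range(n_population)
    ]

    # Evaluation phase: score every profile on the ORIGINAL game M
    scores = [evaluate(profile, base_game) for profile in population]
    best = population[int(np.argmax(scores))]

    # Fine-tuning phase: continue PG on M from the best candidate (if necessary),
    # preceded by a critic warm-start when using an actor-critic method (see below)
    return train_pg(base_game, steps=ft_steps, init=best)
```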
Reward-function space: In general, the space of valid reward functions is intractably huge. However, in practice, almost all games designed by humans have low-dimensional reward structures based on objects or events, so we can (almost) always formulate the reward function in a linear form R(s, a1, a2; i) = φ(s, a1, a2; i)^T w, where φ(s, a1, a2; i) is a low-dimensional feature vector and w is a weight vector.
A simple and general design principle for R is to fix the feature vector φ while only randomizing the weight w, i.e., R = {Rw : Rw(s, a1, a2; i) = φ(s, a1, a2; i)^T w, ‖w‖∞ ≤ Cmax}. Hence, the overall search space retains a structure similar to the original game M but contains a diverse range of preferences over different feature dimensions. Notably, since the optimal strategy is invariant to the scale of the reward function R, theoretically any Cmax > 0 results in the same search space.
However, in practice, the scale of reward may significantly influence MARL training stability, so we typically ensure the chosen Cmax to be compatible with the PG algorithm in use.
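In code, this randomized reward family amounts to a thin wrapper around the feature function; a sketch (assuming the game already exposes φ(s, a1, a2; i), with all names ours) is:

```python
import numpy as np

def sample_weight(rng, dim, c_max):
    """Draw a weight w with ||w||_inf <= C_max; each w defines one perturbed game M(R_w)."""
    return rng.uniform(-c_max, c_max, size=dim)

def make_randomized_reward(phi, w):
    """R_w(s, a1, a2; i) = phi(s, a1, a2; i)^T w."""
    def reward(state, a1, a2, agent_id):
        return float(np.dot(phi(state, a1, a2, agent_id), w))
    return reward
```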
Note that a feature-based reward function is a standard assumption in the literature of inverse RL (Ng et al., 2000; Ziebart et al., 2008; Hadfield-Menell et al., 2017). In addition, such a reward structure is also common in many popular RL application domains. For example, in navigation games (Mirowski et al., 2016; Lowe et al., 2017; Wu et al., 2018), the reward is typically set to the negative distance from the target location LT to the agent’s location LA plus a success bonus, so the feature vector φ(s, a) can be written as a 2-dimensional vector [‖LT − LA‖2, I(LT = LA)]; in real-time strategy games (Wu & Tian, 2016; Vinyals et al., 2017; OpenAI et al., 2019), φ is typically related to the bonus points for destroying each type of units; in robotics manipulation (Levine et al., 2016; Li et al., 2020; Yu et al., 2019), φ is often about the distance between the robot/object and its target position; in general multi-agent games (Lowe et al., 2017; Leibo et al., 2017; Baker et al., 2020), φ could contain each agent’s individual reward as well as the joint reward over each team, which also enables the representation of different prosociality levels for the agents by varying the weight w.
Fine-tuning: There are two benefits: (1) the policies found in the perturbed game may not remain an equilibrium in the original game, so fine-tuning ensures convergence; (2) in practice, fine-tuning can further help escape a suboptimal mode via the noise in PG (Ge et al., 2015; Kleinberg et al., 2018). We remark that a practical issue for fine-tuning is that when the PG algorithm adopts the actor-critic framework (e.g., PPO), we need an additional critic warm-start phase, which only trains the value function while keeping the policy unchanged, before the fine-tuning phase starts. This warm-start phase significantly stabilizes policy learning by ensuring the value function is fully functional for variance reduction w.r.t. the reward function R in the original game M when estimating policy gradients.
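A sketch of this warm-start step, written against a hypothetical PPO-style trainer interface (not the actual implementation), is:

```python
def fine_tune_with_warm_start(best_policy, base_game, trainer, warmup_steps, ft_steps):
    """Critic warm-start before fine-tuning on the original game M.

    The policy selected in the RR phase is kept fixed while the value function is
    re-fit to the ORIGINAL reward of M, so that advantage estimates are
    well-calibrated when policy updates resume."""
    trainer.load(best_policy)
    trainer.set_trainable(policy=False, value=True)   # warm-start: value regression only
    trainer.train(base_game, steps=warmup_steps)
    trainer.set_trainable(policy=True, value=True)    # then ordinary PG fine-tuning
    return trainer.train(base_game, steps=ft_steps)
```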
3.1 LEARNING TO ADAPT WITH DIVERSE OPPONENTS
Algorithm 2: Learning to Adapt
Input: game M, policy set Π2, initial adaptive policy π1^a(θ^a)
  repeat
    draw a policy π2′ from Π2;
    evaluate π1^a and π2′ on M and collect data;
    update θ^a via PG once enough data is collected;
  until enough iterations;
  return π1^a(θ^a);
In addition to the final policies π1*, π2*, another benefit of RPG is that the population of N policy profiles contains diverse strategies (more in Sec. 5). With a diverse set of strategies, we can build an adaptive agent by training against a random opponent policy sampled from the set per episode, so that the agent is forced to behave differently based on its opponent’s behavior. For simplicity, we consider learning an adaptive policy π1^a(θ^a) for agent 1. The procedure remains the same for agent 2. Suppose a policy population P = {π2^(1), ..., π2^(N)} is obtained during the RR phase; we first construct a diverse strategy set Π2 ⊆ P that contains all the discovered behaviors from P. Then we construct a mixed strategy by randomly sampling a policy π2′ from Π2 in every training episode and run PG to learn π1^a by competing against this constructed mixed strategy. The procedure is summarized in Algo. 2. Note that setting Π2 = P appears to be a simple and natural choice. However, in practice, since P typically contains just a few strategic behaviors, it is unnecessary for Π2 to include every individual policy from P. Instead, it is sufficient to simply ensure Π2 contains at least one policy from each equilibrium in P (more details in Sec. 5.3). Additionally, this method does not apply to the one-shot game setting (i.e., horizon is 1), because the adaptive agent does not have any prior knowledge about its opponent’s identity before the game starts.
Implementation: We train an RNN policy for π1^a(θ^a). It is critical that the policy input does not directly reveal the opponent’s identity, so that the policy is forced to identify the opponent’s strategy from what it has observed. In contrast, when adopting an actor-critic PG framework (Lowe et al., 2017), it is extremely beneficial to include the identity information in the critic input, which makes critic learning substantially easier and significantly stabilizes training. We also utilize a multi-head architecture adapted from the multi-task learning literature (Yu et al., 2019), i.e., a separate value head for each training opponent, which empirically results in the best training performance.
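Putting Algo. 2 and these implementation notes together, the adaptation training loop could look roughly as follows; `rollout` and `pg_update` are hypothetical placeholders, and the point is the per-episode opponent sampling plus the critic-only identity information.

```python
import numpy as np

def train_adaptive_agent(game, opponent_set, policy, value_heads, rollout, pg_update,
                         n_iters=10000, batch_episodes=128, seed=0):
    """Sketch of Algo. 2.

    policy      : RNN policy whose observation does NOT contain the opponent identity
    value_heads : one value head per training opponent (multi-head critic); the
                  opponent index is exposed to the critic only."""
    rng = np.random.default_rng(seed)
    buffer = []
    for _ in range(n_iters):
        k = int(rng.integers(len(opponent_set)))        # sample one opponent per episode
        episode = rollout(game, policy, opponent_set[k])
        episode["opponent_id"] = k                      # used by value_heads[k], not by the actor
        buffer.append(episode)
        if len(buffer) >= batch_episodes:               # "enough data collected"
            pg_update(policy, value_heads, buffer)
            buffer = []
    return policy
```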
4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS
We introduce three 2-player Markov games as testbeds for RPG. All these games have a diverse range of NE strategies, including both “risky” cooperative NEs that have high payoffs but are hard to discover, and “safe” non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are given at a high level to highlight the game dynamics. More details are in Sec. 5 and App. B.
Gridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.
Monster-Hunt contains a monster and two apples. Apples are static while the monster keeps moving towards its closest agent. If a single agent meets the monster, it suffers a penalty of 2; if two agents catch the monster together, they both earn a bonus of 5. Eating an apple always yields a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity respawns randomly. The optimal payoff can only be achieved when both agents precisely catch the monster simultaneously.
Escalation contains a lit grid. When two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid will be lit up at the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the number of consecutive cooperation steps until that timestep, and the lit grid will respawn randomly. Agents need to stay together on the lit grid to achieve the maximum payoff despite the growing penalty. There are multiple NEs: for each L, both agents cooperating for L steps and then leaving the lit grid jointly forms an NE.
Agar.io is a popular multiplayer online game. Players control cells in a Petri dish to gain as much mass as possible by eating smaller cells while avoiding being eaten by larger ones. Larger cells move slower. Each player starts with one cell but can split a sufficiently large cell into two, allowing them to control multiple cells (Wikipedia, 2020). We consider a simplified scenario (Fig. 5) with 2 players (agents) and tiny script cells, which automatically run away when an agent comes by. There is a low-risk non-cooperative strategy, i.e., two agents stay away from each other and hunt script cells independently. Since the script cells move faster, it is challenging for a single agent to hunt them. By contrast, two agents can cooperate to encircle the script cells to accelerate hunting. However, cooperation is extremely risky for the agent with less mass: two agents need to stay close to cooperate, but the larger agent may defect by eating the smaller one and gaining an immediate big bonus.
5 EXPERIMENT RESULTS
In this section, we present empirical results showing that in all the introduced testbeds, including the real-world game Agar.io, RPG always discovers diverse strategic behaviors and achieves an equilibrium with substantially higher rewards than standard multi-agent PG methods. We use PPO (Schulman et al., 2017) for PG training. Training episodes for RPG are accumulated over all the perturbed games. Evaluation results are averaged over 100 episodes in gridworlds and 1000 episodes in Agar.io. We repeat all the experiments with 3 seeds and use X (Y ) to denote mean X with standard deviation Y in all tables. Since all our discovered (approximate) NEs are symmetric for both players, we simply take E(π1, π2) = U1(π1, π2) as our evaluation function and only measure the reward of agent 1 in all experiments for simplicity. More details can be found in appendix.
5.1 GRIDWORLD GAMES
Monster-Hunt: Each agent’s reward is determined by three features per timestep: (1) whether the two agents catch the monster together; (2) whether the agent steps on an apple; (3) whether the agent meets the monster alone. Hence, we write φ(s, a1, a2; i) as a 3-dimensional 0/1 vector with one dimension per feature. The original game corresponds to w = [5, 2, −2]. We set Cmax = 5 for sampling w. We compare RPG with a collection of baselines, including standard PG (PG), PG with shared reward (PG+SR), population-based training (PBT), which trains the same number of parallel PG policies as RPG, as
well as popular exploration methods, i.e., count-based exploration (PG+CNT) (Tang et al., 2017) and MAVEN (Mahajan et al., 2019). We also consider an additional baseline, DIAYN (Eysenbach et al., 2019), which discovers diverse skills using a trajectory-based diversity reward. For a fair comparison, we use DIAYN to first pretrain diverse policies (conceptually similar to the RR phase), then evaluate the rewards of every pair of obtained policies to select the best policy pair (i.e., evaluation phase, shown with the dashed line in Fig. 6), and finally fine-tune the selected policies until convergence (i.e., fine-tuning phase). The results of RPG and the 6 baselines are summarized in Fig. 6, where RPG consistently discovers a strategy with a significantly higher payoff. Note that the strategy with the optimal payoff may not always directly emerge in the RR phase, nor is there a particular value of w that is constantly the best candidate: e.g., in the RR phase, w = [5, 0, 2] frequently produces a sub-optimal cooperative strategy (Fig. 7(a)) with a reward lower than other w values, but it can also occasionally lead to the optimal strategy (Fig. 7(b)). With the fine-tuning phase, however, the overall RPG procedure always produces the optimal solution. We visualize both emergent cooperative strategies in Fig. 7: in the sub-optimal one (Fig. 7(a)), the two agents simply move to grid (1,1) together, stay still, and wait for the monster, while in the optimal one (Fig. 7(b)), the two agents meet each other first and then actively move towards the monster jointly, which further improves hunting efficiency.
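For reference, the randomized Monster-Hunt reward used in the RR phase can be written directly from the three event indicators; the event-detection helpers below are hypothetical and the names are ours.

```python
import numpy as np

C_MAX = 5.0
W_ORIGINAL = np.array([5.0, 2.0, -2.0])       # the original Monster-Hunt game

def monster_hunt_phi(events):
    """3-dimensional 0/1 feature vector per agent per timestep."""
    return np.array([
        float(events["caught_monster_together"]),
        float(events["stepped_on_apple"]),
        float(events["met_monster_alone"]),
    ])

def sample_monster_hunt_reward(rng):
    """One perturbed game R_w with ||w||_inf <= C_max."""
    w = rng.uniform(-C_MAX, C_MAX, size=3)
    return lambda events: float(monster_hunt_phi(events) @ w)
```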
Escalation: We can represent φ(s, a1, a2; i) as a 2-dimensional vector containing (1) whether the two agents are both on the lit grid and (2) the total number of consecutive cooperation steps. The original game corresponds to w = [1, −0.9]. We set Cmax = 5 and show the total number of cooperation steps per episode for several selected w values throughout training in Fig. 8, where RR is able to discover different NE strategies. Note that w = [1, 0] already produces the strategy with the optimal payoff in this game, so the fine-tuning phase is no longer needed.
5.2 2-PLAYER GAMES IN Agar.io
There are two different settings of Agar.io: (1) the standard setting, i.e., an agent gets a penalty of −x for losing a mass x, and (2) the more challenging aggressive setting, i.e., no penalty for mass loss. Note in both settings: (1) when an agent eats a mass x, it always gets a bonus of x; (2) if an agent loses all the mass, it immediately dies while the other agent can still play in the game. The aggressive setting promotes agent interactions and typically leads to more diverse strategies in practice. Since both settings strictly define the penalty function for mass loss, we do not randomize this reward term. Instead, we consider two other factors: (1) the bonus for eating the other agent; (2) the prosocial level of both agents. We use a 2-dimensional vector w = [w0, w1], where 0 ≤ w0, w1 ≤ 1, to denote a particular reward function such that (1) when eating a cell of mass x from the other agent, the bonus is w0 × x, and (2) the final reward is a linear interpolation between R(·; i) and 0.5(R(·; 0) +R(·; 1)) w.r.t. w1, i.e., when w1 = 0, each agent optimizes its individual reward while when w1 = 1, two agents have a shared reward. The original game in both Agar.io settings corresponds to w = [1, 0].
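Our reading of this parameterization as a post-processing of per-step reward terms is sketched below; the variable names are ours and the mass bookkeeping is schematic.

```python
def agario_rewards(gain_from_scripts, gain_from_partner, mass_loss_penalty, w):
    """Per-step rewards of both learning agents under w = [w0, w1].

    gain_from_scripts[i] : mass agent i ate from script cells this step (bonus of x for mass x)
    gain_from_partner[i] : mass agent i ate from the OTHER learning agent (scaled by w0)
    mass_loss_penalty[i] : penalty of agent i for lost mass (0 in the aggressive setting)
    """
    w0, w1 = w
    individual = [
        gain_from_scripts[i] + w0 * gain_from_partner[i] - mass_loss_penalty[i]
        for i in range(2)
    ]
    shared = 0.5 * (individual[0] + individual[1])
    # w1 = 0: fully selfish reward; w1 = 1: fully shared (prosocial) reward
    return [(1.0 - w1) * individual[i] + w1 * shared for i in range(2)]
```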
Standard setting: PG in the original game (w = [1, 0]) leads to a typical trust-dilemma dynamics: the two agents first learn to hunt and occasionally Cooperate (Fig. 9(a)), i.e., eat a script cell with the other agent close by; then accidentally one agent Attacks the other agent (Fig. 9(b)), which yields a big
          PBT       w=[0.5,1]  w=[0,1]   w=[0,0]   RPG       RND
Rew.      3.3(0.2)  4.8(0.6)   5.1(0.4)  6.0(0.5)  8.9(0.3)  3.2(0.2)
#Attack   0.4(0.0)  0.7(0.2)   0.3(0.1)  0.5(0.1)  0.9(0.1)  0.4(0.0)
#Coop.    0.0(0.0)  0.6(0.6)   2.3(0.3)  1.6(0.1)  2.0(0.2)  0.0(0.0)
#Hunt     0.7(0.1)  0.6(0.3)   0.3(0.0)  0.7(0.0)  0.9(0.1)  0.7(0.0)

Table 3: Results in the aggressive setting of Agar.io. PBT: population-based training of parallel PG policies; RR: w=[0, 0] is the best candidate via RR; RPG: fine-tuned policy; RND: PG with RND bonus.
immediate bonus and makes the policy aggressive; finally the policies converge to the non-cooperative equilibrium where both agents keep apart and hunt alone. The quantitative results are shown in Tab. 2. Baselines include population-based training (PBT) and a state-of-the-art exploration method for high-dimensional states, Random Network Distillation (RND) (Burda et al., 2019). RND and PBT occasionally learn cooperative strategies, while RR stably discovers a cooperative equilibrium with w = [1, 1], and the full RPG further improves the rewards. Interestingly, the best strategy obtained in the RR phase even has a higher Cooperate frequency than the full RPG: fine-tuning transforms the strongly cooperative strategy into a more efficient one, which strikes a better balance between Cooperate and selfish Hunt and produces a higher average reward.
Aggressive setting: Similarly, we apply RPG in the aggressive setting and show results in Tab. 3. Neither PBT nor RND was able to find any cooperative strategies in the aggressive game while RPG stably discovers a cooperative equilibrium with a significantly higher reward. We also observe a diverse set of complex strategies in addition to normal Cooperate and Attack. Fig. 10 visualizes the Sacrifice strategy derived with w = [1, 1]: the smaller agent rarely hunts script cells; instead, it waits in the corner for being eaten by the larger agent to contribute all its mass to its partner. Fig. 11 shows another surprisingly novel emergent strategy by w = [0.5, 1]: each agent first hunts individually to gain enough mass; then one agent splits into smaller cells while the other agent carefully eats a portion of the split agent; later on, when the agent who previously lost mass gains sufficient mass, the larger agent similarly splits itself to contribute to the other one, which completes the (ideally) never-ending loop of partial sacrifice. We name this strategy Perpetual for its conceptual similarity to the perpetual motion machine. Lastly, the best strategy is produced by w = [0, 0] with a balance between Cooperate and Perpetual: they cooperate to hunt script cells to gain mass efficiently and quickly perform mutual sacrifice as long as their mass is sufficiently large for split-and-eat. Hence, although the RPG policy has relatively lower Cooperate frequency than the policy by w = [0, 1], it yields a significantly higher reward thanks to a much higher Attack (i.e., Sacrifice) frequency.
5.3 LEARNING ADAPTIVE POLICIES
Monster-Hunt: We select policies trained by 8 different w values in the RR phase and use half of them for training the adaptive policy and the remaining half as hidden opponents for evaluation. We also make sure that both training and evaluation policies cover the following 4 strategy modes: (1) M(onster): the agent always moves towards the monster; (2) M(onster)-Alone: the agent moves towards the monster but also tries to keep apart from the other agent; (3) M(onster)-Coop.: the agent seeks to hunt the monster together with the other agent; (4) Apple: the agent only eats apples. The evaluation results are shown in Tab. 4, where the adaptive policy successfully exploits all the test-time opponents, including M(onster)-Alone, which was trained to actively avoid the other agent.
Agent                                  Adapt.     Coop.      Comp.
Opponent: Cooperative → Competitive
  #Attack                              0.2(0.0)   0.3(0.0)   0.1(0.1)
  Rew.                                 0.7(0.7)   -0.2(0.6)  0.8(0.5)
Opponent: Competitive → Cooperative
  #Coop.                               1.0(0.3)   1.4(0.4)   0.3(0.4)
  Rew.                                 2.5(0.7)   3.6(1.2)   1.1(0.7)

Table 5: Adaptation test in Agar.io. The opponent type is switched half-way through each episode. #Attack, #Coop.: episode statistics; Rew.: agent reward. The adaptive agent’s rewards are close to the oracles’.
Agar.io: We show that the trained agent can choose to cooperate or compete adaptively in the standard setting. We pick 2 cooperative policies (i.e., Cooperate preferred, w = [1, 0]) and 2 competitive policies (i.e., Attack preferred, w = [1, 1]) and use half of them for training and the other half for testing. For a hard challenge at test time, we switch the opponent within an episode, i.e., we use a cooperative opponent in the first half and then immediately switch to a competitive one, and vice versa. So, a desired policy should adapt quickly at halftime. Tab. 5 compares the second-half behavior of the adaptive agent with the oracle pure-competitive/cooperative agents. The rewards of the adaptive agent are close to the oracles’: even with half-way switches, the trained policy is
able to exploit the cooperative opponent while avoiding being exploited by the competitive one.
6 RELATED WORK AND DISCUSSIONS
Our core idea is reward perturbation. In game theory, this is aligned with the quantal response equilibrium (McKelvey & Palfrey, 1995), a smoothed version of NE obtained when payoffs are perturbed by a Gumbel noise. In RL, reward shaping is popular for learning desired behavior in various domains (Ng et al., 1999; Babes et al., 2008; Devlin & Kudenko, 2011), which inspires our idea for finding diverse strategic behavior. By contrast, state-space exploration methods (Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Sharma et al., 2020) only learn low-level primitives without strategy-level diversity (Baker et al., 2020).
RR trains a set of policies, which is aligned with population-based training in MARL (Jaderberg et al., 2017; 2019; Vinyals et al., 2019; Long et al., 2020; Forestier et al., 2017). RR is conceptually related to domain randomization (Tobin et al., 2017), with the difference that we train separate policies instead of a single universal one, which suffers from mode collapse (see Appendix D.2.3). RPG is also inspired by the MAP-Elites algorithm (Cully et al., 2015) from the evolutionary learning community, which optimizes multiple objectives simultaneously for sufficiently diverse policies. Our work is also related to Forestier et al. (2017), which learns a set of policies w.r.t. different fitness functions in the single-agent setting. However, they only consider a restricted fitness function class, i.e., the distance to each object in the environment, which can be viewed as a special case of our setting. Besides, RPG helps train adaptive policies against a set of opponents, which is related to Bayesian games (Dekel et al., 2004; Hartline et al., 2015). In RL, there are works on learning when to cooperate/compete (Littman, 2001; Peysakhovich & Lerer, 2018a; Kleiman-Weiner et al., 2016; Woodward et al., 2019; McKee et al., 2020), which is a special case of ours, or learning robust policies (Li et al., 2019; Shen & How, 2019; Hu et al., 2020), which complements our method.
Although we choose decentralized PG in this paper, RR can be combined with any other multi-agent learning algorithm for games, such as fictitious play (Robinson, 1951; Monderer & Shapley, 1996; Heinrich & Silver, 2016; Kamra et al., 2019; Han & Hu, 2019), double oracle (McMahan et al., 2003; Lanctot et al., 2017; Wang et al., 2019; Balduzzi et al., 2019) and regularized self-play (Foerster et al., 2018; Perolat et al., 2020; Bai & Jin, 2020). Many of these works have theoretical guarantees to find an (approximate) NE, but there is little work focusing on which NE strategy these algorithms converge to when multiple NEs exist, e.g., the stag-hunt game and its variants, for which many learning dynamics fail to converge to a prevalence of the pure strategy Stag (Kandori et al., 1993; Ellison, 1993; Fang et al., 2002; Skyrms & Pemantle, 2009; Golman & Page, 2010).
In this paper, we primarily focus on how reward randomization empirically helps MARL discover better strategies in practice and therefore only consider stag hunt as a particularly challenging example where an “optimal” NE with a high payoff for every agent exists. In general cases, we can select a desired strategy w.r.t. an evaluation function. This is related to the problem of equilibrium refinement (or equilibrium selection) (Selten, 1965; 1975; Myerson, 1978), which aims to find a subset of equilibria satisfying desirable properties, e.g., admissibility (Banks & Sobel, 1987), subgame perfection (Selten, 1965), Pareto efficiency (Bernheim et al., 1987) or robustness against opponent’s deviation from best response in security-related applications (Fang et al., 2013; An et al., 2011).
ACKNOWLEDGMENTS
This work is supported by National Key R&D Program of China (2018YFB0105000). Co-author Fang is supported, in part, by a research grant from Lockheed Martin. Co-author Wang is supported, in part, by gifts from Qualcomm and TuSimple. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. The authors would like to thank Zhuo Jiang and Jiayu Chen for their support and input during this project. Finally, we particularly thank Bowen Baker for initial discussions and suggesting the Stag Hunt game as our research testbed, which eventually leads to this paper.
A PROOFS
Proof of Theorem 1. We apply self-play policy gradient to optimize θ1 and θ2. Here we consider a projected version, i.e., if at some time t, θ1 or θ2 ∉ [0, 1], we project it back to [0, 1] to ensure it is a valid distribution.
We first compute the utilities given a pair (θ1, θ2):
U1(θ1, θ2) = aθ1θ2 + cθ1(1 − θ2) + b(1 − θ1)θ2 + d(1 − θ1)(1 − θ2),
U2(θ1, θ2) = aθ1θ2 + bθ1(1 − θ2) + c(1 − θ1)θ2 + d(1 − θ1)(1 − θ2).
We can compute the policy gradients
∇θ1 U1(θ1, θ2) = aθ2 + c(1 − θ2) − bθ2 − d(1 − θ2) = (a + d − b − c)θ2 + c − d,
∇θ2 U2(θ1, θ2) = aθ1 + c(1 − θ1) − bθ1 − d(1 − θ1) = (a + d − b − c)θ1 + c − d.
Recall that in order to find the optimal solution, both θ1 and θ2 need to increase. Also note that the initial θ1 and θ2 determine the final solution. In particular, only if θ1 and θ2 are increasing at the beginning will they converge to the desired solution.
To make either θ1 or θ2 increase, we need to have
(a+ d− b− c)θ1 + c− d > 0 or (a+ d− b− c)θ2 + c− d > 0 (1)
Consider the scenario a − b = ε(d − c). In order for Inequality (1) to hold, we need at least one of θ1, θ2 ≥ 1/(1 + ε).
If we initialize θ1 ∼ Unif[0, 1] and θ2 ∼ Unif[0, 1], the probability that either θ1 ≥ 1/(1 + ε) or θ2 ≥ 1/(1 + ε) is 1 − (1/(1 + ε))² = (2ε + ε²)/(1 + 2ε + ε²) = O(ε).
Proof of Theorem 2. Using a similar observation as in Theorem 1, we know a necessary condition to make PG converge to a sub-optimal NE is
(a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0.
Based on our generating scheme for a, b, c, d and the initialization scheme for θ1, θ2, we can verify that each of these two (symmetric) events occurs with probability at most 0.3. Therefore, via a union bound, we know
P ((a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0) ≤ 0.6. (2)
Since each round is independent, the probability that PG fails in all N runs is upper bounded by 0.6^N. Therefore, the success probability is lower bounded by 1 − 0.6^N = 1 − exp(−Ω(N)).
B ENVIRONMENT DETAILS
B.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, two agents play 10 rounds, that is, both PPO’s trajectory length and the episode length are 10. The action of each agent is a 1-dimensional vector, ai = {ti, i ∈ {0, 1}}, where ti = 0 denotes taking the Stag action and ti = 1 denotes taking the Hare action. The observation of each agent consists of the actions taken by itself and its opponent in the last round, i.e., o_i^r = {a_i^{r−1}, a_{1−i}^{r−1}; i ∈ {0, 1}}, where r denotes the playing round. Note that neither agent has taken any action before the first round, so the initial observation is oi = {−1, −1}.
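A minimal environment matching this description might look like the following gym-style sketch (written by us, not the released code):

```python
import numpy as np

class IterativeStagHunt:
    """10-round repeated stag hunt.  Action 0 = Stag, 1 = Hare."""

    def __init__(self, payoff=(4.0, 3.0, -50.0, 1.0), horizon=10):
        self.a, self.b, self.c, self.d = payoff      # original game: w = [4, 3, -50, 1]
        self.horizon = horizon

    def reset(self):
        self.t = 0
        # no actions have been taken yet, so both agents observe (-1, -1)
        return [np.array([-1, -1]), np.array([-1, -1])]

    def step(self, a0, a1):
        payoffs = {(0, 0): (self.a, self.a), (0, 1): (self.c, self.b),
                   (1, 0): (self.b, self.c), (1, 1): (self.d, self.d)}
        self.t += 1
        obs = [np.array([a0, a1]), np.array([a1, a0])]   # own last action first
        done = self.t >= self.horizon
        return obs, payoffs[(a0, a1)], done
```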
B.2 Monster-Hunt
In Monster-Hunt, two agents can move one step in any of the four cardinal directions (Up, Down, Left, Right) at each timestep. Let ai = {ti, i ∈ {0, 1}} denote the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The position of each agent cannot exceed the border of the 5-by-5 grid, where action execution is invalid. One monster and two apples respawn in different grids at initialization. If an agent eats an apple (moves over it in the grid world), it gains 2 points. If two agents try to eat the same apple, the points are randomly assigned to only one agent. Catching the monster alone causes an agent to lose 2 points, but if two agents catch the monster simultaneously, each agent gains 5 points. At each timestep, the monster and apples respawn randomly elsewhere in the grid world if they are wiped out. In addition, the monster chases the agent closest to it at each timestep. The monster may move over an apple during the chase; in this case, the agent gains the sum of points if it catches the monster and the apple at the same time. Each agent’s observation oi is a 10-dimensional vector formed by concatenating its own position pi, the other agent’s position p1−i, the monster’s position pmonster, and the sorted apples’ positions papple0, papple1, i.e., oi = {pi, p1−i, pmonster, papple0, papple1; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld.
B.3 Monster-Hunt WITH MORE THAN 2 AGENTS
Here we consider extending RPG to the general setting of N agents. In most multi-agent games, the reward function is fully symmetric for agents of the same type. Hence, as long as we can formulate the reward function in a linear form over a feature vector and a shared weight, i.e., R(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w, we can directly apply RPG without any modification by setting R = {Rw : Rw(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w}. Note that typically the dimension of the feature vector φ(·) remains fixed w.r.t. the number of agents (N). For example, in the Agar.io game, no matter how many players are in the game, the rule for obtaining reward bonuses and penalties remains the same.
Here, we experiment RPG in Monster-Hunt with 3 agents. The results are shown in Fig. 12. We consider baselines including the standard PG (PG) and population-based training (PBT). RPG reliably discovers a strong cooperation strategy with a substantially higher reward than the baselines.
B.4 Escalation
In Escalation, two agents appear randomly and one grid lights up at initialization. If two agents step on the lit grid simultaneously, each agent gains 1 point, the lit grid goes out, and an adjacent grid lights up. Both agents gain 1 point again if they step on the next lit grid together. But if one agent steps off the path, the other agent loses 0.9L points, where L is the current length of stepping together, and the game is over. Another option is that the two agents choose to step off the path simultaneously; in this case neither agent is punished and the game continues. As the length L of stepping together increases, the cost of betrayal increases linearly. ai = {ti, i ∈ {0, 1}} denotes the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The observation oi of agent i is composed of its own position pi, the other agent’s position p1−i, and the lit grid’s position plit, i.e., oi = {pi, p1−i, plit; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld. Moreover, we utilize a GRU to encode the length L implicitly, instead of observing it explicitly.
B.5 Agar.io
In the original online game Agar.io, multiple players are confined to a circular petri dish. Each player controls one or more balls using only a cursor and 2 keyboard keys, "space" and "w". All balls belonging to the player move toward where the cursor points. Balls larger than a threshold split into 2 smaller balls and rush ahead when the player presses "space". Balls larger than another threshold emit tiny motionless food-like balls when the player presses "w". Agar.io has many play modes, like the "Free-For-All" mode (all players fight on their own and can eat each other) and the "Team" mode (players are separated into two groups; they should cooperate with other players in the same group and eat players belonging to the other group).
We simplified the settings of the original game Agar.io: agents do not need to emit tiny motionless balls and all fight with each other (FFA mode). The action space of the game is target × {split, no_split}. target ∈ [0, 1]² is the target position that all balls belonging to the agent move toward. The binary action split or no_split indicates whether the player chooses to split, which causes all balls larger than a threshold to split into 2 smaller ones and rush ahead for a short while. These split balls re-merge after some time, and then the agent can split again. When one agent’s ball meets another agent’s ball and the former is at least 1.2 times larger than the latter, the latter is eaten and the former gets all its mass. The reward is defined as the increment of the balls’ mass. So every agent’s goal is to get larger by eating others while avoiding being eaten. But larger balls move slower, so it is really hard to catch smaller balls only by chasing after them. Splitting helps, but it requires high accuracy to rush in the proper direction. In our experiments, there were 7 agents interacting with each other. 2 agents were learned by our algorithm and would quit the game if all their balls were eaten. 5 agents were controlled by a script and would be reborn at a random place if all their balls were eaten. Learning-based agents were initialized larger than script-based agents, so it was basically one-way catching. In this setting, cooperation was the most efficient behavior for the learning-based agents to gain positive reward: they coordinated to surround script-based agents and caught them.
Observation space: We denote the partial observation of agent i as oi, which includes global information of the agent (denoted as oi,global) and descriptions of all balls around the agent, including balls owned by the agent (denoted as oi,balls, with oi,balls = {oi,ball,1, oi,ball,2, ..., oi,ball,m}, where oi,ball,j describes the j-th ball around the agent and there are m observed balls in all). oi,global = {li,obs, wi,obs, pi,center, vi, si,alive, ni,own, ni,script, ni,other, ai,last, ri,max, ri,min, mi}, where li,obs, wi,obs (both 1D, filled with a real number; from here on the form (1D, real) is used as the abbreviation) are the length and width of the agent’s observation scope, pi,center (2D, real) is its center position, vi (2D, real) is the speed of its center, si,alive (1D, binary) is whether the other learning-based agent is killed, ni,own, ni,script, ni,other (1D, real) are the numbers of each type of nearby balls (3 types: belonging to me, belonging to a script agent, or belonging to another learning-based agent), ai,last (3D, real) is the agent’s last action, and ri,max, ri,min (1D, real) are the maximal and minimal radii of all balls belonging to the agent. For any j = 1, 2, ..., m, oi,ball,j = {pi,j,relative, pi,j,absolute, vi,j, vi,j,rush, ri,j, log(ri,j), di,j, ei,j,max, ei,j,min, si,j,rem, ti,j}, where pi,j,relative, pi,j,absolute (2D, real) are the ball’s relative and absolute positions, vi,j is its speed, vi,j,rush is the ball’s additional rushing speed (when a ball splits into 2 smaller balls, these 2 balls get an additional speed called vi,j,rush; otherwise vi,j,rush = 0), ri,j (1D, real) is its radius, di,j is the distance between the ball and the center of the agent, ei,j,max, ei,j,min (1D, binary) are whether the ball can be eaten by the maximal or minimal balls of the observing agent, si,j,rem (1D, binary) is whether the ball is able to re-merge at present, and ti,j (3D, one-hot) is the type of the ball.
The script-based agent automatically chases after and splits towards smaller agents. When facing extreme danger (we define "extreme danger" as a larger learning-based agent being very close to it), it uses a 3-step depth-first search to plan the best escape route. More details of the script can be found in our code. We played against the script-based agent manually many times: we could never catch it when controlling only one ball and rarely caught it by splitting.
C TRAINING DETAILS
C.1 GRIDWORLD GAMES
In Monster-Hunt and Escalation, agents’ networks are organized in an actor-critic (policy-value) architecture. We consider N = 2 agents with a policy profile π = {π0, π1} parameterized by θ = {θ0, θ1}. The policy network πi takes the observation oi as input, followed by two hidden layers with 64 units, and then outputs the action ai. The value network takes as input the observations of both agents, o = {o0, o1}, and outputs the V-value of agent i; similarly, two hidden layers with 64 units are added before the output.
In Escalation, we also place an additional GRU module before the output in the policy network and the value network respectively, to infer the opponent’s intentions from historical information. Note that the 64-dimensional hidden state h of the GRU changes whenever the policy network is updated. In order to both keep forward information and use backward information to compute the generalized advantage estimate (GAE) with enough trajectories, we split buffer data into small chunks, e.g., 10 consecutive timesteps per chunk. The initial hidden state hinit, which is the first hidden state h0, is kept for each data chunk, but we do another forward pass to re-compute {h1, ..., hM−1}, where M represents the length of one data chunk, and we keep buffer reuse low, e.g., 4 in practice.
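Schematically, the chunking scheme above can be implemented as follows; `gru_step` is a hypothetical one-step recurrent forward pass of the current policy, and the data layout is ours.

```python
def split_into_chunks(traj_obs, traj_hidden, chunk_len=10):
    """Split one trajectory into consecutive chunks, keeping only the hidden
    state recorded at the start of each chunk (h_init = h_0 of the chunk)."""
    chunks = []
    for start in range(0, len(traj_obs), chunk_len):
        chunks.append({
            "obs": traj_obs[start:start + chunk_len],
            "h_init": traj_hidden[start],
        })
    return chunks

def recompute_hidden_states(policy, chunk):
    """Re-run the GRU from h_init so that h_1, ..., h_{M-1} are consistent with
    the CURRENT policy parameters before computing GAE on the chunk."""
    h = chunk["h_init"]
    hidden = [h]
    for obs in chunk["obs"][:-1]:
        h = policy.gru_step(obs, h)
        hidden.append(h)
    return hidden
```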
Agents in Monster-Hunt and Escalation are trained by PPO with independent parameters. The Adam optimizer is used to update network parameters and each experiment is executed 3 times with random seeds. More optimization hyper-parameter settings are in Tab. 6. In addition, Monster-Hunt also utilizes GRU modules to infer the opponent’s identity during adaptation training, and the number of parallel threads is set to 64.
Count-based exploration: We add the count-based exploration intrinsic reward rint to the environment reward during training. When the agent’s observation is o, rint = α/no, where α is a hyperparameter adjusted appropriately (0.3 in Monster-Hunt and 1 in Escalation) and no is the number of times the agent has encountered the observation o.
DIAYN: In Monster-Hunt, we use DIAYN to train 10 diverse policies in the first 140k episodes (DIAYN’s discriminator has 3 FC layers with 256, 128, and 10 units respectively) and choose the policy with the best performance under Monster-Hunt’s reward settings to fine-tune for the next 280k episodes. Note that DIAYN does not have a warm-start phase before fine-tuning in its original paper, so we did not add one either. In the first unsupervised learning phase, DIAYN does not optimize for any specific reward function; hence, we did not plot the reward curve for DIAYN in Fig. 7 for this phase. Instead, we simply put a dashed line showing the reward of the best selected pair of policies from DIAYN pretraining.
MAVEN: We use the open-sourced implementation of MAVEN from https://github.com/AnujMahajanOxf/MAVEN.
Population-based training: In each PBT trial, we directly train the same number of parallel PG policies as RPG with different random seeds for each problem, and choose the one with the best performance as the final policy. Note that the final training curve is averaged over 3 PBT trials.
C.2 Agar.io
In Agar.io, we used PPO as our algorithm and the agents’ networks were also organized in an actor-critic (policy-value) architecture with a GRU unit (i.e., PPO-GRU). We consider N = 2 agents with a policy profile π = {π0, π1} sharing parameters θ. The policy network πi takes the observation oi as input. At the beginning, as in Baker et al. (2019), oi,balls is separated into 3 groups according to the balls’ types: oi,ownballs, oi,scriptballs and oi,otherballs. 3 different multi-head attention models, each with 4 heads and 64 units for the transformation of keys, queries and values, are used to embed the information of the 3 types of balls respectively, taking the corresponding part of oi,balls as values and queries and oi,global as keys. Their outputs are then concatenated and transformed by an FC layer with 128 units before being sent to a GRU block with 128 units. After that, the hidden state is copied to 2 heads for the policy’s and value’s outputs. The policy head starts with 2 FC layers, both with 128 units, and ends with 2 heads that generate the discrete (split or no_split) and continuous (target) actions. The value head has 3 FC layers with 128, 128, and 1 unit respectively and outputs a real number.
PPO-GRU was trained with 128 parallel environment threads. Agar.io’s episode length was uniformly randomly sampled between 300 and 400 both when training and evaluating. Buffer data were split into small chunks with length = 32 in order to diversify training data and stabilize the training process, and the buffer was reused 4 times to increase data efficiency. Hidden states of each chunk, except at the beginning, were re-computed after each reuse to preserve PPO’s “on-policy” property as much as possible. Each action was repeated 5 times in the environment whenever the policy was executed, and only the observation after the last action repeat was sent to the policy. Each training process started with curriculum learning in the first 1.5e7 steps: the speed of script agents was multiplied by x, where x is uniformly random-sampled between max{0, (n − 1e7)/5e6} and min{1, max{0, (n − 5e6)/5e6}} at the beginning of each episode, with n the number of training steps. After the curriculum learning, the speed was fixed to the standard value. Each experiment was executed 3 times with different random seeds. The Adam optimizer was used to update network parameters. More optimization hyper-parameter settings are in Tab. 7.
D ADDITIONAL EXPERIMENT RESULTS
D.1 Monster-Hunt
In Monster-Hunt, we set Cmax = 5 for sampling w. Fig. 13 illustrates the policies discovered by several selected w values, where different strategic modalities can be clearly observed: e.g., with w = [0, 5, 0], agents always avoid the monster and only eat apples. In Fig. 14, it is worth noting that w = [5, 0, 2] can yield the best policy profile (i.e., two agents move together to hunt the monster) and does not even require further fine-tuning with some seeds. However, the performance of w = [5, 0, 2] is significantly unstable and it may converge to another NE (i.e., two agents move to a corner and wait for the monster) with other seeds. So w = [5, 0, 5], which yields stable strong cooperation strategies across different seeds, will be chosen in the RR phase when w = [5, 0, 2] performs poorly. We show the rewards obtained by different policies in Fig. 14, where the policies learned by RPG produce the highest rewards.
D.2 Agar.io
D.2.1 STANDARD SETTING
We sampled 4 different w values, which varied in their degrees of cooperation. We also ran experiments using only the baseline PG, or PG with the intrinsic reward generated by Random Network Distillation (RND), to compare with RPG. RR lasted for 40M steps, but only the best reward parameter in RR (w = [1, 1]) was warmed up for 3M steps and fine-tuned for another 17M steps. PG and RND were also trained for 60M steps in order to compare with RPG fairly. In Fig. 15, we can see that PG and RND produced very low rewards because they all converged to non-cooperative policies; w = [1, 1] produced the highest rewards after RR, and the rewards improved further after fine-tuning.
D.2.2 AGGRESSIVE SETTING
We sampled 5 different w values and their behaviors were much more varied. The other training settings were the same as in the standard setting. In Fig. 16, note that simply sharing the reward (w = [1, 1]) did not obtain a very high reward, because attacking each other also benefits both agents, so the two agents simply learned to sacrifice. Again, Fig. 16 illustrates that the rewards of RPG were far ahead of the other policies, while both PG and PG+RND failed to learn cooperative strategies.
We also listed all results of Standard and Aggressive setting in Tab. 8 for clearer comparison.
D.2.3 UNIVERSAL REWARD-CONDITIONED POLICY
We also tried to train a universal policy conditioned on w by randomly sampling different w at the beginning of each episode during training rather than fixing different w and training the policy later
on. But as Fig. 17 illustrates, the learning process was very unstable and the model performed almost the same under different w values, due to the intrinsic disadvantage of an on-policy algorithm dealing with multiple tasks: the learning algorithm may spend more effort on w values where higher rewards are easier to obtain while ignoring performance on other w values, which makes it very hard to obtain diverse behaviors.
D.3 LEARN ADAPTIVE POLICY
In this section, we add the opponent’s identity ψ to the input of the value network to stabilize the training process and boost the performance of the adaptive agent. ψ is a C-dimensional one-hot vector, where C denotes the number of opponents.
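Concretely, this amounts to appending the one-hot identity ψ to the critic input only (a small sketch with names of our choosing; the actor input stays identity-free):

```python
import numpy as np

def critic_input(joint_obs, opponent_id, num_opponents):
    """Value-network input: observation concatenated with the one-hot identity psi."""
    psi = np.zeros(num_opponents)
    psi[opponent_id] = 1.0
    return np.concatenate([joint_obs, psi])
```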
D.3.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, we randomize the payoff matrix, which is a 4-dimensional vector, and set Cmax = 4 for sampling w. The number of parallel threads is 512 and the episode length is 10. Other training hyper-parameter settings are the same as in Tab. 6. Fig. 18 shows that different w = [a, b, c, d] values (i.e., [4, 0, 0, 0], [0, 0, 0, 4], [0, 4, 4, 0], [4, 1, 4, 0]) yield different policy profiles; e.g., with w = [0, 0, 0, 4], both agents tend to eat the hare. The original game corresponds to w = [4, 3, −50, 1]. Tab. 9 reveals that w = [4, 0, 0, 0] yields the highest reward and reaches the optimal NE without further fine-tuning.
Utilizing the 4 different strategies obtained in the RR phase as opponents, we can train an adaptive policy which makes proper decisions according to the opponent’s identity. Fig. 19 shows the adaptation training curve; we can see that the policy yields adaptive actions stably after 5e4 episodes. At the evaluation stage, we introduce 4 hand-designed opponents to test the performance of the adaptive policy, including a Stag opponent (i.e., always hunt the stag), a Hare opponent (i.e., always eat the hare), a Tit-for-Tat (TFT) opponent (i.e., hunt the stag in the first step, and then take the action executed by the other agent in the last step), and a Random opponent (i.e., randomly choose to hunt the stag or eat the hare at each step). Tab. 10 illustrates that the adaptive policy exploits all hand-designed strategies, including the Tit-for-Tat opponent, which significantly differs from the trained opponents.
D.3.2 Monster-Hunt
We use the policy population Π2 trained by 4 w values (i.e., w = [5, 1, −5], w = [4, 2, −2], w = [0, 5, 0], w = [5, 0, 5]) in the RR phase as opponents for training the adaptive policy. In addition, we sample 4 other w values (i.e., w = [5, 0, 0], w = [−5, 5, −5], w = [−5, 0, 5], w = [5, −5, 5]) with Cmax = 5 to train new opponents for evaluation. Fig. 20 shows the adaptation training curve of the
monster-hunt game, where the adaptive policy could take actions stably according to the opponent’s identity.
D.3.3 Agar.io
In Agar.io, we used 2 types of policies from RR, w = [1, 0] (i.e., cooperative) and w = [0, 1] (i.e., competitive), as opponents, and trained an adaptive policy facing each opponent with probability 50% in the standard setting, while only its value head could observe the opponent’s type directly. We then expected the policy to cooperate or compete properly with the corresponding opponent. As Fig. 21 illustrates, the adaptive policy learns to cooperate with cooperative partners while avoiding being exploited by competitive partners, and it exploits both types of partners.
More details about the training and evaluation process: Oracle pure-cooperative policies are learned against a competitive policy for 4e7 steps, and so are oracle pure-competitive policies. The adaptive policy is trained for 6e7 steps. The length of each episode is 350 steps (half is 175 steps). When evaluating, the policy facing the opponent was the adaptive policy in the first 175 steps, whether we were testing adaptive or oracle policies. When we tested adaptive policies, the policy facing the opponent kept going for another 175 steps, while the opponent changed to the other type and its hidden state was reset to zero. When we tested oracle policies, the policy facing the opponent was switched to the corresponding oracle policy and the opponent also changed its type, with both hidden states reset.

1. What is the main contribution of the paper regarding strategic games?
2. What are the strengths and weaknesses of the proposed mechanism, reward randomization?
3. How does the reviewer assess the paper's relationship with the literature in the area?
4. What are the limitations of the paper's approach compared to traditional multiagent reinforcement learning?
5. Are there any concerns regarding the paper's focus on specific games and applications?

Review:
The paper focuses on an important problem, the existence of multiple Nash equilibria in strategic games, and proposes a relatively simple mechanism, reward randomization, which allows agents to discover higher-payoff equilibria in games where normal decentralized policy gradient methods would converge to suboptimal ones. What is proposed is to perturb actual rewards with random perturbations, which essentially, to my mind simply creates a range of games agents find themselves in so that the chances are increased that policy search will end up finding more efficient equilibria.
While the overall idea is interesting, the paper actually lacks a precise problem definition that would also link it to the (huge) literature in the area, which it does refer to, but actually not directly compare to, as the now common deep learning route to solving the fundamental (and notoriously hard) problem is applied, which operates in a completely different setting - training function approximators using an enormous number of games played offline. This cannot be compared to the problems considered in game theory and traditional multiagent reinforcement learning, where policies are learned online from a relatively small number of examples, and an algorithm needs to be able to perform well against very broad classes of opponents, without the benefit of training itself on modified problems pre-play.
That said, I appreciate there is value to the theoretical results that provide some more general evidence for the potential importance of the method.
The paper makes a lot of assumptions about structure in larger games in order to apply feature-based learning approaches, but one could have approached the whole problem with a simple game and then simply demonstrated how it scales up - it seems like there is not that much to say about the conceptual ideas when one reads all the details.
An excessive part of the paper is spent on explaining ideas in the stag hunt game and describing the other games, with a lot of further detail included in the lengthy supplementary material, but much of this seems to detail the very specific process of applying the technique and competing alternatives in a few specific games - I don't think much of this adds to our understanding of the problem.
Beyond these criticisms, the paper is generally clearly written, and appears technically sound. |
ICLR | Title
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Abstract
We propose a simple, general and effective technique, Reward Randomization, for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found on our website: https://sites.google.com/view/staghuntrpg.
N/A
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, RewardRandomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found in our website: https://sites.google. com/view/staghuntrpg.
1 INTRODUCTION
Games have been a long-standing benchmark for artificial intelligence, prompting persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon's initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002), to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also becomes critical for developing new learning algorithms for even harder challenges.
Most recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, the Nash Equilibrium (NE) (Nash, 1951), where no player can benefit from altering its strategy unilaterally, provides a general solution concept, serves as a goal for policy learning, and has attracted increasing interest from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works study how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting.
Despite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice, not only because different NEs can produce drastically different payoffs, but also because when facing unknown players who are trained to play an NE strategy, we can gain an advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and Starcraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.
Consider an extremely simple example, a 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure strategy NEs exist: a "risky" cooperative equilibrium with the highest payoff
for both agents and a "safe" non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails with high probability to discover the high-payoff "risky" NE. The intuition is that the neighborhood from which policies converge to the "risky" NE can be substantially smaller than the entire policy space. Therefore, an exponentially large number of exploration steps are needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR),
which can help PG discover the “risky” cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game
(Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG).
To further illustrate the effectiveness of RPG, we introduce three Markov games – two gridworld games and a real-world online game Agar.io. All these games have multiple NEs including both "risky" cooperation strategies and "safe" non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the "risky" cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This "diversity-seeking" property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, w.r.t. its test-time opponent's behavior.
We summarize our contributions as follows:
• We studied a collection of challenging multi-agent games, where the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.
• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.
• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.
• A new multi-agent environment Agar.io, which allows complex multi-agent strategic behavior. We released the environment to the community as a novel testbed for MARL research.
2 A MOTIVATING EXAMPLE: STAG HUNT
         Stag    Hare
Stag     a, a    c, b
Hare     b, c    d, d
Table 1: The stag-hunt game, a > b ≥ d > c.
We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau's work, "A discourse on inequality" (Rousseau, 1984): a group of hunters are silently tracking a big stag; when a hare shows up, each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1
with two actions for each agent, Stag (S) and Hare (H). There are two pure strategy NEs: the Stag NE, where both agents choose S and receive a high payoff a (e.g., a = 4), and the Hare NE, where both agents choose H and receive a lower payoff d (e.g., d = 1). The Stag NE is "risky" because if one agent defects, it still receives a decent reward b (e.g., b = 3) for eating the hare alone, while the other agent with an S action may suffer a big loss c for being hungry (e.g., c = −10). Formally, let A = {S,H} denote the action space, πi(θi) denote the policy for agent i (i ∈ {1, 2}) parameterized by θi, i.e., P [πi(θi) = S] = θi and P [πi(θi) = H] = 1 − θi, and R(a1, a2; i) denote the payoff for agent i when agent 1 takes action a1 and agent 2 takes action a2. Each agent i optimizes its expected utility Ui(π1, π2) = Ea1∼π1,a2∼π2 [R(a1, a2; i)]. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until
convergence (see footnote 1): (1) estimate the gradient ∇i = ∇θiUi(π1, π2) via self-play; (2) update the policies by θi ← θi + α∇i with learning rate α. Although PG is widely used in practice, the following theorem shows that, in certain scenarios, the probability that PG converges to the Stag NE is unfortunately low.
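To make these dynamics concrete, the following minimal NumPy sketch (our own illustration under the exact-gradient assumption, not the authors' released code) runs the projected self-play policy gradient above on the payoff matrix of Tab. 1; with a = 4, b = 3, d = 1 and a strongly negative c, only a small fraction of random initializations reach the Stag NE.

import numpy as np

def pg_self_play(a=4.0, b=3.0, c=-10.0, d=1.0, lr=0.05, iters=2000, rng=None):
    # theta_i = P[agent i plays Stag]; exact policy gradients of U_i from Sec. 2.
    rng = np.random.default_rng() if rng is None else rng
    th1, th2 = rng.uniform(0.0, 1.0, size=2)
    for _ in range(iters):
        g1 = (a + d - b - c) * th2 + c - d      # dU_1 / d theta_1
        g2 = (a + d - b - c) * th1 + c - d      # dU_2 / d theta_2
        th1 = np.clip(th1 + lr * g1, 0.0, 1.0)  # projected gradient ascent
        th2 = np.clip(th2 + lr * g2, 0.0, 1.0)
    return th1, th2

runs = [pg_self_play() for _ in range(500)]
stag_rate = np.mean([t1 > 0.9 and t2 > 0.9 for t1, t2 in runs])
print(f"fraction of runs reaching the Stag NE: {stag_rate:.3f}")   # roughly (1/12)^2 here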
Theorem 1. Suppose a − b = ε(d − c) for some 0 < ε < 1 and initialize θ1, θ2 ∼ Unif[0, 1]. Then the probability that PG discovers the high-payoff NE is upper bounded by (2ε + ε²)/(1 + 2ε + ε²).
Theorem 1 shows that when the risk is high (i.e., c is low), the probability of finding the Stag NE via PG is very low. Note that this theorem applies to random initialization, which is standard in RL.
Remark: One needs at least N = Ω(1/ε) restarts to ensure a constant success probability.
Fig. 2 shows empirical studies: we select 4 value assignments, i.e., c ∈ {−5,−20,−50,−100} with a = 4, b = 3, d = 1, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and, as c becomes smaller, the probability of finding the Stag NE significantly decreases. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm, whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to R(a1, a2; 1) + R(a1, a2; 2). Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.
2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME
Thm. 1 suggests that the utility function R highly influences what strategy PG might learn. Taking one step further, even if a strategy is difficult to learn with a particular R, it might be easier under some other utility function R′. Hence, if we can define an appropriate space R over different utility functions and draw samples from R, we may possibly discover desired novel strategies by running PG on some sampled utility function R′ and evaluating the obtained policy profile on the original game with R. We call this procedure Reward Randomization (RR).
Concretely, in the stag-hunt game, R is parameterized by 4 variables (aR, bR, cR, dR). We can define a distribution over R4, draw a tuple R′ = (aR′ , bR′ , cR′ , dR′) from this distribution, and run PG on R′. Denote the original stag-hunt game where the Stag NE is hard to discover as R0. Reward randomization draws N perturbed tuples R1, . . . , RN , runs PG on each Ri, and evaluates each of the obtained strategies on R0. The theorem below shows it is highly likely that the population of the N policy profiles obtained from the perturbed games contains the Stag NE strategy.
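The snippet below is a self-contained sketch of this procedure on the matrix game (again an illustration under the exact-gradient assumption rather than the official code): it perturbs the payoff tuple, runs the same projected policy-gradient dynamics on each perturbed game, and keeps the policy pair that scores best on the original payoffs R0.

import numpy as np

def pg(a, b, c, d, lr=0.05, iters=2000, rng=None):
    # exact projected self-play policy gradient on a 2x2 stag-hunt-style game
    rng = np.random.default_rng() if rng is None else rng
    th1, th2 = rng.uniform(0.0, 1.0, size=2)
    for _ in range(iters):
        g1 = (a + d - b - c) * th2 + c - d
        g2 = (a + d - b - c) * th1 + c - d
        th1 = np.clip(th1 + lr * g1, 0.0, 1.0)
        th2 = np.clip(th2 + lr * g2, 0.0, 1.0)
    return th1, th2

def utility(th1, th2, a, b, c, d):
    # expected payoff of agent 1 under the mixed strategies (th1, th2)
    return a*th1*th2 + c*th1*(1-th2) + b*(1-th1)*th2 + d*(1-th1)*(1-th2)

def reward_randomization(R0=(4.0, 3.0, -10.0, 1.0), N=20, seed=0):
    rng = np.random.default_rng(seed)
    profiles = []
    for _ in range(N):
        a, b, c, d = rng.uniform(-1.0, 1.0, size=4)   # perturbed payoff tuple R' (Thm. 2)
        profiles.append(pg(a, b, c, d, rng=rng))
    # evaluation phase: score every obtained policy pair on the ORIGINAL game R0
    return max(profiles, key=lambda p: utility(p[0], p[1], *R0))

print(reward_randomization())   # typically close to (1.0, 1.0), i.e. the Stag NE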
Theorem 2. For any Stag-Hunt game, suppose in the i-th run of RR we randomly generate aRi, bRi, cRi, dRi ∼ Unif[−1, 1] and initialize θ1, θ2 ∼ Unif[0, 1]; then with probability at least 1 − 0.6^N = 1 − exp(−Ω(N)), the aforementioned RR procedure discovers the high-payoff NE.
Here we use the uniform distribution as an example. Other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves standard PG w.r.t. success probability.
Remark 1: For the scenario studied in Thm. 1, to achieve a (1 − δ) success probability for some 0 < δ < 1, PG requires at least N = Ω((1/ε) log(1/δ)) random restarts. For the same scenario, RR only needs to be repeated at most N = O(log(1/δ)) times, which is independent of ε. When ε is small, this is a huge improvement.
Remark 2: Thm. 2 suggests that comparing with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can be hardly reached in the original game.
Note that although in Stag Hunt we particularly focus on the Stag NE that has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games using a payoff evaluation function E(π1, π2). For example, we can set E(π1, π2) = U1(π1, π2) + U2(π1, π2) for a prosocial NE, or look for Pareto-optimal NEs by setting E(π1, π2) = βU1(π1, π2) + (1 − β)U2(π1, π2) with 0 ≤ β ≤ 1.
1In general matrix games beyond stag hunt, the procedure can be cyclic as well (Singh et al., 2000).
Algorithm 1: RPG: Reward-Randomized Policy Gradient
Input: original game M, search space R, evaluation function E, population size N;
draw samples {R^(1), . . . , R^(N)} from R;
{π_1^(i), π_2^(i)} ← PG on the induced games {M(R^(i))}_i in parallel;             // RR phase
select the best candidate π_1^(k), π_2^(k) by k = argmax_i E(π_1^(i), π_2^(i));    // evaluation phase
π_1^*, π_2^* ← fine-tune π_1^(k), π_2^(k) on M via PG (if necessary);              // fine-tuning phase
return π_1^*, π_2^*;
3 RPG: REWARD-RANDOMIZED POLICY GRADIENT
Herein, we extend Reward Randomization to general multi-agent Markov games. We now utilize RL terminologies and consider the 2-player setting for simplicity. Extension to more agents is straightforward (Appx. B.3).
Consider a 2-agent Markov game M defined by (S, O, A, R, P), where S is the state space; O = {oi : s ∈ S, oi = O(s, i), i ∈ {1, 2}} is the observation space, where agent i receives its own observation oi = O(s, i) (in the fully observable setting, O(s, i) = s); A is the action space for each agent; R(s, a1, a2; i) is the reward function for agent i; and P(s′|s, a1, a2) is the transition probability from state s to state s′ when the agents take actions a1 and a2. Each agent has a policy πi(oi; θi) which produces a (stochastic) action and is parameterized by θi. In the decentralized RL framework, each agent i optimizes its expected accumulated reward Ui(θi) = E_{a1∼π1, a2∼π2}[Σ_t γ^t R(s_t, a_1^t, a_2^t; i)] with some discount factor γ.
Suppose we run decentralized RL on a particular Markov game M and the derived policy profile is (π1(θ1), π2(θ2)). The desired result is that the expected reward Ui(θi) for each agent i is maximized. We formally write this equilibrium evaluation objective as an evaluation function E(π1, π2), so the goal is to find the optimal policy profile (π1*, π2*) w.r.t. E. Particularly for the games considered in this paper, since every (approximate) equilibrium we ever discovered has a symmetric payoff, we focus on empirical performance and assume a much simplified equilibrium selection problem here: it is equivalent to define E(π1, π2) by E(π1, π2) = βU1(θ1) + (1 − β)U2(θ2) for any 0 ≤ β ≤ 1. Further discussion of the general equilibrium selection problem can be found in Sec. 6. The challenge is that although running decentralized PG is a popular learning approach for complex Markov games, the derived policy profile (π1, π2) is often sub-optimal, i.e., there exists (π1*, π2*) such that E(π1*, π2*) > E(π1, π2). It will be shown in Sec. 5 that even with state-of-the-art exploration techniques, the optimal policies (π1*, π2*) can hardly be achieved.
Following the insights from Sec. 2, reward randomization can be applied to a Markov game M similarly: if the reward function in M poses difficulties for PG to discover some particular strategy, it might be easier to reach this desired strategy with a perturbed reward function. Hence, we can define a reward function space R, train a population of policy profiles in parallel with sampled reward functions from R, and select the desired strategy by evaluating the obtained policy profiles in the original game M. Formally, instead of purely learning in the original game M = (S, O, A, R, P), we define a proper subspace R over possible reward functions R : S × A × A → R and use M(R′) = (S, O, A, R′, P) to denote the induced Markov game obtained by replacing the original reward function R with another R′ ∈ R. To apply reward randomization, we draw N samples R^(1), . . . , R^(N) from R, run PG to learn (π_1^(i), π_2^(i)) on each induced game M(R^(i)), and pick the desired policy profile (π_1^(k), π_2^(k)) by calculating E in the original game M. Lastly, we can fine-tune the policies π_1^(k), π_2^(k) in M to further boost practical performance (see discussion below). We call this learning procedure Reward-Randomized Policy Gradient (RPG), which is summarized in Algo. 1.
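Algorithm 1 amounts to a thin orchestration layer around any PG trainer. The sketch below is our own schematic rendering in Python; sample_reward, train_pg, evaluate, and fine_tune are assumed user-supplied callables (e.g. wrapping a PPO implementation) and are not functions from the paper's codebase.

from typing import Any, Callable

Policy = Any      # e.g. a pair of policy networks (pi_1, pi_2)
RewardFn = Any    # a reward function R' drawn from the search space

def rpg(original_game: Any,
        sample_reward: Callable[[], RewardFn],          # draws R' from the space R
        train_pg: Callable[[Any, RewardFn], Policy],    # PG on the induced game M(R')
        evaluate: Callable[[Any, Policy], float],       # E(pi_1, pi_2) on the original M
        fine_tune: Callable[[Any, Policy], Policy],     # PG fine-tuning on M (if necessary)
        population_size: int = 10) -> Policy:
    # RR phase: one policy profile per sampled reward function (trainable in parallel).
    population = [train_pg(original_game, sample_reward()) for _ in range(population_size)]
    # Evaluation phase: pick the profile that scores best on the original game.
    best = max(population, key=lambda p: evaluate(original_game, p))
    # Fine-tuning phase (preceded in practice by a critic warm-start; see the Fine tuning paragraph below).
    return fine_tune(original_game, best)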
Reward-function space: In general, the possible space of valid reward functions is intractably huge. However, in practice, almost all games designed by humans have low-dimensional reward structures based on objects or events, so that we can (almost) always formulate the reward function in a linear form R(s, a1, a2; i) = φ(s, a1, a2; i)^T w, where φ(s, a1, a2; i) is a low-dimensional feature vector and w is a weight vector.
A simple and general design principle for R is to fix the feature vector φ while only randomizing the weight w, i.e., R = {Rw : Rw(s, a1, a2; i) = φ(s, a1, a2; i)^T w, ‖w‖∞ ≤ Cmax}. Hence, the overall search space retains a structure similar to the original game M but contains a diverse range of preferences over different feature dimensions. Notably, since the optimal strategy is invariant to the scale of the reward function R, theoretically any Cmax > 0 results in the same search space.
However, in practice, the scale of the reward may significantly influence MARL training stability, so we typically ensure that the chosen Cmax is compatible with the PG algorithm in use.
Note that a feature-based reward function is a standard assumption in the literature of inverse RL (Ng et al., 2000; Ziebart et al., 2008; Hadfield-Menell et al., 2017). In addition, such a reward structure is also common in many popular RL application domains. For example, in navigation games (Mirowski et al., 2016; Lowe et al., 2017; Wu et al., 2018), the reward is typically set to the negative distance from the target location LT to the agent’s location LA plus a success bonus, so the feature vector φ(s, a) can be written as a 2-dimensional vector [‖LT − LA‖2, I(LT = LA)]; in real-time strategy games (Wu & Tian, 2016; Vinyals et al., 2017; OpenAI et al., 2019), φ is typically related to the bonus points for destroying each type of units; in robotics manipulation (Levine et al., 2016; Li et al., 2020; Yu et al., 2019), φ is often about the distance between the robot/object and its target position; in general multi-agent games (Lowe et al., 2017; Leibo et al., 2017; Baker et al., 2020), φ could contain each agent’s individual reward as well as the joint reward over each team, which also enables the representation of different prosociality levels for the agents by varying the weight w.
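As a concrete (and deliberately simplified) instance of this reward structure, the sketch below builds the 3-dimensional Monster-Hunt feature vector described in Sec. 5.1 and draws weight vectors uniformly from the L∞ ball of radius Cmax; the function and argument names are ours, chosen for readability.

import numpy as np

def monster_hunt_features(joint_catch: bool, ate_apple: bool, met_monster_alone: bool) -> np.ndarray:
    # phi(s, a_1, a_2; i): 0/1 event indicators for agent i at one timestep
    return np.array([joint_catch, ate_apple, met_monster_alone], dtype=float)

def sample_weight(dim: int = 3, c_max: float = 5.0, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-c_max, c_max, size=dim)       # ||w||_inf <= C_max

def reward(phi: np.ndarray, w: np.ndarray) -> float:
    return float(phi @ w)                             # R_w(s, a_1, a_2; i) = phi^T w

w_original = np.array([5.0, 2.0, -2.0])               # the unperturbed Monster-Hunt game
phi = monster_hunt_features(joint_catch=True, ate_apple=False, met_monster_alone=False)
print(reward(phi, w_original), reward(phi, sample_weight()))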
Fine tuning: There are two benefits: (1) the policies found in the perturbed game may not remain an equilibrium in the original game, so fine-tuning ensures convergence; (2) in practice, fine-tuning could further help escape a suboptimal mode via the noise in PG (Ge et al., 2015; Kleinberg et al., 2018). We remark that a practical issue for fine-tuning is that when the PG algorithm adopts the actor-critic framework (e.g., PPO), we need an additional critic warm-start phase, which only trains the value function while keeping the policy unchanged, before the fine-tuning phase starts. This warm-start phase significantly stabilizes policy learning by ensuring the value function is fully functional for variance reduction w.r.t. the reward function R in the original game M when estimating policy gradients.
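One way to realize this warm-start in an actor-critic codebase is sketched below with PyTorch, assuming separate policy and value modules and a pre-collected list of (observation, return) batches gathered under the original reward R; this is an illustration of the idea, not the authors' exact training loop.

import torch
import torch.nn.functional as F

def warm_start_critic(value_net: torch.nn.Module, batches, epochs: int = 5, lr: float = 3e-4):
    # Only the critic is updated; the (frozen) policy keeps collecting data in M.
    optimizer = torch.optim.Adam(value_net.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, returns in batches:    # batches: list of (obs tensor, return tensor) pairs
            loss = F.mse_loss(value_net(obs).squeeze(-1), returns)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return value_net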
3.1 LEARNING TO ADAPT WITH DIVERSE OPPONENTS
Algorithm 2: Learning to Adapt
Input: game M, policy set Π2, initial π_1^a;
repeat
    draw a policy π′_2 from Π2;
    evaluate π_1^a and π′_2 on M and collect data;
    update θ^a via PG if enough data collected;
until enough iterations;
return π_1^a(θ^a);
In addition to the final policies π_1^*, π_2^*, another benefit from RPG is that the population of N policy profiles contains diverse strategies (more in Sec. 5). With a diverse set of strategies, we can build an adaptive agent by training against a random opponent policy sampled from the set per episode, so that the agent is forced to behave differently based on its opponent's behavior. For simplicity, we consider learning an adaptive policy π_1^a(θ^a) for agent 1. The procedure remains the same for agent 2. Suppose a policy population P = {π_2^(1), . . . , π_2^(N)} is obtained during the RR phase; we first construct a diverse strategy set Π2 ⊆ P that contains all the discovered behaviors from P. Then we construct a mixed strategy by randomly sampling a policy π′_2 from Π2 in every training episode and run PG to learn π_1^a by competing against this constructed mixed strategy. The procedure is summarized in Algo. 2. Note that setting Π2 = P appears to be a simple and natural choice. However, in practice, since P typically contains just a few strategic behaviors, it is unnecessary for Π2 to include every individual policy from P. Instead, it is sufficient to simply ensure Π2 contains at least one policy from each equilibrium in P (more details in Sec. 5.3). Additionally, this method does not apply to the one-shot game setting (i.e., horizon is 1) because the adaptive agent does not have any prior knowledge about its opponent's identity before the game starts.
Implementation: We train an RNN policy for π_1^a(θ^a). It is critical that the policy input does not directly reveal the opponent's identity, so that the agent is forced to identify the opponent's strategy from what it has observed. By contrast, when adopting an actor-critic PG framework (Lowe et al., 2017), it is extremely beneficial to include the identity information in the critic input, which makes critic learning substantially easier and significantly stabilizes training. We also utilize a multi-head architecture adapted from the multi-task learning literature (Yu et al., 2019), i.e., we use a separate value head for each training opponent, which empirically results in the best training performance.
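A minimal PyTorch sketch of such a multi-head critic is shown below (our own illustration; the layer sizes are placeholders rather than the paper's exact architecture): the opponent's identity selects a value head, while the actor never receives it.

import torch
import torch.nn as nn

class MultiHeadCritic(nn.Module):
    # Shared trunk with one value head per training opponent; the head index is the
    # opponent identity, visible to the critic but never to the actor.
    def __init__(self, obs_dim: int, num_opponents: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_opponents)])

    def forward(self, obs: torch.Tensor, opponent_id: int) -> torch.Tensor:
        return self.heads[opponent_id](self.trunk(obs)).squeeze(-1)

critic = MultiHeadCritic(obs_dim=10, num_opponents=4)
values = critic(torch.zeros(32, 10), opponent_id=2)   # value estimates for a batch of 32 observations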
4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS
We introduce three 2-player Markov games as testbeds for RPG. All these games have a diverse range of NE strategies, including both "risky" cooperative NEs that yield high payoffs but are hard to discover and "safe" non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are kept at a high level to highlight the game dynamics. More details are in Sec. 5 and App. B.
Gridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.
Monster-Hunt contains a monster and two apples. Apples are static while the monster keeps moving towards its closest agent. If a single agent meets the monster, it receives a penalty of 2; if two agents catch the monster together, they both earn a bonus of 5. Eating an apple always yields a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity respawns randomly. The optimal payoff can only be achieved when both agents precisely catch the monster simultaneously.
Escalation contains a lit grid. When two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid will be lit up in the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the number of consecutive cooperation steps so far, and the lit grid respawns randomly. Agents need to stay together on the lit grid to achieve the maximum payoff despite the growing penalty. There are multiple NEs: for each L, the strategy in which both agents cooperate for L steps and then jointly leave the lit grid forms an NE.
Agar.io is a popular multiplayer online game. Players control cells in a Petri dish to gain as much mass as possible by eating smaller cells while avoiding being eaten by larger ones. Larger cells move slower. Each player starts with one cell but can split a sufficiently large cell into two, allowing them to control multiple cells (Wikipedia, 2020). We consider a simplified scenario (Fig. 5) with 2 players (agents) and tiny script cells, which automatically run away when an agent comes by. There is a low-risk non-cooperative strategy, i.e., two agents stay away from each other and hunt script cells independently. Since the script cells move faster, it is challenging for a single agent to hunt them. By contrast, two agents can cooperate to encircle the script cells to accelerate hunting. However, cooperation is extremely risky for the agent with less mass: two agents need to stay close to cooperate, but the larger agent may defect by eating the smaller one and gaining an immediate big bonus.
5 EXPERIMENT RESULTS
In this section, we present empirical results showing that in all the introduced testbeds, including the real-world game Agar.io, RPG always discovers diverse strategic behaviors and achieves an equilibrium with substantially higher rewards than standard multi-agent PG methods. We use PPO (Schulman et al., 2017) for PG training. Training episodes for RPG are accumulated over all the perturbed games. Evaluation results are averaged over 100 episodes in gridworlds and 1000 episodes in Agar.io. We repeat all the experiments with 3 seeds and use X (Y ) to denote mean X with standard deviation Y in all tables. Since all our discovered (approximate) NEs are symmetric for both players, we simply take E(π1, π2) = U1(π1, π2) as our evaluation function and only measure the reward of agent 1 in all experiments for simplicity. More details can be found in appendix.
5.1 GRIDWORLD GAMES
Monster-Hunt: Each agent's reward is determined by three features per timestep: (1) whether the two agents catch the monster together; (2) whether the agent steps on an apple; (3) whether the agent meets the monster alone. Hence, we write φ(s, a1, a2; i) as a 3-dimensional 0/1 vector with one dimension per feature. The original game corresponds to w = [5, 2,−2]. We set Cmax = 5 for sampling w. We compare RPG with a collection of baselines, including standard PG (PG), PG with shared reward (PG+SR), population-based training (PBT), which trains the same number of parallel PG policies as RPG, as well as popular exploration methods, i.e., count-based exploration (PG+CNT) (Tang et al., 2017) and MAVEN (Mahajan et al., 2019). We also consider an additional baseline, DIAYN (Eysenbach et al., 2019), which discovers diverse skills using a trajectory-based diversity reward. For a fair comparison, we use DIAYN to first pretrain diverse policies (conceptually similar to the RR phase), then evaluate the rewards for every pair of obtained policies to select the best policy pair (i.e., evaluation phase, shown with the dashed line in Fig. 6), and finally fine-tune the selected policies until convergence (i.e., fine-tuning phase). The results of RPG and the 6 baselines are summarized in Fig. 6, where RPG consistently discovers a strategy with a significantly higher payoff. Note that the strategy with the optimal payoff may not always directly emerge in the RR phase, nor is there a particular value of w that is consistently the best candidate: e.g., in the RR phase, w = [5, 0, 2] frequently produces a sub-optimal cooperative strategy (Fig. 7(a)) with a reward lower than other w values, but it can also occasionally lead to the optimal strategy (Fig. 7(b)). Whereas, with the fine-tuning phase, the overall procedure of RPG always produces the optimal solution. We visualize both emergent cooperative strategies in Fig. 7: in the sub-optimal one (Fig. 7(a)), the two agents simply move to grid (1,1) together, stay still and wait for the monster, while in the optimal one (Fig. 7(b)), the two agents meet each other first and then actively move towards the monster jointly, which further improves hunting efficiency.
Escalation: We can represent φ(s, a1, a2; i) as a 2-dimensional vector containing (1) whether the two agents are both on the lit grid and (2) the total number of consecutive cooperation steps. The original game corresponds to w = [1,−0.9]. We set Cmax = 5 and show the total number of cooperation steps per episode for several selected w values throughout training in Fig. 8, where RR is able to discover different NE strategies. Note that w = [1, 0] already produces the strategy with the optimal payoff in this game, so the fine-tuning phase is no longer needed.
5.2 2-PLAYER GAMES IN Agar.io
There are two different settings of Agar.io: (1) the standard setting, i.e., an agent gets a penalty of −x for losing a mass x, and (2) the more challenging aggressive setting, i.e., no penalty for mass loss. Note in both settings: (1) when an agent eats a mass x, it always gets a bonus of x; (2) if an agent loses all the mass, it immediately dies while the other agent can still play in the game. The aggressive setting promotes agent interactions and typically leads to more diverse strategies in practice. Since both settings strictly define the penalty function for mass loss, we do not randomize this reward term. Instead, we consider two other factors: (1) the bonus for eating the other agent; (2) the prosocial level of both agents. We use a 2-dimensional vector w = [w0, w1], where 0 ≤ w0, w1 ≤ 1, to denote a particular reward function such that (1) when eating a cell of mass x from the other agent, the bonus is w0 × x, and (2) the final reward is a linear interpolation between R(·; i) and 0.5(R(·; 0) +R(·; 1)) w.r.t. w1, i.e., when w1 = 0, each agent optimizes its individual reward while when w1 = 1, two agents have a shared reward. The original game in both Agar.io settings corresponds to w = [1, 0].
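The following small sketch makes this two-factor randomization explicit (an illustration under our own assumption that the per-step reward can be decomposed into a partner-eating bonus and the remaining mass change; the names are ours):

def agario_randomized_reward(r_raw, mass_eaten_from_partner, w):
    # r_raw[i]: agent i's per-step reward excluding the bonus for eating the other agent
    # mass_eaten_from_partner[i]: mass agent i ate from the other agent at this step
    # w = [w0, w1]: w0 scales the partner-eating bonus, w1 interpolates toward a shared reward
    w0, w1 = w
    individual = [r_raw[i] + w0 * mass_eaten_from_partner[i] for i in range(2)]
    shared = 0.5 * (individual[0] + individual[1])
    return [(1 - w1) * individual[i] + w1 * shared for i in range(2)]

print(agario_randomized_reward([1.0, 2.0], [0.0, 0.5], w=[1.0, 0.0]))   # the original game w = [1, 0]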
Standard setting: PG in the original game (w = [1, 0]) leads to a typical trust-dilemma dynamics: the two agents first learn to hunt and occasionally Cooperate (Fig. 9(a)), i.e., eat a script cell with the other agent close by; then accidentally one agent Attacks the other agent (Fig. 9(b)), which yields a big immediate bonus and makes the policy aggressive; finally the policies converge to the non-cooperative equilibrium where both agents keep apart and hunt alone. The quantitative results are shown in Tab. 2. Baselines include population-based training (PBT) and a state-of-the-art exploration method for high-dimensional states, Random Network Distillation (RND) (Burda et al., 2019). RND and PBT occasionally learn cooperative strategies, while RR stably discovers a cooperative equilibrium with w = [1, 1], and the full RPG further improves the rewards. Interestingly, the best strategy obtained in the RR phase even has a higher Cooperate frequency than the full RPG: fine-tuning transforms the strong cooperative strategy into a more efficient strategy, which strikes a better balance between Cooperate and selfish Hunt and produces a higher average reward.

Table 3: Results in the aggressive setting of Agar.io. PBT: population training of parallel PG policies; RR: w = [0, 0] is the best candidate via RR; RPG: fine-tuned policy; RND: PG with RND bonus.

          PBT        w=[0.5, 1]   w=[0, 1]    w=[0, 0]    RPG        RND
Rew.      3.3(0.2)   4.8(0.6)     5.1(0.4)    6.0(0.5)    8.9(0.3)   3.2(0.2)
#Attack   0.4(0.0)   0.7(0.2)     0.3(0.1)    0.5(0.1)    0.9(0.1)   0.4(0.0)
#Coop.    0.0(0.0)   0.6(0.6)     2.3(0.3)    1.6(0.1)    2.0(0.2)   0.0(0.0)
#Hunt     0.7(0.1)   0.6(0.3)     0.3(0.0)    0.7(0.0)    0.9(0.1)   0.7(0.0)
Aggressive setting: Similarly, we apply RPG in the aggressive setting and show results in Tab. 3. Neither PBT nor RND was able to find any cooperative strategies in the aggressive game while RPG stably discovers a cooperative equilibrium with a significantly higher reward. We also observe a diverse set of complex strategies in addition to normal Cooperate and Attack. Fig. 10 visualizes the Sacrifice strategy derived with w = [1, 1]: the smaller agent rarely hunts script cells; instead, it waits in the corner for being eaten by the larger agent to contribute all its mass to its partner. Fig. 11 shows another surprisingly novel emergent strategy by w = [0.5, 1]: each agent first hunts individually to gain enough mass; then one agent splits into smaller cells while the other agent carefully eats a portion of the split agent; later on, when the agent who previously lost mass gains sufficient mass, the larger agent similarly splits itself to contribute to the other one, which completes the (ideally) never-ending loop of partial sacrifice. We name this strategy Perpetual for its conceptual similarity to the perpetual motion machine. Lastly, the best strategy is produced by w = [0, 0] with a balance between Cooperate and Perpetual: they cooperate to hunt script cells to gain mass efficiently and quickly perform mutual sacrifice as long as their mass is sufficiently large for split-and-eat. Hence, although the RPG policy has relatively lower Cooperate frequency than the policy by w = [0, 1], it yields a significantly higher reward thanks to a much higher Attack (i.e., Sacrifice) frequency.
5.3 LEARNING ADAPTIVE POLICIES
Monster-Hunt: We select policies trained by 8 different w values in the RR phase and use half of them for training the adaptive policy and the remaining half as hidden opponents for evaluation. We also make sure that both training and evaluation policies cover the following 4 strategy modes: (1) M(onster): the agent always moves towards the monster; (2) M(onster)-Alone: the agent moves towards the monster but also tries to keep apart from the other agent; (3) M(onster)-Coop.: the agent seeks to hunt the monster together with the other agent; (4) Apple: the agent only eats apples. The evaluation results are shown in Tab. 4, where the adaptive policy successfully exploits all the test-time opponents, including M(onster)-Alone, which was trained to actively avoid the other agent.
Agent                                    Adapt.      Coop.       Comp.
Opponent: Cooperative −→ Competitive
  #Attack                                0.2(0.0)    0.3(0.0)    0.1(0.1)
  Rew.                                   0.7(0.7)    -0.2(0.6)   0.8(0.5)
Opponent: Competitive −→ Cooperative
  #Coop.                                 1.0(0.3)    1.4(0.4)    0.3(0.4)
  Rew.                                   2.5(0.7)    3.6(1.2)    1.1(0.7)

Table 5: Adaptation test in Agar.io. Opponent type is switched half-way per episode. #Attack, #Coop.: episode statistics; Rew.: agent reward. Adaptive agents' rewards are close to oracles.
Agar.io: We show that the trained agent can choose to cooperate or compete adaptively in the standard setting. We pick 2 cooperative policies (i.e., Cooperate preferred, w=[1, 0]) and 2 competitive policies (i.e., Attack preferred, w=[1, 1]) and use half of them for training and the other half for testing. For a hard challenge at test time, we switch the opponent within an episode, i.e., we use a cooperative opponent in the first half and then immediately switch to a competitive one, and vice versa. So, a desired policy should adapt quickly at halftime. Tab. 5 compares the second-half behavior of the adaptive agent with the oracle pure-competitive/cooperative agents. The rewards of the adaptive agent are close to the oracles: even with half-way switches, the trained policy is able to exploit the cooperative opponent while avoiding being exploited by the competitive one.
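A minimal sketch of this half-way-switch evaluation protocol is given below; the environment and agent interfaces (env.reset, env.step, and agent.act returning an action and an updated recurrent state) are assumptions made for illustration, not the actual API of our codebase.

def evaluate_halfway_switch(adaptive_agent, first_opponent, second_opponent, env, horizon=350):
    # The opponent type is swapped at the halfway point and its recurrent state is reset to zero.
    obs = env.reset()                       # obs[0]: adaptive agent's view, obs[1]: opponent's view
    agent_state, opp_state = None, None
    opponent, episode_stats = first_opponent, []
    for t in range(horizon):
        if t == horizon // 2:
            opponent, opp_state = second_opponent, None   # switch identity, clear hidden state
        a1, agent_state = adaptive_agent.act(obs[0], agent_state)
        a2, opp_state = opponent.act(obs[1], opp_state)
        obs, rewards, info = env.step([a1, a2])
        episode_stats.append((rewards, info))
    return episode_stats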
6 RELATED WORK AND DISCUSSIONS
Our core idea is reward perturbation. In game theory, this is aligned with the quantal response equilibrium (McKelvey & Palfrey, 1995), a smoothed version of NE obtained when payoffs are perturbed by a Gumbel noise. In RL, reward shaping is popular for learning desired behavior in various domains (Ng et al., 1999; Babes et al., 2008; Devlin & Kudenko, 2011), which inspires our idea for finding diverse strategic behavior. By contrast, state-space exploration methods (Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Sharma et al., 2020) only learn low-level primitives without strategy-level diversity (Baker et al., 2020).
RR trains a set of policies, which is aligned with population-based training in MARL (Jaderberg et al., 2017; 2019; Vinyals et al., 2019; Long et al., 2020; Forestier et al., 2017). RR is conceptually related to domain randomization (Tobin et al., 2017), with the difference that we train separate policies instead of a single universal one, which suffers from mode collapse (see appendix D.2.3). RPG is also inspired by the MAP-Elites algorithm (Cully et al., 2015) from the evolutionary learning community, which optimizes multiple objectives simultaneously for sufficiently diverse policies. Our work is also related to Forestier et al. (2017), which learns a set of policies w.r.t. different fitness functions in the single-agent setting. However, they only consider a restricted fitness function class, i.e., the distance to each object in the environment, which can be viewed as a special case of our setting. Besides, RPG helps train adaptive policies against a set of opponents, which is related to Bayesian games (Dekel et al., 2004; Hartline et al., 2015). In RL, there are works on learning when to cooperate/compete (Littman, 2001; Peysakhovich & Lerer, 2018a; Kleiman-Weiner et al., 2016; Woodward et al., 2019; McKee et al., 2020), which is a special case of ours, or learning robust policies (Li et al., 2019; Shen & How, 2019; Hu et al., 2020), which complements our method.
Although we choose decentralized PG in this paper, RR can be combined with any other multi-agent learning algorithm for games, such as fictitious play (Robinson, 1951; Monderer & Shapley, 1996; Heinrich & Silver, 2016; Kamra et al., 2019; Han & Hu, 2019), double-oracle (McMahan et al., 2003; Lanctot et al., 2017; Wang et al., 2019; Balduzzi et al., 2019) and regularized self-play (Foerster et al., 2018; Perolat et al., 2020; Bai & Jin, 2020). Many of these works have theoretical guarantees to find an (approximate) NE, but there is little work focusing on which NE strategy these algorithms converge to when multiple NEs exist, e.g., the stag-hunt game and its variants, for which many learning dynamics fail to converge to a prevalence of the pure strategy Stag (Kandori et al., 1993; Ellison, 1993; Fang et al., 2002; Skyrms & Pemantle, 2009; Golman & Page, 2010).
In this paper, we primarily focus on how reward randomization empirically helps MARL discover better strategies in practice and therefore only consider stag hunt as a particularly challenging example where an “optimal” NE with a high payoff for every agent exists. In general cases, we can select a desired strategy w.r.t. an evaluation function. This is related to the problem of equilibrium refinement (or equilibrium selection) (Selten, 1965; 1975; Myerson, 1978), which aims to find a subset of equilibria satisfying desirable properties, e.g., admissibility (Banks & Sobel, 1987), subgame perfection (Selten, 1965), Pareto efficiency (Bernheim et al., 1987) or robustness against opponent’s deviation from best response in security-related applications (Fang et al., 2013; An et al., 2011).
ACKNOWLEDGMENTS
This work is supported by National Key R&D Program of China (2018YFB0105000). Co-author Fang is supported, in part, by a research grant from Lockheed Martin. Co-author Wang is supported, in part, by gifts from Qualcomm and TuSimple. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. The authors would like to thank Zhuo Jiang and Jiayu Chen for their support and input during this project. Finally, we particularly thank Bowen Baker for initial discussions and suggesting the Stag Hunt game as our research testbed, which eventually leads to this paper.
A PROOFS
Proof of Theorem 1. We apply self-play policy gradient to optimize θ1 and θ2. Here we consider a projected version, i.e., if at some time t, θ1 or θ2 ∉ [0, 1], we project it to [0, 1] to ensure it is a valid distribution.
We first compute the utility given a pair (θ1, θ2)
U1(θ1, θ2) = aθ1θ2 + cθ1(1 − θ2) + b(1 − θ1)θ2 + d(1 − θ1)(1 − θ2),
U2(θ1, θ2) = aθ1θ2 + bθ1(1 − θ2) + c(1 − θ1)θ2 + d(1 − θ1)(1 − θ2).
We can compute the policy gradient
∇θ1U1(θ1, θ2) = aθ2 + c(1 − θ2) − bθ2 − d(1 − θ2) = (a + d − b − c)θ2 + c − d,
∇θ2U2(θ1, θ2) = aθ1 + c(1 − θ1) − bθ1 − d(1 − θ1) = (a + d − b − c)θ1 + c − d.
Recall that in order to find the optimal solution, both θ1 and θ2 need to increase. Also note that the initial θ1 and θ2 determine the final solution. In particular, only if θ1 and θ2 are increasing at the beginning will they converge to the desired solution.
To make either θ1 or θ2 increase, we need to have
(a+ d− b− c)θ1 + c− d > 0 or (a+ d− b− c)θ2 + c− d > 0 (1)
Consider the scenario a − b = ε(d − c). In order to make Inequality (1) hold, we need at least either θ1 ≥ 1/(1 + ε) or θ2 ≥ 1/(1 + ε).
If we initialize θ1 ∼ Unif[0, 1] and θ2 ∼ Unif[0, 1], the probability that either θ1 ≥ 1/(1 + ε) or θ2 ≥ 1/(1 + ε) is 1 − (1/(1 + ε))² = (2ε + ε²)/(1 + 2ε + ε²) = O(ε).
Proof of Theorem 2. Using a similar observation as in Theorem 1, we know a necessary condition to make PG converge to a sub-optimal NE is
(a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0.
Based on our generating scheme for a, b, c, d and the initialization scheme for θ1, θ2, we can verify, via a union bound, that
P ((a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0) ≤ 0.6. (2)
Since each round is independent, the probability that PG fails in all N rounds is upper bounded by 0.6^N. Therefore, the success probability is lower bounded by 1 − 0.6^N = 1 − exp(−Ω(N)).
B ENVIRONMENT DETAILS
B.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, two agents play 10 rounds, i.e., both PPO's trajectory length and the episode length are 10. The action of each agent is a 1-dimensional vector, ai = {ti, i ∈ {0, 1}}, where ti = 0 denotes taking the Stag action and ti = 1 denotes taking the Hare action. The observation of each agent consists of the actions taken by itself and its opponent in the last round, i.e., o_i^r = {a_i^{r−1}, a_{1−i}^{r−1}; i ∈ {0, 1}}, where r denotes the current round. Note that neither agent has taken an action in the first round, so the observation is oi = {−1,−1}.
B.2 Monster-Hunt
In Monster-Hunt, two agents can move one step in any of the four cardinal directions (Up, Down, Left, Right) at each timestep. Let ai = {ti, i ∈ {0, 1}} denote the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The position of each agent cannot exceed the border of the 5-by-5 grid; actions that would do so are invalid. One monster and two apples respawn in different grids at initialization. If an agent eats an apple (i.e., moves onto its grid), it gains 2 points. If two agents try to eat the same apple, the points are randomly assigned to only one of them. Catching the monster alone causes an agent to lose 2 points, but if two agents catch the monster simultaneously, each agent gains 5 points. At each timestep, the monster and apples respawn randomly elsewhere in the grid world once they are consumed. In addition, the monster chases the agent closest to it at each timestep. The monster may move over an apple during the chase; in this case, an agent that catches the monster there gains the sum of both points. Each agent's observation oi is a 10-dimensional vector formed by concatenating its own position pi, the other agent's position p1−i, the monster's position pmonster and the sorted apples' positions papple0, papple1, i.e., oi = {pi, p1−i, pmonster, papple0, papple1; i ∈ {0, 1}}, where p = (u, v) denotes 2-dimensional coordinates in the gridworld.
B.3 Monster-Hunt WITH MORE THAN 2 AGENTS
Here we consider extending RPG to the general setting of N agents. In most multi-agent games, the reward function is fully symmetric for agents of the same type. Hence, as long as we can formulate the reward function in a linear form over a feature vector and a shared weight, i.e., R(s, a1, . . . , aN ; i) = φ(s, a1, . . . , aN ; i)^T w, we can directly apply RPG without any modification by setting R = {Rw : Rw(s, a1, . . . , aN ; i) = φ(s, a1, . . . , aN ; i)^T w}. Note that the dimension of the feature vector φ(·) typically remains fixed w.r.t. the number of agents N. For example, in the Agar.io game, no matter how many players are in the game, the rules for reward bonuses and penalties remain the same.
Here, we experiment with RPG in Monster-Hunt with 3 agents. The results are shown in Fig. 12. We consider baselines including standard PG (PG) and population-based training (PBT). RPG reliably discovers a strong cooperation strategy with a substantially higher reward than the baselines.
B.4 Escalation
In Escalation, the two agents appear randomly and one grid lights up at initialization. If the two agents step on the lit grid simultaneously, each agent gains 1 point, and the lit grid goes out while an adjacent grid lights up. Both agents gain 1 point again if they step on the next lit grid together. But if one agent steps off the path, the other agent loses 0.9L points, where L is the current length of stepping together, and the game is over. Another option is for the two agents to step off the path simultaneously; in this case neither agent is punished and the game continues. As the length L of stepping together increases, the cost of betrayal increases linearly. ai = {ti, i ∈ {0, 1}} denotes the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The observation oi of agent i is composed of its own position pi, the other agent's position p1−i and the lit grid's position plit, i.e., oi = {pi, p1−i, plit; i ∈ {0, 1}}, where p = (u, v) denotes 2-dimensional coordinates in the gridworld. Moreover, we utilize a GRU to encode the length L implicitly instead of observing it explicitly.
B.5 Agar.io
In the original online game Agar.io, multiple players are confined to a circular Petri dish. Each player controls one or more balls using only a cursor and 2 keyboard keys, "space" and "w". All balls belonging to the player move toward where the cursor points. Balls larger than a threshold split into 2 smaller balls and rush ahead when the player presses "space". Balls larger than another threshold emit tiny motionless food-like balls when the player presses "w". Agar.io has many play modes, such as the "Free-For-All" mode (all players fight for themselves and can eat each other) and the "Team" mode (players are separated into two groups; they should cooperate with players in the same group and eat players from the other group).
We simplify the settings of the original Agar.io game: agents do not need to emit tiny motionless balls and all fight with each other (FFA mode). The action space of the game is target × {split, no_split}. target ∈ [0, 1]^2 is the target position that all balls belonging to the agent move to. The binary action split or no_split indicates whether the player chooses to split, which causes all balls larger than a threshold to split into 2 smaller ones and rush ahead for a short while. These split balls re-merge after some time, after which the agent can split again. When one agent's ball meets another agent's ball and the former is at least 1.2 times larger than the latter, the latter is eaten and the former gets all its mass. The reward is defined as the increment of the balls' mass. So every agent's goal is to get larger by eating others while avoiding being eaten. But larger balls move more slowly, so it is hard to catch smaller balls by chasing alone. Splitting helps, but it requires high accuracy to rush in the proper direction. In our experiments, there were 7 agents interacting with each other: 2 agents were learned by our algorithm and would quit the game if all their balls were eaten; 5 agents were controlled by a script and would respawn at a random place if all their balls were eaten. Learn-based agents were initialized larger than script-based agents, so it was basically one-way catching. In this setting, cooperation was the most efficient behavior for learn-based agents to gain positive reward: they coordinated to surround script-based agents and caught them.
Observation space: We denote partial observation of agent i as oi, which includes global information of the agent (denoted as oi,global) and descriptions of all balls around the agent (including balls owned by the agent, denoted as oi,balls. and oi,balls = {oi,ball,1, oi,ball,2, ..., oi,ball,m}, where oi,ball,j denotes the j-th ball around the agent and there are m observed balls in all). oi,global = {li,obs, wi,obs, pi,center, vi, si,alive, ni,own, ni,script, ni,other, ai,last, ri,max, ri,min,mi} where li,obs, wi,obs (they are both 1D filled with a real number, from here the form like (1D, real) will be used as the abbreviation) are the length and width of the agent’s observation scope, pi,center (2D, real) is its center position, vi (2D, real) is the speed of its center, si,alive(1D, binary) is whether the other learn-based agent is killed, ni,own, ni,script, ni,other(1D, real) are numbers of each type of balls nearby (3 types: belonging to me, or belonging to a script agent, or belonging to another learn-based agent), ai,last(3D, real) is the agent’s last action, ri,max, ri,min(1D, real) are maximal and minimal radius of all balls belonging to the agent. for any j = 1, 2, ...,m, oi,ball,j = {pi,j,relative, pi,j,absolute, vi,j , vi,j,rush, ri,j , log(ri,j), di,j , ei,j,max, ei,j,min, si,j,rem, ti,j}, where pi,j,relative, pi,j,absolute(2D, real) are the ball’s relative and absolute position, vi,j is its speed, vi,j,rush is the ball’s additional rushing speed(when a ball splits to 2 smaller balls, these 2 balls will get additional speed and it’s called vi,j,rush, otherwise vi,j,rush = 0), ri,j(1D, real) is its radius, di,j is the distance between the ball and the center of the agent, ei,j,max, ei,j,min(1D, binary) are whether the ball can be eaten by the maximal or minimal balls of the observing agent, si,j,rem(1D, binary) is whether the ball is able to remerge at present. ti,j(3D, one hot) is the type of the ball.
The script-based agent automatically chases after and splits towards smaller agents. When facing extreme danger (we define "extreme danger" as larger learn-based agents being very close to it), it uses a 3-step depth-first search to plan the best escape route. More details of the script can be found in our code. We played against the script-based agent ourselves many times: we could never hunt it with only one ball and rarely caught it by splitting.
C TRAINING DETAILS
C.1 GRIDWORLD GAMES
In Monster-Hunt and Escalation, agents' networks follow an actor-critic (policy-value) architecture. We consider N = 2 agents with a policy profile π = {π0, π1} parameterized by θ = {θ0, θ1}. The policy network πi takes observation oi as input, followed by two hidden layers with 64 units each, and outputs action ai. The value network takes the observations of both agents, o = {o0, o1}, as input and outputs the V-value of agent i; similarly, two hidden layers with 64 units are used before the output.
In Escalation, we also place an additional GRU module before the output in the policy network and the value network, respectively, to infer the opponent's intentions from historical information. Note that the 64-dimensional GRU hidden state h changes whenever the policy network is updated. In order to both keep forward information and use backward information to compute the generalized advantage estimate (GAE) with enough trajectories, we split buffer data into small chunks, e.g., 10 consecutive timesteps per chunk. The initial hidden state hinit, which is the first hidden state h0, is kept for each data chunk, but another forward pass is performed to re-compute {h1, ..., hM−1}, where M is the length of one data chunk; we also keep the buffer reuse low, e.g., 4 in practice.
Agents in Monster-Hunt and Escalation are trained by PPO with independent parameters. The Adam optimizer is used to update network parameters and each experiment is executed 3 times with different random seeds. More optimization hyper-parameter settings are in Tab. 6. In addition, Monster-Hunt also utilizes GRU modules to infer the opponent's identity during adaptation training, and the number of parallel threads is set to 64.
Count-based exploration: We simply add the count-based exploration intrinsic reward r_int to the environment reward during training. When the agent's observation is o, r_int = α/n_o, where α is a hyperparameter adjusted properly (0.3 in Monster-Hunt and 1 in Escalation) and n_o is the number of times the agent has encountered observation o.
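A minimal sketch of such a count-based bonus for small discrete observations is shown below (our own illustration; the class name is invented):

from collections import defaultdict

class CountBonus:
    # r_int = alpha / n_o, where n_o counts how often observation o has been seen so far.
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.counts = defaultdict(int)

    def __call__(self, obs) -> float:
        key = tuple(obs)                  # gridworld observations are small discrete vectors
        self.counts[key] += 1
        return self.alpha / self.counts[key]

bonus = CountBonus(alpha=0.3)
print(bonus([0, 0, 1, 1]), bonus([0, 0, 1, 1]))   # 0.3, then 0.15 on the second visit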
DIAYN: In Monster-Hunt, we use DIAYN to train 10 diverse policies in the first 140k episodes (DIAYN's discriminator has 3 FC layers with 256, 128, and 10 units, respectively) and choose the policy with the best performance under Monster-Hunt's reward setting to fine-tune in the next 280k episodes. Note that DIAYN does not have a warm-start phase before fine-tuning in its original paper, so we did not use one either. Note that in the first unsupervised learning phase, DIAYN does not optimize for any specific reward function. Hence, we did not plot the reward curve for DIAYN in Fig. 7 for this phase. Instead, we simply put a dashed line showing the reward of the best selected pair of policies from DIAYN pretraining.
MAVEN: We use the open-sourced implementation of MAVEN from https://github.com/AnujMahajanOxf/MAVEN.
Population-based training: In each PBT trial, we simply train the same number of parallel PG policies as RPG with different random seeds in each problem and choose the one with the best performance as the final policy. Note that the final training curve is averaged over 3 PBT trials.
C.2 Agar.io
In Agar.io, we used PPO as our algorithm and the agents' networks also follow an actor-critic (policy-value) architecture with a GRU unit (i.e., PPO-GRU). We consider N = 2 agents with a policy profile π = {π0, π1} sharing parameters θ. The policy network πi takes observation oi as input. At the beginning, as in (Baker et al., 2019), oi,balls is separated into 3 groups according to the balls' types: oi,ownballs, oi,scriptballs and oi,otherballs. 3 different multi-head attention models with 4 heads and 64 units for the transformation of keys, queries and values are used to embed the information of the 3 types of balls respectively, taking the corresponding part of oi,balls as values and queries and oi,global as keys. Their outputs are then concatenated and transformed by an FC layer with 128 units before being sent to a GRU block with 128 units. After that, the hidden state is copied to 2 heads for the policy and value outputs. The policy head starts with 2 FC layers, both with 128 units, and ends with 2 heads that generate the discrete (split or no_split) and continuous (target) actions. The value head has 3 FC layers with 128, 128, and 1 unit respectively and outputs a real number.
PPO-GRU was trained with 128 parallel environment threads. Agar.io's episode length was uniformly randomly sampled between 300 and 400 both during training and evaluation. Buffer data were split into small chunks of length 32 in order to diversify training data and stabilize the training process, and the buffer was reused 4 times to increase data efficiency. Hidden states of each chunk, except at the beginning, were re-computed after each reuse to preserve PPO's "on-policy" property as much as possible. Each action was repeated 5 times in the environment whenever the policy was executed, and only the observation after the last action repeat was sent to the policy. Each training process started with curriculum learning in the first 1.5e7 steps: the speed of the script agents was multiplied by x, where x is uniformly sampled between max{0, (n − 1e7)/5e6} and min{1, max{0, (n − 5e6)/5e6}} at the beginning of each episode, with n the number of training steps. After the curriculum learning, the speed was fixed to the standard value. Each experiment was run 3 times with different random seeds. The Adam optimizer was used to update network parameters. More optimization hyper-parameter settings are in Tab. 7.
D ADDITIONAL EXPERIMENT RESULTS
D.1 Monster-Hunt
In Monster-Hunt, we set Cmax = 5 for sampling w. Fig. 13 illustrates the policies discovered by several selected w values, where different strategic modalities can be clearly observed: e.g., with w = [0, 5, 0], agents always avoid monsters and only eat apples. In Fig. 14, it is worth noting that w = [5, 0, 2] can yield the best policy profile (i.e., two agents move together to hunt the monster) and with some seeds does not even require further fine-tuning. However, the performance of w = [5, 0, 2] is highly unstable, and with other seeds it may converge to another NE (i.e., two agents move to a corner and wait for the monster). Hence w = [5, 0, 5], which yields stable, strongly cooperative strategies across seeds, is chosen in the RR phase when w = [5, 0, 2] performs poorly. We report the rewards obtained by the different policies in Fig. 14, where the policies learned by RPG produce the highest rewards.
D.2 Agar.io
D.2.1 STANDARD SETTING
We sampled 4 different w values, which varied in their degree of cooperation. We also ran experiments using only the baseline PG, or PG with the intrinsic reward generated by Random Network Distillation (RND), to compare with RPG. RR lasted for 40M steps; then only the best reward parameter from RR (w = [1, 1]) was warmed up for 3M steps and fine-tuned for another 17M steps. PG and RND were also trained for 60M steps in order to compare with RPG fairly. In Fig. 15, we can see that PG and RND produce very low rewards because they all converge to non-cooperative policies; w = [1, 1] produces the highest rewards after RR, and the rewards increase further after fine-tuning.
D.2.2 AGGRESSIVE SETTING
We sampled 5 different w values and their behaviors were much more varied; the other training settings were the same as in the standard setting. In Fig. 16, note that simply sharing the reward (w = [1, 1]) does not yield a very high reward: since attacking each other also benefits each other, the 2 agents simply learn to sacrifice. Again, Fig. 16 illustrates that the rewards of RPG are far ahead of the other policies, while both PG and PG+RND fail to learn cooperative strategies.
We also list all results of the Standard and Aggressive settings in Tab. 8 for a clearer comparison.
D.2.3 UNIVERSAL REWARD-CONDITIONED POLICY
We also tried to train a universal policy conditioned on w by randomly sampling a different w at the beginning of each episode during training, rather than fixing different w values and training separate policies. However, as Fig. 17 illustrates, the learning process was very unstable and the model performed almost the same under different w, due to the intrinsic disadvantage of an on-policy algorithm dealing with multiple tasks: the learning algorithm may put more effort on the w values where higher rewards are easier to obtain while ignoring the performance on other w, which makes it very hard to obtain diverse behaviors.
D.3 LEARN ADAPTIVE POLICY
In this section, we add the opponent's identity ψ to the input of the value network to stabilize the training process and boost the performance of the adaptive agent. ψ is a C-dimensional one-hot vector, where C denotes the number of opponents.
D.3.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, we randomize the payoff matrix, which is a 4-dimensional vector, and set Cmax = 4 for sampling w. The number of parallel threads is 512 and the episode length is 10. Other training hyper-parameter settings are the same as in Tab. 6. Fig. 18 shows that different w = [a, b, c, d] values (i.e., [4, 0, 0, 0], [0, 0, 0, 4], [0, 4, 4, 0], [4, 1, 4, 0]) yield different policy profiles; e.g., with w = [0, 0, 0, 4], both agents tend to eat the hare. The original game corresponds to w = [4, 3,−50, 1]. Tab. 9 shows that w = [4, 0, 0, 0] yields the highest reward and reaches the optimal NE without further fine-tuning.
Utilizing the 4 different strategies obtained in the RR phase as opponents, we can train an adaptive policy which makes proper decisions according to the opponent's identity. Fig. 19 shows the adaptation training curve; we can see that the policy yields adaptive actions stably after 5e4 episodes. At the evaluation stage, we introduce 4 hand-designed opponents to test the performance of the adaptive policy, including a Stag opponent (i.e., always hunt the stag), a Hare opponent (i.e., always eat the hare), a Tit-for-Tat (TFT) opponent (i.e., hunt the stag at the first step, and then take the action executed by the other agent in the last step), and a Random opponent (i.e., randomly choose to hunt the stag or eat the hare at each step). Tab. 10 shows that the adaptive policy exploits all hand-designed strategies, including the Tit-for-Tat opponent, which differs significantly from the trained opponents.
D.3.2 Monster-Hunt
We use the policy population Π2 trained by 4 w values (i.e., w = [5, 1,−5], w = [4, 2,−2], w = [0, 5, 0], w = [5, 0, 5]) in the RR phase as opponents for training the adaptive policy. In addition, we sample another 4 w values (i.e., w = [5, 0, 0], w = [−5, 5,−5], w = [−5, 0, 5], w = [5,−5, 5]) with Cmax = 5 to train new opponents for evaluation. Fig. 20 shows the adaptation training curve of the Monster-Hunt game, where the adaptive policy takes actions stably according to the opponent's identity.
D.3.3 Agar.io
In Agar.io, we used 2 types of policies from RR, w = [1, 0] (i.e., cooperative) and w = [0, 1] (i.e., competitive), as opponents, and trained an adaptive policy that faces each opponent with probability 50% in the standard setting, while only its value head can observe the opponent's type directly. We then expect the policy to cooperate or compete properly with the corresponding opponent. As Fig. 21 illustrates, the adaptive policy learns to cooperate with cooperative partners while avoiding being exploited by competitive partners, and it exploits both types of partner.
More details about the training and evaluation process: oracle pure-cooperative policies are learned against a competitive policy for 4e7 steps, and so are the oracle pure-competitive policies. The adaptive policy is trained for 6e7 steps. The length of each episode is 350 steps (so the halfway point is at 175 steps). When evaluating, the policy facing the opponent is the adaptive policy in the first 175 steps, whether we are testing adaptive or oracle policies. When we test adaptive policies, the policy facing the opponent keeps playing for another 175 steps, while the opponent is switched to the other type and its hidden state is reset to zero. When we test oracle policies, the policy facing the opponent is switched to the corresponding oracle policy and the opponent also switches its type, with both hidden states reset. | 1. What is the main contribution of the paper in multi-agent games?
2. How effective is reward randomization in discovering efficient policies?
3. Are there any concerns regarding the sensitivity of the method?
4. Is there a discussion on why reward randomization works?
5. Can the idea of reward randomization be generalized to other games beyond the ones considered in the paper? | Review | Review
This work proposes and evaluates reward randomization as a strategy to discover efficient policies in multi-agent games. By perturbing the reward structure of a game, the authors show that policy gradient techniques can find better equilibria and lead to complex new behaviors.
Reward randomization is a fairly intuitive technique. In matrix form games, this just involves replacing the game's rewards with those drawn from some distribution. In more complex games, the authors propose distilling the game's reward structure into a small number of components and randomizing the weights placed on those components to compute the total payoff. The authors demonstrate that both theoretically and empirically, Reward-randomized Policy Gradient (RPG) outperforms standard baseline techniques. It also produces a diverse set of candidate policies, which the authors demonstrate can be used to train an adaptive agent.
The paper is clear and well-written. The figures are helpful and detailed. Overall, my impression is positive.
Some analysis of the sensitivity of this method would have helped. Much of the complexity is folded into choosing an appropriate reward distribution, and it would be good to know how dependent the results are on being able to choose a "good" distribution from which to sample.
I would have appreciated more of a discussion about why reward randomization works. Fundamentally, this appears to rely on the idea that while the action space may in general be very large, for most natural games, there are no reasonable reward schemes that incentivize the vast majority of these behaviors. As a result, it is feasible to explore the space of potentially optimal policies simply by exploring the space of reasonable rewards. Can this statement be formalized? And how well does it generalize to games other than the ones considered? |
ICLR | Title
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Abstract
We propose a simple, general and effective technique, Reward Randomization, for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. The source code and example videos can be found on our website: https://sites.google.com/view/staghuntrpg.
1 INTRODUCTION
Games have been a long-standing benchmark for artificial intelligence, which prompts persistent technical advances towards our ultimate goal of building intelligent agents like humans, from Shannon’s initial interest in Chess (Shannon, 1950) and IBM DeepBlue (Campbell et al., 2002), to the most recent deep reinforcement learning breakthroughs in Go (Silver et al., 2017), Dota II (OpenAI et al., 2019) and Starcraft (Vinyals et al., 2019). Hence, analyzing and understanding the challenges in various games also become critical for developing new learning algorithms for even harder challenges.
Most recent successes in games are based on decentralized multi-agent learning (Brown, 1951; Singh et al., 2000; Lowe et al., 2017; Silver et al., 2018), where agents compete against each other and optimize their own rewards to gradually improve their strategies. In this framework, Nash Equilibrium (NE) (Nash, 1951), where no player could benefit from altering its strategy unilaterally, provides a general solution concept and serves as a goal for policy learning and has attracted increasingly significant interests from AI researchers (Heinrich & Silver, 2016; Lanctot et al., 2017; Foerster et al., 2018; Kamra et al., 2019; Han & Hu, 2019; Bai & Jin, 2020; Perolat et al., 2020): many existing works studied how to design practical multi-agent reinforcement learning (MARL) algorithms that can provably converge to an NE in Markov games, particularly in the zero-sum setting.
Despite the empirical success of these algorithms, a fundamental question remains largely unstudied in the field: even if an MARL algorithm converges to an NE, which equilibrium will it converge to? The existence of multiple NEs is extremely common in many multi-agent games. Discovering as many NE strategies as possible is particularly important in practice, not only because different NEs can produce drastically different payoffs, but also because when facing unknown players who are trained to play an NE strategy, we can gain an advantage by identifying which NE strategy the opponent is playing and choosing the most appropriate response. Unfortunately, in many games where multiple distinct NEs exist, the popular decentralized policy gradient algorithm (PG), which has led to great successes in numerous games including Dota II and Starcraft, always converges to a particular NE with non-optimal payoffs and fails to explore more diverse modes in the strategy space.
Consider an extremely simple example, the 2-by-2 matrix game Stag-Hunt (Rousseau, 1984; Skyrms, 2004), where two pure strategy NEs exist: a “risky” cooperative equilibrium with the highest payoff
for both agents and a “safe” non-cooperative equilibrium with strictly lower payoffs. We show, from both theoretical and practical perspectives, that even in this simple matrix-form game, PG fails to discover the high-payoff “risky” NE with high probability. The intuition is that the neighborhood that makes policies converge to the “risky” NE can be substantially small comparing to the entire policy space. Therefore, an exponentially large number of exploration steps are needed to ensure PG discovers the desired mode. We propose a simple technique, Reward Randomization (RR),
which can help PG discover the “risky” cooperation strategy in the stag-hunt game with theoretical guarantees. The core idea of RR is to directly perturb the reward structure of the multi-agent game of interest, which is typically low-dimensional. RR directly alters the landscape of different strategy modes in the policy space and therefore makes it possible to easily discover novel behavior in the perturbed game
(Fig. 1). We call this new PG variant Reward-Randomized Policy Gradient (RPG).
To further illustrate the effectiveness of RPG, we introduce three Markov games – two gridworld games and a real-world online game Agar.io. All these games have multiple NEs including both “risky” cooperation strategies and “safe” non-cooperative strategies. We empirically show that even with state-of-the-art exploration techniques, PG fails to discover the “risky” cooperation strategies. In contrast, RPG discovers a surprisingly diverse set of human-interpretable strategies in all these games, including some non-trivial emergent behavior. Importantly, among this set are policies achieving much higher payoffs for each player compared to those found by PG. This “diversity-seeking” property of RPG also makes it feasible to build adaptive policies: by re-training an RL agent against the diverse opponents discovered by RPG, the agent is able to dynamically alter its strategy between different modes, e.g., either cooperate or compete, w.r.t. its test-time opponent’s behavior.
We summarize our contributions as follows:
• We studied a collection of challenging multi-agent games, where the popular multi-agent PG algorithm always converges to a sub-optimal equilibrium strategy with low payoffs.
• A novel reward-space exploration technique, reward randomization (RR), for discovering hard-to-find equilibria with high payoffs. Both theoretical and empirical results show that reward randomization substantially outperforms classical policy/action-space exploration techniques in challenging trust dilemmas.
• We empirically show that RR discovers surprisingly diverse strategic behaviors in complex Markov games, which further provides a practical solution for building an adaptive agent.
• A new multi-agent environment Agar.io, which allows complex multi-agent strategic behavior. We released the environment to the community as a novel testbed for MARL research.
2 A MOTIVATING EXAMPLE: STAG HUNT
          Stag    Hare
Stag      a, a    c, b
Hare      b, c    d, d
Table 1: The stag-hunt game, a > b ≥ d > c.
We start by analyzing a simple problem: finding the NE with the optimal payoffs in the Stag Hunt game. This game was originally introduced in Rousseau's work, “A discourse on inequality” (Rousseau, 1984): a group of hunters are tracking a big stag silently; now a hare shows up, and each hunter must decide whether to keep tracking the stag or kill the hare immediately. This leads to the 2-by-2 matrix-form stag-hunt game in Tab. 1
with two actions for each agent, Stag (S) and Hare (H). There are two pure strategy NEs: the Stag NE, where both agents choose S and receive a high payoff a (e.g., a = 4), and the Hare NE, where both agents choose H and receive a lower payoff d (e.g., d = 1). The Stag NE is “risky” because if one agent defects, it still receives a decent reward b (e.g., b = 3) for eating the hare alone, while the other agent with an S action may suffer a big loss c for being hungry (e.g., c = −10). Formally, let A = {S, H} denote the action space, πi(θi) denote the policy for agent i (i ∈ {1, 2}) parameterized by θi, i.e., P[πi(θi) = S] = θi and P[πi(θi) = H] = 1 − θi, and R(a1, a2; i) denote the payoff for agent i when agent 1 takes action a1 and agent 2 takes action a2. Each agent i optimizes its expected utility Ui(π1, π2) = E_{a1∼π1, a2∼π2}[R(a1, a2; i)]. Using the standard policy gradient algorithm, a typical learning procedure is to repeatedly take the following two steps until
convergence1: (1) estimate gradient ∇i = ∇Ui(π1, π2) via self-play; (2) update the policies by θi ← θi + α∇i with learning rate α. Although PG is widely used in practice, the following theorem shows in certain scenarios, unfortunately, the probability that PG converges to the Stag NE is low.
Theorem 1. Suppose a − b = ε(d − c) for some 0 < ε < 1 and initialize θ1, θ2 ∼ Unif[0, 1]. Then the probability that PG discovers the high-payoff NE is upper bounded by (2ε + ε²)/(1 + 2ε + ε²).
Theorem 1 shows that when the risk is high (i.e., c is low), the probability of finding the Stag NE via PG is very low. Note that this theorem applies to random initialization, which is standard in RL.
Remark: One needs at least N = Ω(1/ε) restarts to ensure a constant success probability.
Fig. 2 shows empirical studies: we select 4 value assignments, i.e., c ∈ {−5, −20, −50, −100} and a = 4, b = 3, d = 1, and run a state-of-the-art PG method, proximal policy optimization (PPO) (Schulman et al., 2017), on these games. The Stag NE is rarely reached, and, as c becomes smaller, the probability of finding the Stag NE significantly decreases. Peysakhovich & Lerer (2018b) provided a theorem of similar flavor without analyzing the dynamics of the learning algorithm whereas we explicitly characterize the behavior of PG. They studied a prosocial reward-sharing scheme, which transforms the reward of both agents to R(a1, a2; 1) + R(a1, a2; 2). Reward sharing can be viewed as a special case of our method and, as shown in Sec. 5, it is insufficient for solving complex temporal games.
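The PG dynamics analysed above are easy to reproduce numerically. Below is a minimal Python sketch (not the paper's code) that follows the exact gradient updates from the proof of Theorem 1, with a = 4, b = 3, d = 1 and the same values of c as in Fig. 2; the learning rate, step count and sample size are illustrative choices.

import numpy as np

def run_pg(a, b, c, d, lr=0.01, steps=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    t1, t2 = rng.uniform(0, 1, size=2)          # random initial Stag probabilities
    for _ in range(steps):
        g1 = (a + d - b - c) * t2 + c - d        # dU1/dtheta1
        g2 = (a + d - b - c) * t1 + c - d        # dU2/dtheta2
        t1 = np.clip(t1 + lr * g1, 0.0, 1.0)     # projected gradient ascent
        t2 = np.clip(t2 + lr * g2, 0.0, 1.0)
    return bool(t1 > 0.5 and t2 > 0.5)           # converged to the Stag NE?

rng = np.random.default_rng(0)
for c in [-5, -20, -50, -100]:
    hits = sum(run_pg(4, 3, c, 1, rng=rng) for _ in range(500))
    print("c=%d: empirical P(Stag NE) ~ %.3f" % (c, hits / 500))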
2.1 REWARD RANDOMIZATION IN THE MATRIX-FORM STAG-HUNT GAME
Thm. 1 suggests that the utility function R highly influences which strategy PG might learn. Taking one step further, even if a strategy is difficult to learn with a particular R, it might be easier with some other function R′. Hence, if we can define an appropriate space R over different utility functions and draw samples from R, we may discover desired novel strategies by running PG on some sampled utility function R′ and evaluating the obtained policy profile on the original game with R. We call this procedure Reward Randomization (RR).
Concretely, in the stag-hunt game, R is parameterized by 4 variables (aR, bR, cR, dR). We can define a distribution over R4, draw a tuple R′ = (aR′ , bR′ , cR′ , dR′) from this distribution, and run PG on R′. Denote the original stag-hunt game where the Stag NE is hard to discover as R0. Reward randomization draws N perturbed tuples R1, . . . , RN , runs PG on each Ri, and evaluates each of the obtained strategies on R0. The theorem below shows it is highly likely that the population of the N policy profiles obtained from the perturbed games contains the Stag NE strategy.
Theorem 2. For any Stag-Hunt game, suppose in the i-th run of RR we randomly generate aRi , bRi , cRi , dRi ∼ Unif [−1, 1] and initialize θ1, θ2 ∼ Unif [0, 1], then with probability at least 1− 0.6N = 1− exp (−Ω (N)), the aforementioned RR procedure discovers the high-payoff NE.
Here we use the uniform distribution as an example. Other distributions may also help in practice. Comparing Thm. 2 and Thm. 1, RR significantly improves standard PG w.r.t. success probability.
Remark 1: For the scenario studied in Thm. 1, to achieve a (1 − δ) success probability for some 0 < δ < 1, PG requires at least N = Ω((1/ε) log(1/δ)) random restarts. For the same scenario, RR only requires at most N = O(log(1/δ)) repetitions, which is independent of ε. When ε is small, this is a huge improvement.
Remark 2: Thm. 2 suggests that comparing with policy randomization, perturbing the payoff matrix makes it substantially easier to discover a strategy that can be hardly reached in the original game.
Note that although in Stag Hunt we particularly focus on the Stag NE, which has the highest payoff for both agents, in general RR can also be applied to NE selection in other matrix-form games using a payoff evaluation function E(π1, π2). For example, we can set E(π1, π2) = U1(π1, π2) + U2(π1, π2) for a prosocial NE, or look for Pareto-optimal NEs by setting E(π1, π2) = βU1(π1, π2) + (1 − β)U2(π1, π2) with 0 ≤ β ≤ 1.
1In general matrix games beyond stag hunt, the procedure can be cyclic as well (Singh et al., 2000).
Algorithm 1: RPG: Reward-Randomized Policy Gradient
Input: original game M, search space R, evaluation function E, population size N;
draw samples {R(1), . . . , R(N)} from R;
{π1(i), π2(i)} ← PG on the induced games {M(R(i))}i in parallel ;    // RR phase
select the best candidate π1(k), π2(k) by k = arg maxi E(π1(i), π2(i)) ;    // evaluation phase
π1*, π2* ← fine-tune π1(k), π2(k) on M via PG (if necessary) ;    // fine-tuning phase
return π1*, π2*;
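A minimal Python sketch of this procedure is given below. It is only an outline of Algorithm 1: sample_w, make_game_with_reward, train_pg and evaluate are placeholder names for the reward sampling, game construction, PPO training and rollout evaluation steps, and the fine-tuning interface is an assumption.

def rpg(original_game, make_game_with_reward, sample_w, train_pg, evaluate, n_samples):
    # RR phase: train one policy pair per sampled reward function (in parallel in practice)
    candidates = []
    for _ in range(n_samples):
        w = sample_w()                                    # draw reward weights from R
        pi_1, pi_2 = train_pg(make_game_with_reward(w))   # PG on the induced game M(R_w)
        candidates.append((pi_1, pi_2))

    # Evaluation phase: score every candidate pair in the ORIGINAL game M
    scores = [evaluate(original_game, p1, p2) for p1, p2 in candidates]
    best = candidates[max(range(n_samples), key=lambda i: scores[i])]

    # Fine-tuning phase (if necessary): continue PG on M starting from the best candidate
    pi_1, pi_2 = train_pg(original_game, init=best)
    return pi_1, pi_2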
3 RPG: REWARD-RANDOMIZED POLICY GRADIENT
Herein, we extend Reward Randomization to general multi-agent Markov games. We now utilize RL terminologies and consider the 2-player setting for simplicity. Extension to more agents is straightforward (Appx. B.3).
Consider a 2-agent Markov game M defined by (S, O, A, R, P), where S is the state space; O = {oi : s ∈ S, oi = O(s, i), i ∈ {1, 2}} is the observation space, where agent i receives its own observation oi = O(s; i) (in the fully observable setting, O(s, i) = s); A is the action space for each agent; R(s, a1, a2; i) is the reward function for agent i; and P(s′|s, a1, a2) is the transition probability from state s to state s′ when agent i takes action ai. Each agent has a policy πi(oi; θi) which produces a (stochastic) action and is parameterized by θi. In the decentralized RL framework, each agent i optimizes its expected accumulative reward Ui(θi) = E_{a1∼π1, a2∼π2}[Σ_t γ^t R(st, a1^t, a2^t; i)] with some discount factor γ.
Suppose we run decentralized RL on a particular Markov game M and the derived policy profile is (π1(θ1), π2(θ2)). The desired result is that the expected reward Ui(θi) for each agent i is maximized. We formally write this equilibrium evaluation objective as an evaluation function E(π1, π2), so the goal is to find the optimal policy profile (π1*, π2*) w.r.t. E. Particularly for the games considered in this paper, since every (approximate) equilibrium we ever discovered has a symmetric payoff, we focus on the empirical performance and assume a much simplified equilibrium selection problem here: it is equivalent to define E(π1, π2) by E(π1, π2) = βU1(θ1) + (1 − β)U2(θ2) for any 0 ≤ β ≤ 1. Further discussions on the general equilibrium selection problem can be found in Sec. 6. The challenge is that although running decentralized PG is a popular learning approach for complex Markov games, the derived policy profile (π1, π2) is often sub-optimal, i.e., there exists (π1*, π2*) such that E(π1*, π2*) > E(π1, π2). It will be shown in Sec. 5 that even using state-of-the-art exploration techniques, the optimal policies (π1*, π2*) can hardly be achieved.
Following the insights from Sec. 2, reward randomization can be applied to a Markov game M similarly: if the reward function in M poses difficulties for PG to discover some particular strategy, it might be easier to reach this desired strategy with a perturbed reward function. Hence, we can define a reward function space R, train a population of policy profiles in parallel with reward functions sampled from R, and select the desired strategy by evaluating the obtained policy profiles in the original game M. Formally, instead of purely learning in the original game M = (S, O, A, R, P), we define a proper subspace R over possible reward functions R : S × A × A → R and use M(R′) = (S, O, A, R′, P) to denote the induced Markov game obtained by replacing the original reward function R with another R′ ∈ R. To apply reward randomization, we draw N samples R(1), . . . , R(N) from R, run PG to learn (π1(i), π2(i)) on each induced game M(R(i)), and pick the desired policy profile (π1(k), π2(k)) by calculating E in the original game M. Lastly, we can fine-tune the policies π1(k), π2(k) in M to further boost the practical performance (see discussion below). We call this learning procedure Reward-Randomized Policy Gradient (RPG), which is summarized in Algo. 1.
Reward-function space: In general, the possible space for a valid reward function is intractably huge. However, in practice, almost all games designed by humans have low-dimensional reward structures based on objects or events, so that we can (almost) always formulate the reward function in a linear form R(s, a1, a2; i) = φ(s, a1, a2; i)^T w, where φ(s, a1, a2; i) is a low-dimensional feature vector and w is some weight.
A simple and general design principle for R is to fix the feature vector φ while only randomizing the weight w, i.e., R = {Rw : Rw(s, a1, a2; i) = φ(s, a1, a2; i)^T w, ‖w‖∞ ≤ Cmax}. Hence, the overall search space remains a similar structure as the original game M but contains a diverse range of preferences over different feature dimensions. Notably, since the optimal strategy is invariant to the scale of the reward function R, theoretically any Cmax > 0 results in the same search space.
However, in practice, the scale of reward may significantly influence MARL training stability, so we typically ensure the chosen Cmax to be compatible with the PG algorithm in use.
Note that a feature-based reward function is a standard assumption in the literature of inverse RL (Ng et al., 2000; Ziebart et al., 2008; Hadfield-Menell et al., 2017). In addition, such a reward structure is also common in many popular RL application domains. For example, in navigation games (Mirowski et al., 2016; Lowe et al., 2017; Wu et al., 2018), the reward is typically set to the negative distance from the target location LT to the agent’s location LA plus a success bonus, so the feature vector φ(s, a) can be written as a 2-dimensional vector [‖LT − LA‖2, I(LT = LA)]; in real-time strategy games (Wu & Tian, 2016; Vinyals et al., 2017; OpenAI et al., 2019), φ is typically related to the bonus points for destroying each type of units; in robotics manipulation (Levine et al., 2016; Li et al., 2020; Yu et al., 2019), φ is often about the distance between the robot/object and its target position; in general multi-agent games (Lowe et al., 2017; Leibo et al., 2017; Baker et al., 2020), φ could contain each agent’s individual reward as well as the joint reward over each team, which also enables the representation of different prosociality levels for the agents by varying the weight w.
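To make the randomized reward family concrete, here is a minimal Python sketch of sampling w with ‖w‖∞ ≤ Cmax and composing a perturbed reward R_w = φ^T w. The feature extractor phi is game-specific (see Sec. 5) and is assumed given; the function names are illustrative, not the authors' code.

import numpy as np

def sample_weight(dim, c_max, rng):
    return rng.uniform(-c_max, c_max, size=dim)      # ensures ||w||_inf <= C_max

def randomized_reward(phi, w):
    def reward(state, a1, a2, agent_id):
        return float(np.dot(phi(state, a1, a2, agent_id), w))
    return reward

# example: a 3-dimensional feature vector as in Monster-Hunt (Sec. 5.1),
# where the original game corresponds to w = [5, 2, -2]
rng = np.random.default_rng(0)
w = sample_weight(dim=3, c_max=5, rng=rng)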
Fine tuning: There are two benefits: (1) the policies found in the perturbed game may not remain an equilibrium in the original game, so fine-tuning ensures convergence; (2) in practice, fine-tuning could further help escape a suboptimal mode via the noise in PG (Ge et al., 2015; Kleinberg et al., 2018). We remark that a practical issue for fine-tuning is that when the PG algorithm adopts the actor-critic framework (e.g., PPO), we need an additional critic warm-start phase, which only trains the value function while keeps the policy unchanged, before the fine-tuning phase starts. This warm-start phase significantly stabilizes policy learning by ensuring the value function is fully functional for variance reduction w.r.t. the reward function R in the original game M when estimating policy gradients.
3.1 LEARNING TO ADAPT WITH DIVERSE OPPONENTS
Algorithm 2: Learning to Adapt
Input: game M, policy set Π2, initial πa1;
repeat
    draw a policy π′2 from Π2;
    evaluate πa1 and π′2 on M and collect data;
    update θa via PG if enough data collected;
until enough iterations;
return πa1(θa);
In addition to the final policies π1*, π2*, another benefit of RPG is that the population of N policy profiles contains diverse strategies (more in Sec. 5). With a diverse set of strategies, we can build an adaptive agent by training against a random opponent policy sampled from the set in every episode, so that the agent is forced to behave differently based on its opponent's behavior. For simplicity, we consider learning an adaptive policy π1^a(θ^a) for agent 1; the procedure remains the same for agent 2. Suppose a policy population P = {π2^(1), . . . , π2^(N)} is obtained during the RR phase. We first construct a diverse strategy set Π2 ⊆ P that contains all the discovered behaviors from P. Then we construct a mixed strategy by randomly sampling a policy π′2 from Π2 in every training episode and run PG to learn π1^a by competing against this constructed mixed strategy. The procedure is summarized in Algo. 2. Note that setting Π2 = P appears to be a simple and natural choice. However, in practice, since P typically contains just a few strategic behaviors, it is unnecessary for Π2 to include every individual policy from P; instead, it is sufficient to ensure that Π2 contains at least one policy from each equilibrium in P (more details in Sec. 5.3). Additionally, this method does not apply to the one-shot game setting (i.e., a horizon of 1) because the adaptive agent has no prior knowledge about its opponent's identity before the game starts.
Implementation: We train an RNN policy for π1^a(θ^a). It is critical that the policy input does not directly reveal the opponent's identity, so that the policy is forced to identify the opponent strategy through what it has observed. On the contrary, when adopting an actor-critic PG framework (Lowe et al., 2017), it is extremely beneficial to include the identity information in the critic input, which makes critic learning substantially easier and significantly stabilizes training. We also utilize a multi-head architecture adapted from the multi-task learning literature (Yu et al., 2019), i.e., we use a separate value head for each training opponent, which empirically results in the best training performance.
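A minimal Python sketch of this training loop is given below. It follows Algorithm 2 with the asymmetry described above: one opponent is drawn per episode, the actor never sees the opponent identity, and the critic selects a value head with that identity. run_episode and ppo_update are placeholder names, and the batch size is an illustrative assumption.

import random

def train_adaptive(game, opponent_set, adaptive_policy, value_heads,
                   run_episode, ppo_update, iters, batch_episodes=64):
    buffer = []
    for _ in range(iters):
        opp_id = random.randrange(len(opponent_set))          # identity hidden from the actor
        opponent = opponent_set[opp_id]
        traj = run_episode(game, adaptive_policy, opponent)   # actor input: own observation only
        traj["value_head"] = value_heads[opp_id]              # the critic may use the identity
        buffer.append(traj)
        if len(buffer) >= batch_episodes:
            ppo_update(adaptive_policy, value_heads, buffer)
            buffer.clear()
    return adaptive_policy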
4 TESTBEDS FOR RPG: TEMPORAL TRUST DILEMMAS
We introduce three 2-player Markov games as testbeds for RPG. All these games have a diverse range of NE strategies, including "risky" cooperative NEs that have high payoffs but are hard to discover, and "safe" non-cooperative NEs with lower payoffs. We call them temporal trust dilemmas. Game descriptions are given at a high level to highlight the game dynamics. More details are in Sec. 5 and App. B.
Gridworlds: We consider two games adapted from Peysakhovich & Lerer (2018b), Monster-Hunt (Fig. 3) and Escalation (Fig. 4). Both games have a 5-by-5 grid and symmetric rewards.
Monster-Hunt contains a monster and two apples. Apples are static while the monster keeps moving towards its closest agent. If a single agent meets the monster, it loses a penalty of 2; if two agents catch the monster together, they both earn a bonus of 5. Eating an apple always raises a bonus of 2. Whenever an apple is eaten or the monster meets an agent, the entity will respawn randomly. The optimal payoff can only be achieved when both agents precisely catch the monster simultaneously.
Escalation contains a lit grid. When two agents both step on the lit grid, they both get a bonus of 1 and a neighboring grid will be lit up in the next timestep. If only one agent steps on the lit grid, it gets a penalty of 0.9L, where L denotes the consecutive cooperation steps until that timestep, and the lit grid will respawn randomly. Agents need to stay together on the lit grid to achieve the maximum payoff despite of the growing penalty. There are multiple NEs: for each L, that both agents cooperate for L steps and then leave the lit grid jointly forms an NE.
Agar.io is a popular multiplayer online game. Players control cells in a Petri dish to gain as much mass as possible by eating smaller cells while avoiding being eaten by larger ones. Larger cells move slower. Each player starts with one cell but can split a sufficiently large cell into two, allowing them to control multiple cells (Wikipedia, 2020). We consider a simplified scenario (Fig. 5) with 2 players (agents) and tiny script cells, which automatically runs away when an agent comes by. There is a low-risk non-cooperative strategy, i.e., two agents stay away from each other and hunt script cells independently. Since the script cells move faster, it is challenging for a single agent to hunt them. By contrast, two agents can cooperate to encircle the script cells to accelerate hunting. However, cooperation is extremely risky for the agent with less mass: two agents need to stay close to cooperate but the larger agent may defect by eating the smaller one and gaining an immediate big bonus.
5 EXPERIMENT RESULTS
In this section, we present empirical results showing that in all the introduced testbeds, including the real-world game Agar.io, RPG always discovers diverse strategic behaviors and achieves an equilibrium with substantially higher rewards than standard multi-agent PG methods. We use PPO (Schulman et al., 2017) for PG training. Training episodes for RPG are accumulated over all the perturbed games. Evaluation results are averaged over 100 episodes in gridworlds and 1000 episodes in Agar.io. We repeat all the experiments with 3 seeds and use X (Y ) to denote mean X with standard deviation Y in all tables. Since all our discovered (approximate) NEs are symmetric for both players, we simply take E(π1, π2) = U1(π1, π2) as our evaluation function and only measure the reward of agent 1 in all experiments for simplicity. More details can be found in appendix.
5.1 GRIDWORLD GAMES
Monster-Hunt: Each agent's reward is determined by three features per timestep: (1) whether two agents catch the monster together; (2) whether the agent steps on an apple; (3) whether the agent meets the monster alone. Hence, we write φ(s, a1, a2; i) as a 3-dimensional 0/1 vector with one dimension per feature. The original game corresponds to w = [5, 2,−2]. We set Cmax = 5 for sampling w. We compare RPG with a collection of baselines, including standard PG (PG), PG with shared reward (PG+SR), population-based training (PBT), which trains the same number of parallel PG policies as RPG, as
well as popular exploration methods, i.e., count-based exploration (PG+CNT) (Tang et al., 2017) and MAVEN (Mahajan et al., 2019). We also consider an additional baseline, DIAYN (Eysenbach et al., 2019), which discovers diverse skills using a trajectory-based diversity reward. For a fair comparison, we use DIAYN to first pretrain diverse policies (conceptually similar to the RR phase), then evaluate the rewards for every pair of obtained policies to select the best policy pair (i.e., evaluation phase, shown with the dashed line in Fig. 6), and finally fine-tune the selected policies until convergence (i.e., fine-tuning phase). The results of RPG and the 6 baselines are summarized in Fig. 6, where RPG consistently discovers a strategy with a significantly higher payoff. Note that the strategy with the optimal payoff may not always directly emerge in the RR phase, nor is there a particular value of w that is consistently the best candidate: e.g., in the RR phase, w = [5, 0, 2] frequently produces a sub-optimal cooperative strategy (Fig. 7(a)) with a reward lower than other w values, but it can also occasionally lead to the optimal strategy (Fig. 7(b)). However, with the fine-tuning phase, the overall RPG procedure always produces the optimal solution. We visualize both emergent cooperative strategies in Fig. 7: in the sub-optimal one (Fig. 7(a)), two agents simply move to grid (1,1) together, stay still and wait for the monster, while in the optimal one (Fig. 7(b)), two agents meet each other first and then actively move towards the monster jointly, which further improves hunting efficiency.
Escalation: We can represent φ(s, a1, a2; i) as a 2-dimensional vector containing (1) whether the two agents are both in the lit grid and (2) the total number of consecutive cooperation steps. The original game corresponds to w = [1,−0.9]. We set Cmax = 5 and show the total number of cooperation steps per episode for several selected w values throughout training in Fig. 8, where RR is able to discover different NE strategies. Note that w = [1, 0] already produces the strategy with the optimal payoff in this game, so the fine-tuning phase is no longer needed.
5.2 2-PLAYER GAMES IN Agar.io
There are two different settings of Agar.io: (1) the standard setting, i.e., an agent gets a penalty of −x for losing a mass x, and (2) the more challenging aggressive setting, i.e., no penalty for mass loss. Note in both settings: (1) when an agent eats a mass x, it always gets a bonus of x; (2) if an agent loses all the mass, it immediately dies while the other agent can still play in the game. The aggressive setting promotes agent interactions and typically leads to more diverse strategies in practice. Since both settings strictly define the penalty function for mass loss, we do not randomize this reward term. Instead, we consider two other factors: (1) the bonus for eating the other agent; (2) the prosocial level of both agents. We use a 2-dimensional vector w = [w0, w1], where 0 ≤ w0, w1 ≤ 1, to denote a particular reward function such that (1) when eating a cell of mass x from the other agent, the bonus is w0 × x, and (2) the final reward is a linear interpolation between R(·; i) and 0.5(R(·; 0) +R(·; 1)) w.r.t. w1, i.e., when w1 = 0, each agent optimizes its individual reward while when w1 = 1, two agents have a shared reward. The original game in both Agar.io settings corresponds to w = [1, 0].
Standard setting: PG in the original game (w = [1, 0]) leads to a typical trust-dilemma dynamics: the two agents first learn to hunt and occasionally Cooperate (Fig. 9(a)), i.e., eat a script cell with the other agent close by; then accidentally one agent Attacks the other agent (Fig. 9(b)), which yields a big
           PBT        w=[0.5, 1]   w=[0, 1]    w=[0, 0]    RPG        RND
Rew.       3.3(0.2)   4.8(0.6)     5.1(0.4)    6.0(0.5)    8.9(0.3)   3.2(0.2)
#Attack    0.4(0.0)   0.7(0.2)     0.3(0.1)    0.5(0.1)    0.9(0.1)   0.4(0.0)
#Coop.     0.0(0.0)   0.6(0.6)     2.3(0.3)    1.6(0.1)    2.0(0.2)   0.0(0.0)
#Hunt      0.7(0.1)   0.6(0.3)     0.3(0.0)    0.7(0.0)    0.9(0.1)   0.7(0.0)
Table 3: Results in the aggressive setting of Agar.io. PBT: population training of parallel PG policies; RR: w=[0, 0] is the best candidate via RR; RPG: fine-tuned policy; RND: PG with RND bonus.
immediate bonus and makes the policy aggressive; finally, the policies converge to the non-cooperative equilibrium where both agents keep apart and hunt alone. The quantitative results are shown in Tab. 2. Baselines include population-based training (PBT) and a state-of-the-art exploration method for high-dimensional states, Random Network Distillation (RND) (Burda et al., 2019). RND and PBT occasionally learn cooperative strategies, while RR stably discovers a cooperative equilibrium with w = [1, 1], and the full RPG further improves the rewards. Interestingly, the best strategy obtained in the RR phase even has a higher Cooperate frequency than the full RPG: fine-tuning transforms the strongly cooperative strategy into a more efficient one, which strikes a better balance between Cooperate and selfish Hunt and produces a higher average reward.
Aggressive setting: Similarly, we apply RPG in the aggressive setting and show results in Tab. 3. Neither PBT nor RND was able to find any cooperative strategies in the aggressive game while RPG stably discovers a cooperative equilibrium with a significantly higher reward. We also observe a diverse set of complex strategies in addition to normal Cooperate and Attack. Fig. 10 visualizes the Sacrifice strategy derived with w = [1, 1]: the smaller agent rarely hunts script cells; instead, it waits in the corner for being eaten by the larger agent to contribute all its mass to its partner. Fig. 11 shows another surprisingly novel emergent strategy by w = [0.5, 1]: each agent first hunts individually to gain enough mass; then one agent splits into smaller cells while the other agent carefully eats a portion of the split agent; later on, when the agent who previously lost mass gains sufficient mass, the larger agent similarly splits itself to contribute to the other one, which completes the (ideally) never-ending loop of partial sacrifice. We name this strategy Perpetual for its conceptual similarity to the perpetual motion machine. Lastly, the best strategy is produced by w = [0, 0] with a balance between Cooperate and Perpetual: they cooperate to hunt script cells to gain mass efficiently and quickly perform mutual sacrifice as long as their mass is sufficiently large for split-and-eat. Hence, although the RPG policy has relatively lower Cooperate frequency than the policy by w = [0, 1], it yields a significantly higher reward thanks to a much higher Attack (i.e., Sacrifice) frequency.
5.3 LEARNING ADAPTIVE POLICIES
Monster-Hunt: We select policies trained by 8 different w values in the RR phase and use half of them for training the adaptive policy and the remaining half as hidden opponents for evaluation. We also make sure that both the training and evaluation policies cover the following 4 strategy modes: (1) M(onster): the agent always moves towards the monster; (2) M(onster)-Alone: the agent moves towards the monster but also tries to keep apart from the other agent; (3) M(onster)-Coop.: the agent seeks to hunt the monster together with the other agent; (4) Apple: the agent only eats apples. The evaluation results are shown in Tab. 4, where the adaptive policy successfully exploits all the test-time opponents, including M(onster)-Alone, which was trained to actively avoid the other agent.
Agent                                        Adapt.      Coop.       Comp.
Opponent: Cooperative −→ Competitive
  #Attack                                    0.2(0.0)    0.3(0.0)    0.1(0.1)
  Rew.                                       0.7(0.7)    -0.2(0.6)   0.8(0.5)
Opponent: Competitive −→ Cooperative
  #Coop.                                     1.0(0.3)    1.4(0.4)    0.3(0.4)
  Rew.                                       2.5(0.7)    3.6(1.2)    1.1(0.7)
Table 5: Adaptation test in Agar.io. Opponent type is switched half-way per episode. #Attack, #Coop.: episode statistics; Rew.: agent reward. Adaptive agents' rewards are close to oracles.
Agar.io: We show that the trained agent can choose to cooperate or compete adaptively in the standard setting. We pick 2 cooperative policies (i.e., Cooperate preferred, w=[1, 0]) and 2 competitive policies (i.e., Attack preferred, w=[1, 1]) and use half of them for training and the other half for testing. For a hard challenge at test time, we switch the opponent within an episode, i.e., we use a cooperative opponent in the first half and then immediately switch to a competitive one, and vice versa. Hence, a desired policy should adapt quickly at halftime. Tab. 5 compares the second-half behavior of the adaptive agent with the oracle pure-competitive/cooperative agents. The rewards of the adaptive agent are close to the oracle: even with half-way switches, the trained policy is able to exploit the cooperative opponent while avoiding being exploited by the competitive one.
6 RELATED WORK AND DISCUSSIONS
Our core idea is reward perturbation. In game theory, this is aligned with the quantal response equilibrium (McKelvey & Palfrey, 1995), a smoothed version of NE obtained when payoffs are perturbed by a Gumbel noise. In RL, reward shaping is popular for learning desired behavior in various domains (Ng et al., 1999; Babes et al., 2008; Devlin & Kudenko, 2011), which inspires our idea for finding diverse strategic behavior. By contrast, state-space exploration methods (Pathak et al., 2017; Burda et al., 2019; Eysenbach et al., 2019; Sharma et al., 2020) only learn low-level primitives without strategy-level diversity (Baker et al., 2020).
RR trains a set of policies, which is aligned with the population-based training in MARL (Jaderberg et al., 2017; 2019; Vinyals et al., 2019; Long et al., 2020; Forestier et al., 2017). RR is conceptually related to domain randomization (Tobin et al., 2017) with the difference that we train separate policies instead of a single universal one, which suffers from mode collapse (see appendix D.2.3). RPG is also inspired by the map-elite algorithm (Cully et al., 2015) from evolutionary learning community, which optimizes multiple objectives simultaneously for sufficiently diverse polices. Our work is also related to Forestier et al. (2017), which learns a set of policies w.r.t. different fitness functions in the singleagent setting. However, they only consider a restricted fitness function class, i.e., the distance to each object in the environment, which can be viewed as a special case of our setting. Besides, RPG helps train adaptive policies against a set of opponents, which is related to Bayesian games (Dekel et al., 2004; Hartline et al., 2015). In RL, there are works on learning when to cooperate/compete (Littman, 2001; Peysakhovich & Lerer, 2018a; Kleiman-Weiner et al., 2016; Woodward et al., 2019; McKee et al., 2020), which is a special case of ours, or learning robust policies (Li et al., 2019; Shen & How, 2019; Hu et al., 2020), which complements our method.
Although we choose decentralized PG in this paper, RR can be combined with any other multi-agent learning algorithms for games, such as fictitious play (Robinson, 1951; Monderer & Shapley, 1996; Heinrich & Silver, 2016; Kamra et al., 2019; Han & Hu, 2019), double-oracle (McMahan et al., 2003; Lanctot et al., 2017; Wang et al., 2019; Balduzzi et al., 2019) and regularized self-play (Foerster et al., 2018; Perolat et al., 2020; Bai & Jin, 2020). Many of these works have theoretical guarantees to find an (approximate) NE but there is little work focusing on which NE strategy these algorithms can converge to when multiple NEs exist, e.g., the stag-hunt game and its variants, for which many learning dynamics fail to converge to a prevalence of the pure strategy Stag (Kandori et al., 1993; Ellison, 1993; Fang et al., 2002; Skyrms & Pemantle, 2009; Golman & Page, 2010)..
In this paper, we primarily focus on how reward randomization empirically helps MARL discover better strategies in practice and therefore only consider stag hunt as a particularly challenging example where an “optimal” NE with a high payoff for every agent exists. In general cases, we can select a desired strategy w.r.t. an evaluation function. This is related to the problem of equilibrium refinement (or equilibrium selection) (Selten, 1965; 1975; Myerson, 1978), which aims to find a subset of equilibria satisfying desirable properties, e.g., admissibility (Banks & Sobel, 1987), subgame perfection (Selten, 1965), Pareto efficiency (Bernheim et al., 1987) or robustness against opponent’s deviation from best response in security-related applications (Fang et al., 2013; An et al., 2011).
ACKNOWLEDGMENTS
This work is supported by National Key R&D Program of China (2018YFB0105000). Co-author Fang is supported, in part, by a research grant from Lockheed Martin. Co-author Wang is supported, in part, by gifts from Qualcomm and TuSimple. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. The authors would like to thank Zhuo Jiang and Jiayu Chen for their support and input during this project. Finally, we particularly thank Bowen Baker for initial discussions and suggesting the Stag Hunt game as our research testbed, which eventually leads to this paper.
A PROOFS
Proof of Theorem 1. We apply self-play policy gradient to optimize θ1 and θ2. Here we consider a projected version, i.e., if at some time t, θ1 or θ2 ∉ [0, 1], we project it to [0, 1] to ensure it is a valid distribution.
We first compute the utility given a pair (θ1, θ2)
U1(θ1, θ2) = aθ1θ2 + cθ1(1 − θ2) + b(1 − θ1)θ2 + d(1 − θ1)(1 − θ2),
U2(θ1, θ2) = aθ1θ2 + bθ1(1 − θ2) + c(1 − θ1)θ2 + d(1 − θ1)(1 − θ2).
We can compute the policy gradient
∇θ1 U1(θ1, θ2) = aθ2 + c(1 − θ2) − bθ2 − d(1 − θ2) = (a + d − b − c)θ2 + c − d,
∇θ2 U2(θ1, θ2) = aθ1 − bθ1 + c(1 − θ1) − d(1 − θ1) = (a + d − b − c)θ1 + c − d.
Recall that in order to find the optimal solution, both θ1 and θ2 need to increase. Also note that the initial θ1 and θ2 determine the final solution. In particular, only if θ1 and θ2 are increasing at the beginning will they converge to the desired solution.
To make either θ1 or θ2 increase, we need to have
(a+ d− b− c)θ1 + c− d > 0 or (a+ d− b− c)θ2 + c− d > 0 (1)
Consider the scenario a − b = ε(d − c). In order for Inequality (1) to hold, we need at least one of θ1, θ2 to be at least 1/(1 + ε).
If we initialize θ1 ∼ Unif[0, 1] and θ2 ∼ Unif[0, 1], the probability that either θ1 ≥ 1/(1 + ε) or θ2 ≥ 1/(1 + ε) is 1 − (1/(1 + ε))² = (2ε + ε²)/(1 + 2ε + ε²) = O(ε).
Proof of Theorem 2. Using a similar observation as in Theorem 1, we know a necessary condition to make PG converge to a sub-optimal NE is
(a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0.
Based on our generating scheme for a, b, c, d and the initialization scheme for θ1, θ2, together with a union bound over the two events above, we can verify that
P ((a+ d− b− c)θ1 + c− d < 0 or (a+ d− b− c)θ2 + c− d < 0) ≤ 0.6. (2)
Since each round is independent, the probability that PG fails in all N runs is upper bounded by 0.6^N. Therefore, the success probability is lower bounded by 1 − 0.6^N = 1 − exp(−Ω(N)).
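As a quick numerical sanity check of the closed-form probability used in the proof of Theorem 1 (this snippet is an illustration, not part of the paper), the chance that at least one of θ1, θ2 ∼ Unif[0, 1] exceeds 1/(1 + ε) should equal (2ε + ε²)/(1 + 2ε + ε²):

import numpy as np

rng = np.random.default_rng(0)
for eps in [0.01, 0.1, 0.5]:
    theta = rng.uniform(0, 1, size=(200_000, 2))
    empirical = np.mean(theta.max(axis=1) >= 1.0 / (1.0 + eps))
    closed_form = (2 * eps + eps ** 2) / (1 + 2 * eps + eps ** 2)
    print("eps=%.2f: empirical=%.4f, closed-form=%.4f" % (eps, empirical, closed_form))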
B ENVIRONMENT DETAILS
B.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, two agents play 10 rounds, that is, both PPO's trajectory length and the episode length are 10. The action of each agent is a 1-dimensional vector, ai = {ti, i ∈ {0, 1}}, where ti = 0 denotes taking the Stag action and ti = 1 denotes taking the Hare action. The observation of each agent consists of the actions taken by itself and its opponent in the last round, i.e., oi^r = {ai^(r−1), a(1−i)^(r−1); i ∈ {0, 1}}, where r denotes the current round. Note that neither agent has taken an action in the first round, so the observation there is oi = {−1, −1}.
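For reference, here is a minimal Python sketch (not the authors' code) of this environment: 10 rounds, actions 0 = Stag / 1 = Hare, observations are both agents' previous actions (with −1 at the first round), and payoffs follow Tab. 1 with the original-game values w = [4, 3, −50, 1].

import numpy as np

class IterativeStagHunt:
    def __init__(self, a=4, b=3, c=-50, d=1, rounds=10):
        # (my action, opponent action) -> (my reward, opponent reward)
        self.payoff = {(0, 0): (a, a), (0, 1): (c, b), (1, 0): (b, c), (1, 1): (d, d)}
        self.rounds = rounds

    def reset(self):
        self.t = 0
        return np.array([-1, -1]), np.array([-1, -1])    # o_0, o_1 at the first round

    def step(self, a0, a1):
        r0, r1 = self.payoff[(a0, a1)]
        self.t += 1
        o0 = np.array([a0, a1])                           # own last action first, then opponent's
        o1 = np.array([a1, a0])
        done = self.t >= self.rounds
        return (o0, o1), (r0, r1), done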
B.2 Monster-Hunt
In Monster-Hunt, two agents can move one step in any of the four cardinal directions (Up, Down, Left, Right) at each timestep. Let ai = {ti, i ∈ {0, 1}} denote the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The position of each agent cannot exceed the border of the 5-by-5 grid, where action execution is invalid. One monster and two apples respawn in different grids at initialization. If an agent eats an apple (i.e., moves onto it in the grid world), it gains 2 points. If two agents try to eat the same apple, the points are randomly assigned to only one agent. Catching the monster alone causes an agent to lose 2 points, but if two agents catch the monster simultaneously, each agent gains 5 points. At each timestep, the monster and apples respawn randomly elsewhere in the grid world if they have been consumed. In addition, the monster chases the agent closest to it at each timestep. The monster may move over an apple during the chase; in this case, the agent gains the sum of the points if it catches the monster and the apple at the same time. Each agent's observation oi is a 10-dimensional vector formed by concatenating its own position pi, the other agent's position p1−i, the monster's position pmonster and the sorted apples' positions papple0, papple1, i.e., oi = {pi, p1−i, pmonster, papple0, papple1; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld.
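The 3-dimensional reward feature used for Monster-Hunt in Sec. 5.1 can then be computed from these positions. The sketch below is illustrative (not the authors' code); the original game corresponds to the weights w = [5, 2, −2].

import numpy as np

def monster_hunt_phi(agent_pos, other_pos, monster_pos, apple_positions):
    on_monster = tuple(agent_pos) == tuple(monster_pos)
    other_on_monster = tuple(other_pos) == tuple(monster_pos)
    on_apple = any(tuple(agent_pos) == tuple(p) for p in apple_positions)
    return np.array([
        float(on_monster and other_on_monster),      # both agents catch the monster together
        float(on_apple),                             # agent steps on an apple
        float(on_monster and not other_on_monster),  # agent meets the monster alone
    ])

def monster_hunt_reward(agent_pos, other_pos, monster_pos, apple_positions,
                        w=np.array([5.0, 2.0, -2.0])):
    return float(monster_hunt_phi(agent_pos, other_pos, monster_pos, apple_positions) @ w)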
B.3 Monster-Hunt WITH MORE THAN 2 AGENTS
Here we consider extending RPG to the general setting of N agents. In most multi-agent games, the reward function is fully symmetric for agents of the same type. Hence, as long as we can formulate the reward function in a linear form over a feature vector and a shared weight, i.e., R(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w, we can directly apply RPG without any modification by setting R = {Rw : Rw(s, a1, . . . , aN; i) = φ(s, a1, . . . , aN; i)^T w}. Note that typically the dimension of the feature vector φ(·) remains fixed w.r.t. the number of agents (N). For example, in the Agar.io game, no matter how many players are in the game, the rules for reward bonuses and penalties remain the same.
Here, we experiment with RPG in Monster-Hunt with 3 agents. The results are shown in Fig. 12. We consider baselines including the standard PG (PG) and population-based training (PBT). RPG reliably discovers a strong cooperation strategy with a substantially higher reward than the baselines.
B.4 Escalation
In Escalation, two agents appear randomly and one grid lights up at initialization. If two agents step on the lit grid simultaneously, each agent gains 1 point, the lit grid goes out, and an adjacent grid lights up. Both agents gain 1 point again if they step on the next lit grid together. If only one agent steps off the path, the other agent loses 0.9L points, where L is the current length of stepping together, and the game is over. Another option is for the two agents to step off the path simultaneously; in that case neither agent is punished and the game continues. As the length L of stepping together increases, the cost of betrayal increases linearly. ai = {ti, i ∈ {0, 1}} denotes the action of agent i, where ti is a discrete 4-dimensional one-hot vector. The observation oi of agent i is composed of its own position pi, the other agent's position p1−i and the lit grid's position plit, i.e., oi = {pi, p1−i, plit; i ∈ {0, 1}}, where p = (u, v) denotes the 2-dimensional coordinates in the gridworld. Moreover, we utilize a GRU to encode the length L implicitly, instead of observing it explicitly.
B.5 Agar.io
In the original online game Agar.io, multiple players are confined to a circular petri dish. Each player controls one or more cells using only a cursor and 2 keyboard keys, "space" and "w". All cells belonging to the player move towards where the cursor is pointing. Cells larger than a threshold split into 2 smaller cells and rush ahead when the player presses "space". Cells larger than another threshold emit tiny motionless food-like cells when the player presses "w". Agar.io has many play modes, such as the "Free-For-All" mode (all players fight on their own and can eat each other) and the "Team" mode (players are separated into two groups; they should cooperate with other players in the same group and eat players belonging to the other group).
We simplify the settings of the original game: agents do not emit tiny motionless balls and all players fight each other (FFA mode). The action space is target × {split, no_split}. target ∈ [0, 1]^2 is the target position toward which all balls belonging to the agent move. The binary action split or no_split indicates whether the player chooses to split, which causes all balls larger than a threshold to split into two smaller ones and rush ahead for a short while. Split balls re-merge after some time, after which the agent can split again. When one agent’s ball meets another agent’s ball and the former is at least 1.2 times larger than the latter, the latter is eaten and the former gains all of its mass. The reward is defined as the increment of the balls’ mass, so every agent’s goal is to grow larger by eating others while avoiding being eaten. Larger balls move more slowly, so it is hard to catch smaller balls simply by chasing them; splitting helps, but it requires rushing in an accurate direction. In our experiments, 7 agents interacted with each other: 2 agents were trained by our algorithm and would quit the game if all of their balls were eaten, while 5 agents were controlled by a script and would be reborn at a random place if all of their balls were eaten. Learning-based agents were initialized larger than script-based agents, so the interaction was basically one-way chasing. In this setting, cooperation was the most efficient behavior for the learning-based agents to gain positive reward: they coordinated to surround script-based agents and catch them.
Observation space: We denote the partial observation of agent i as oi, which includes global information about the agent (denoted oi,global) and descriptions of all balls around the agent, including balls owned by the agent (denoted oi,balls, with oi,balls = {oi,ball,1, oi,ball,2, ..., oi,ball,m}, where oi,ball,j denotes the j-th ball around the agent and m balls are observed in total). oi,global = {li,obs, wi,obs, pi,center, vi, si,alive, ni,own, ni,script, ni,other, ai,last, ri,max, ri,min, mi}, where li,obs, wi,obs (both 1-dimensional real values; from here on the abbreviation (1D, real) is used) are the length and width of the agent’s observation scope, pi,center (2D, real) is its center position, vi (2D, real) is the speed of its center, si,alive (1D, binary) indicates whether the other learning-based agent has been killed, ni,own, ni,script, ni,other (1D, real) are the numbers of each type of nearby balls (3 types: belonging to the agent itself, to a script agent, or to the other learning-based agent), ai,last (3D, real) is the agent’s last action, and ri,max, ri,min (1D, real) are the maximal and minimal radii of all balls belonging to the agent. For any j = 1, 2, ..., m, oi,ball,j = {pi,j,relative, pi,j,absolute, vi,j, vi,j,rush, ri,j, log(ri,j), di,j, ei,j,max, ei,j,min, si,j,rem, ti,j}, where pi,j,relative, pi,j,absolute (2D, real) are the ball’s relative and absolute positions, vi,j is its speed, vi,j,rush is the ball’s additional rushing speed (when a ball splits into 2 smaller balls, these 2 balls gain an additional speed denoted vi,j,rush; otherwise vi,j,rush = 0), ri,j (1D, real) is its radius, di,j is the distance between the ball and the center of the agent, ei,j,max, ei,j,min (1D, binary) indicate whether the ball can be eaten by the largest or smallest ball of the observing agent, si,j,rem (1D, binary) indicates whether the ball is currently able to re-merge, and ti,j (3D, one-hot) is the type of the ball.
The script-based agent automatically chases and splits toward other, smaller agents. When facing extreme danger (defined as a larger learning-based agent being very close to it), it uses a 3-step depth-first search to plan the best escape route. More details of the script can be found in our code. We played against the script-based agent as human players many times; we could never hunt it when controlling only one ball and rarely caught it by splitting.
C TRAINING DETAILS
C.1 GRIDWORLD GAMES
In Monster-Hunt and Escalation, the agents’ networks follow an actor-critic (policy-value) architecture. We consider N = 2 agents with a policy profile π = {π0, π1} parameterized by θ = {θ0, θ1}. The policy network πi takes the observation oi as input, passes it through two hidden layers with 64 units each, and outputs the action ai. The value network takes as input the observations of both agents, o = {o0, o1}, and outputs the V-value of agent i; similarly, two hidden layers with 64 units each precede the output.
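A minimal PyTorch sketch of such a policy-value pair is given below; the layer sizes follow the description above, while the ReLU activation and the categorical action head are assumptions.

```python
# Minimal actor-critic sketch for the gridworld games (illustrative, not the authors' code).
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim, n_actions=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        # Distribution over the 4 cardinal moves.
        return torch.distributions.Categorical(logits=self.net(obs))

class Value(nn.Module):
    def __init__(self, joint_obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs):
        return self.net(joint_obs).squeeze(-1)

# Monster-Hunt: each observation is 10-dimensional; the value net sees both agents' observations.
policy, value = Policy(obs_dim=10), Value(joint_obs_dim=20)
action = policy(torch.zeros(1, 10)).sample()
v = value(torch.zeros(1, 20))
```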
In Escalation, we also place an additional GRU module before the output of the policy network and the value network, respectively, to infer the opponent’s intentions from historical information. Note that the 64-dimensional GRU hidden state h changes whenever the policy network is updated. In order to keep forward information while also using backward information to compute the generalized advantage estimate (GAE) over sufficiently long trajectories, we split the buffer data into small chunks, e.g., 10 consecutive timesteps per chunk. The initial hidden state hinit, i.e., the first hidden state h0, is stored for each data chunk, while {h1, ..., hM−1} are re-computed with another forward pass, where M is the length of a data chunk; we also keep buffer reuse low, e.g., 4 in practice.
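The chunk-wise recomputation can be sketched as follows; the exact bookkeeping around GAE and buffer reuse is omitted, and the shapes are illustrative.

```python
# Sketch of re-computing GRU hidden states for a stored chunk after a policy update
# (illustrative; the actual implementation's chunking and GAE details may differ).
import torch
import torch.nn as nn

chunk_len, obs_dim, hid_dim = 10, 10, 64
gru = nn.GRU(obs_dim, hid_dim, batch_first=True)

# A stored chunk: observations plus the hidden state recorded at the chunk's first step.
obs_chunk = torch.randn(1, chunk_len, obs_dim)   # (batch, M, obs_dim)
h_init = torch.zeros(1, 1, hid_dim)              # kept from collection time

# After the GRU weights change, h_1..h_{M-1} are stale, so run a fresh forward pass
# from the stored h_init to obtain hidden states consistent with the new parameters.
with torch.no_grad():
    hidden_seq, _ = gru(obs_chunk, h_init)       # hidden_seq[:, t] is the refreshed h_{t+1}
```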
Agents in Monster-Hunt and Escalation are trained by PPO with independent parameters. The Adam optimizer is used to update network parameters, and each experiment is repeated 3 times with different random seeds. More optimization hyper-parameter settings are given in Tab. 6. In addition, Monster-Hunt also utilizes GRU modules to infer the opponent’s identity during adaptation training, and the number of parallel threads is set to 64.
Count-based exploration: We add a count-based intrinsic reward rint to the environment reward during training. When the agent’s observation is o, rint = α/no, where α is a hyper-parameter tuned per game (0.3 in Monster-Hunt and 1 in Escalation) and no is the number of times the agent has encountered observation o.
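A minimal sketch of this bonus is shown below; hashing the raw discrete observation as a tuple is an assumption that works for these small gridworlds.

```python
# Sketch of the count-based exploration bonus r_int = alpha / n_o (illustrative only).
from collections import defaultdict

class CountBonus:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.counts = defaultdict(int)

    def __call__(self, obs):
        key = tuple(obs)              # discrete gridworld observations hash directly
        self.counts[key] += 1
        return self.alpha / self.counts[key]

bonus = CountBonus(alpha=0.3)
env_reward = 2.0
total_reward = env_reward + bonus((1, 1, 3, 4, 0, 2, 2, 0, 4, 4))
```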
DIAYN: In Monster-Hunt, we use DIAYN to train 10 diverse policies in the first 140k episodes (DIAYN’s discriminator has 3 FC layers with 256, 128 and 10 units, respectively) and choose the policy that performs best under Monster-Hunt’s reward to fine-tune during the next 280k episodes. Note that DIAYN does not have a warm-start phase before fine-tuning in its original paper, so we do not add one either. Also note that in the first, unsupervised learning phase, DIAYN does not optimize for any specific reward function; hence, we did not plot a reward curve for DIAYN in Fig. 7 for this phase and instead show a dashed line marking the reward of the best selected pair of policies from DIAYN pretraining.
MAVEN: We use the open-sourced implementation of MAVEN from https://github.com/AnujMahajanOxf/MAVEN.
Population-based training: In each PBT trial, we simply train the same number of parallel PG policies as RPG, with different random seeds, for each problem respectively, and choose the one with the best performance as the final policy. Note that the final training curve is averaged over 3 PBT trials.
C.2 Agar.io
In Agar.io, we use PPO as our algorithm and the agents’ networks are also organized in an actor-critic (policy-value) architecture with a GRU unit (i.e., PPO-GRU). We consider N = 2 agents with a policy profile π = {π0, π1} sharing parameters θ. The policy network πi takes the observation oi as input. First, as in Baker et al. (2019), oi,balls is separated into 3 groups according to the balls’ types: oi,ownballs, oi,scriptballs and oi,otherballs. Three multi-head attention models, each with 4 heads and 64 units for the transformations of keys, queries and values, embed the information of the 3 ball types respectively, taking the corresponding part of oi,balls as values and queries and oi,global as keys. Their outputs are concatenated and transformed by an FC layer with 128 units before being fed into a GRU block with 128 units. After that, the hidden state is copied to two heads for the policy and value outputs. The policy head starts with 2 FC layers, both with 128 units, and ends with two heads that generate the discrete (split or no_split) and continuous (target) actions. The value head has 3 FC layers with 128, 128 and 1 units, respectively, and outputs a real number.
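A simplified sketch of one per-type ball encoder is given below. It pools a variable-size set of ball features with multi-head attention, using the agent's global features as the query; this simplifies the key/query roles described above, and the feature dimensions are placeholders.

```python
# Simplified sketch of embedding a set of ball features with multi-head attention
# (illustrative only; not the authors' exact architecture).
import torch
import torch.nn as nn

class BallSetEncoder(nn.Module):
    def __init__(self, ball_dim, global_dim, d_model=64, n_heads=4):
        super().__init__()
        self.ball_proj = nn.Linear(ball_dim, d_model)
        self.global_proj = nn.Linear(global_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, balls, global_feat):
        # balls: (B, m, ball_dim) for one ball type; global_feat: (B, global_dim)
        kv = self.ball_proj(balls)                       # keys/values from the ball set
        q = self.global_proj(global_feat).unsqueeze(1)   # (B, 1, d_model) query
        pooled, _ = self.attn(q, kv, kv)                 # attend over the ball set
        return pooled.squeeze(1)                         # (B, d_model)

# One encoder per ball type; their outputs are concatenated before the GRU.
enc = BallSetEncoder(ball_dim=17, global_dim=16)         # dims are placeholders
out = enc(torch.randn(2, 5, 17), torch.randn(2, 16))     # -> shape (2, 64)
```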
PPO-GRU is trained with 128 parallel environment threads. Agar.io’s episode length is uniformly sampled between 300 and 400 during both training and evaluation. Buffer data are split into small chunks of length 32 in order to diversify the training data and stabilize the training process, and the buffer is reused 4 times to increase data efficiency. Hidden states of each chunk, except at its beginning, are re-computed after each reuse to preserve PPO’s on-policy property as much as possible. Each action is repeated 5 times in the environment whenever the policy is executed, and only the observation after the last repeat is fed back to the policy. Each training run starts with curriculum learning in the first 1.5e7 steps: the speed of the script agents is multiplied by x, where x is uniformly sampled between max{0, (n − 1e7)/5e6} and min{1, max{0, (n − 5e6)/5e6}} at the beginning of each episode, with n the number of training steps taken so far. After the curriculum, the speed is fixed to the standard value. Each experiment is run 3 times with different random seeds, and the Adam optimizer is used to update network parameters. More optimization hyper-parameter settings are in Tab. 7.
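The curriculum over the script agents' speed can be sketched as follows (illustrative; the exact scheduling code may differ).

```python
# Sketch of the speed curriculum for script-based agents (illustrative only).
import random

def script_speed_multiplier(n_steps, warmup_end=1.5e7):
    """Sample the per-episode speed multiplier x for the script-based agents."""
    if n_steps >= warmup_end:
        return 1.0  # curriculum finished: full speed
    lo = max(0.0, (n_steps - 1e7) / 5e6)
    hi = min(1.0, max(0.0, (n_steps - 5e6) / 5e6))
    return random.uniform(lo, hi)

print(script_speed_multiplier(6e6), script_speed_multiplier(1.2e7), script_speed_multiplier(2e7))
```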
D ADDITIONAL EXPERIMENT RESULTS
D.1 Monster-Hunt
In Monster-Hunt, we set Cmax = 5 for sampling w. Fig. 13 illustrates the policies discovered by several selected w values, where different strategic modalities can be clearly observed: e.g., with w = [0, 5, 0], agents always avoid the monster and only eat apples. In Fig. 14, it is worth noting that w = [5, 0, 2] can yield the best policy profile (i.e., two agents move together to hunt the monster) and does not even require further fine-tuning with some seeds. However, the performance of w = [5, 0, 2] is significantly unstable, and it may converge to another NE (i.e., two agents move to a corner and wait for the monster) with other seeds. Therefore w = [5, 0, 5], which yields stable strong cooperation strategies across seeds, is chosen in the RR phase when w = [5, 0, 2] performs poorly. We report the rewards obtained by different policies in Fig. 14, where the policies learned by RPG produce the highest rewards.
D.2 Agar.io
D.2.1 STANDARD SETTING
We sampled 4 different w values, which varied in their degree of cooperation. We also ran experiments using only the baseline PG, or PG with the intrinsic reward generated by Random Network Distillation (RND), to compare with RPG. RR lasted for 40M steps, but only the best reward parameter found in RR (w = [1, 1]) was warmed up for 3M steps and fine-tuned for another 17M steps. PG and RND were also trained for 60M steps in order to compare with RPG fairly. In Fig. 15, we can see that PG and RND produced very low rewards because they both converged to non-cooperative policies, whereas w = [1, 1] produced the highest rewards after RR, and the rewards increased further after fine-tuning.
D.2.2 AGGRESSIVE SETTING
We sampled 5 different w values, and their behaviors were much more diverse; the other training settings were the same as in the standard setting. In Fig. 16, note that simply sharing the reward (w = [1, 1]) did not obtain a very high reward, because attacking each other also benefits each other, so the two agents just learned to sacrifice themselves. Again, Fig. 16 shows that the rewards of RPG were far ahead of the other policies, while both PG and PG+RND failed to learn cooperative strategies.
We also list all results of the Standard and Aggressive settings in Tab. 8 for a clearer comparison.
D.2.3 UNIVERSAL REWARD-CONDITIONED POLICY
We also tried to train a universal policy conditioned on w, by randomly sampling a different w at the beginning of each episode during training instead of fixing different w values and training separate policies. However, as Fig. 17 illustrates, the learning process was very unstable and the model performed almost identically under different w, due to an intrinsic disadvantage of an on-policy algorithm dealing with multiple tasks: the learning algorithm may devote more effort to w values for which higher rewards are easier to obtain while ignoring performance on other w values, which makes it very hard to obtain diverse behaviors.
D.3 LEARN ADAPTIVE POLICY
In this section, we add the opponent’s identity ψ to the input of the value network to stabilize the training process and boost the performance of the adaptive agent. ψ is a C-dimensional one-hot vector, where C denotes the number of opponents.
D.3.1 Iterative Stag-Hunt
In Iterative Stag-Hunt, we randomize the payoff matrix, which is a 4-dimensional vector, and set Cmax = 4 for sampling w. The number of parallel threads is 512 and the episode length is 10. Other training hyper-parameter settings are the same as in Tab. 6. Fig. 18 shows that different w = [a, b, c, d] values (i.e., [4, 0, 0, 0], [0, 0, 0, 4], [0, 4, 4, 0], [4, 1, 4, 0]) yield different policy profiles; e.g., with w = [0, 0, 0, 4], both agents tend to eat the hare. The original game corresponds to w = [4, 3, −50, 1]. Tab. 9 shows that w = [4, 0, 0, 0] yields the highest reward and reaches the optimal NE without further fine-tuning.
Using the 4 different strategies obtained in the RR phase as opponents, we can train an adaptive policy that makes proper decisions according to the opponent’s identity. Fig. 19 shows the adaptation training curve; the policy yields adaptive actions stably after 5e4 episodes. At the evaluation stage, we introduce 4 hand-designed opponents to test the performance of the adaptive policy: a Stag opponent (always hunts the stag), a Hare opponent (always eats the hare), a Tit-for-Tat (TFT) opponent (hunts the stag at the first step and then takes the action executed by the other agent in the previous step), and a Random opponent (randomly chooses to hunt the stag or eat the hare at each step). Tab. 10 shows that the adaptive policy exploits all hand-designed strategies, including the Tit-for-Tat opponent, which differs significantly from the trained opponents.
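For concreteness, a minimal sketch of the hand-designed Tit-for-Tat evaluation opponent is given below; the action encoding is an assumption.

```python
# Sketch of the Tit-for-Tat evaluation opponent (illustrative only).
STAG, HARE = 0, 1

class TitForTat:
    def __init__(self):
        self.first_step = True

    def act(self, opponent_last_action=None):
        if self.first_step or opponent_last_action is None:
            self.first_step = False
            return STAG                  # hunt the stag at the first step
        return opponent_last_action      # then mirror the opponent's previous action

tft = TitForTat()
print(tft.act(), tft.act(HARE), tft.act(STAG))   # 0, 1, 0
```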
D.3.2 Monster-Hunt
We use the policy population Π2 trained with 4 w values (i.e., w = [5, 1, −5], w = [4, 2, −2], w = [0, 5, 0], w = [5, 0, 5]) in the RR phase as opponents for training the adaptive policy. In addition, we sample another 4 w values (i.e., w = [5, 0, 0], w = [−5, 5, −5], w = [−5, 0, 5], w = [5, −5, 5]) with Cmax = 5 to train new opponents for evaluation. Fig. 20 shows the adaptation training curve of the Monster-Hunt game, where the adaptive policy takes actions stably according to the opponent’s identity.
D.3.3 Agar.io
In Agar.io, we use 2 types of policies from RR, w = [1, 0] (i.e., cooperative) and w = [0, 1] (i.e., competitive), as opponents, and train an adaptive policy that faces each opponent with probability 50% in the standard setting, while only its value head can observe the opponent’s type directly. We then expect the policy to cooperate or compete appropriately with the corresponding opponent. As Fig. 21 illustrates, the adaptive policy learns to cooperate with cooperative partners while avoiding being exploited by competitive partners, and it exploits both types of partners.
More details on the training and evaluation process: oracle pure-cooperative policies are learned against a competitive policy for 4e7 steps, and so are oracle pure-competitive policies; the adaptive policy is trained for 6e7 steps. The length of each episode is 350 steps (half is 175 steps). During evaluation, the policy facing the opponent is the adaptive policy for the first 175 steps, whether we are testing adaptive or oracle policies. When testing adaptive policies, the same policy keeps playing for another 175 steps while the opponent is switched to the other type and its hidden state is reset to zero. When testing oracle policies, the policy facing the opponent is switched to the corresponding oracle policy and the opponent also switches its type, with both hidden states reset. | 1. What is the focus of the paper regarding multi-agent games and exploration?
2. What are the strengths of the proposed approach compared to previous methods like policy gradient?
3. What are some concerns or questions the reviewer has regarding the theoretical analysis, specifically Theorems 1 and 2?
4. How does the reviewer assess the effectiveness of the algorithm when applied to different types of opponents?
5. What are some potential limitations of the proposed method, particularly in terms of sampling rewards and its impact on policy diversity? | Review | Review
This paper proposes to use reward randomization to explore the policy space in multi-agent games. The idea is that in most multi-agent games multiple Nash equilibria exist with different payoffs, and the goal is to find the NE that provides the highest payoff. Policy gradient and its variants, which have obtained many practical successes, in general fail to find the NE with the highest payoff. A first approach could be to restart PG with different initializations to find different NEs and then select the best one. In contrast, the authors propose to randomize the reward structure to explore different policies: the policies are optimized on different reward structures with PG, and the policy that leads to the highest payoff is selected and then optimized with PG on the original reward structure. The authors provide theoretical results showing that reward randomization has a higher probability of finding the high-payoff NE than random initializations of PG. The authors also propose to use reward randomization for learning an agent that plays against different types of opponents. The experiments are done on three games and show the interest of the approach in comparison with several baselines.
The paper is well written, proposes interesting ideas supported by analytical and experimental results. However the reviewer has some remarks, concerns and questions.
Concerning Theorem 1, O(\epsilon) for a probability is not a strong result: it can be higher than 1. After looking at the proof, the reviewer thinks it seems possible to provide the exact expression of the probability of finding the high-payoff NE.
Concerning Theorem 2, the proof is quite informal and the reviewer is not sure that it is correct. In particular, it is not clear whether the same condition as in Theorem 1 is necessary: a-b = \epsilon (d-c). In the statement it seems not, because a, b, c, d are uniformly sampled and there is no \epsilon in the statement, but the remark stating that RR needs at most O(\log 1/\epsilon) restarts to achieve 1-\epsilon suggests that it does. Moreover, the reviewer thinks that the proposed analysis (statements of Theorems 1 and 2) would be more convincing if the number of restarts needed by the two approaches to find the high-payoff NE w.h.p. were compared (as done in the remark).
In Algorithm 2, the authors write that the policy \pi'_2 is drawn from \Pi_2, but in Section 5.3 of the experiments the authors explain that \Pi_2 is carefully built, meaning that the policies in \Pi_2 are chosen to be effective. This step is not in Algorithm 2, which is still correct, but it suggests that if \Pi_2 is not well chosen, Algorithm 2 does not work. This leads to the reviewer's main concern. The rewards seem to be uniformly sampled with the constraint that their sum is no more than C_max. However, with this kind of uniform sampling the set of games used for exploring policies contains many games that do not respect the constraints induced by the original game M. For instance, in Stag-Hunt we have a \geq b \geq d > c; under uniform sampling most of the induced games do not respect this reward structure, which can lead to inefficient policies. For instance, if a < b and a < d, an efficient policy is to not track the stag and to hunt the hare. The reviewer understands that the diversity of rewards enables the diversity of obtained policies, but wonders whether sampling the rewards with respect to the reward constraints of the game would already be enough to obtain diverse policies. At a minimum, it could be interesting for the reader to have this reasonable baseline. By the way, maybe this baseline would allow Algorithm 2 to work without carefully choosing the set of policies \Pi_2.
Overall, the paper is interesting, but the reviewer thinks that it could be better. The reviewer can change his mind if his concerns are answered.
After the rebuttal I raised my score. |
ICLR | Title
FASG: Feature Aggregation Self-training GCN for Semi-supervised Node Classification
Abstract
Recently, graph convolutional networks (GCNs) have achieved significant success in many graph-based learning tasks, especially node classification, due to their excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying predictions on graphs where only a few nodes have known labels. In this paper, we propose a novel self-training algorithm based on GCN to boost semi-supervised node classification on graphs with little supervised information. Inspired by the self-supervision strategy, the proposed method introduces an ingenious checking part that adds new nodes as supervision after each training epoch to enhance node prediction. In particular, the embedded checking part is designed based on aggregated features, which is more accurate than previous methods and boosts node classification significantly. The proposed algorithm is validated on three public benchmarks in comparison with several state-of-the-art baseline algorithms, and the results illustrate its excellent performance.
1 INTRODUCTION
Graph convolutional networks (GCNs) can be seen as the migration of convolutional neural networks (CNNs) to non-Euclidean structured data. Due to their excellent ability in representation learning, GCNs have achieved significant success in many graph-based learning tasks, including node clustering, graph classification and link prediction (Dwivedi et al., 2020). Kipf & Welling (2016) proposed a GCN model from the perspective of spectral graph theory and validated its effectiveness on the semi-supervised node classification task. Subsequent models such as GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017), SGCN (Wu et al., 2019) and APPNP (Klicpera et al., 2018) designed more sophisticated neighborhood aggregation functions from spatial or spectral views. These methods obtain much better results on semi-supervised node classification than traditional methods such as MLP, DeepWalk (Perozzi et al., 2014), etc. However, the prediction accuracy of such GCN models depends largely on the quantity and quality of supervised information, and it decreases significantly when the number of labeled nodes is quite small (Li et al., 2018). The main reason is that scarce supervised information is difficult to spread far across the graph, so that unlabeled nodes can hardly make full use of the supervised information for prediction.
Addressing the above issue, many studies have been devoted to improving the representation ability by designing multi-layer GCN models (Li et al., 2019). However, as illustrated in Kipf & Welling (2016), the representation ability of GCN can hardly be improved by simply stacking layers as in an MLP. Moreover, stacking too many layers tends to cause over-smoothing (Xu et al., 2018), which makes all node embeddings indistinguishable. Alternatively, Li et al. (2018) proposed to improve the reasoning ability of GCN models by applying self-training techniques during training. Rather than trying to enhance the expressive ability of the model, the self-training strategy expands the supervised information by adding unlabeled nodes with high confidence to the training set at each round. Following this line, Sun et al. (2019) proposed a multi-stage self-training strategy (M3S) to enrich the training set, which uses deep cluster (Caron et al., 2018) and an aligning mechanism to generate pseudo-labels of nodes for updating the training set. Later, Zhou et al. (2019) proposed a dynamic self-training framework to continuously refresh the training set by directly using the output of GCN without a checking part. In general, these self-training algorithms generate pseudo-labels using relatively simple checking mechanisms, which may introduce false labels as supervision and prevent the improvement of prediction accuracy.
In this paper, we propose a novel feature aggregation self-training GCN (FASG) algorithm for semi-supervised node classification. We first propose a lightweight classifier that applies a linear SVM to aggregated node features, and validate that it achieves performance comparable to popular GCN approaches. Furthermore, this classifier serves as a checking part in the multi-round training process to generate pseudo-labels, which are used to filter out unreliable nodes when expanding the supervised information. By fully considering the structural information of graph nodes, the newly developed checking part improves the accuracy of the generated pseudo-labels and finally boosts node classification. Finally, we illustrate that the proposed self-training strategy can be integrated with various existing GCN models to improve their prediction performance.
The proposed algorithm is validated in comparison with several state-of-the-art baseline algorithms on three public benchmarks, and the experimental results illustrate that it outperforms all compared algorithms in general on all benchmarks. We will release the source code upon publication of this paper.
2 RELATED WORK
In the past decade CNNs have achieved great success in many areas of machine learning (Krizhevsky et al., 2012; LeCun et al., 1998; Sermanet et al., 2012), but their applications are mainly restricted to Euclidean structured data (Bruna et al., 2013). Consequently, in recent years more and more studies have been devoted to learning representations of non-Euclidean structured data such as graphs.
Graph neural networks (GNNs) play an important role in the field of graph representation learning, as they can learn representations of nodes or of the whole graph. There are many well-known GNN architectures, including GCN (Kipf & Welling, 2016), graph recurrent neural networks (Hajiramezanali et al., 2019) and graph autoencoders (Pan et al., 2018). As one of the most important GNN architectures, GCNs can be roughly categorized into spectral and spatial approaches. The spectral approaches (Bruna et al., 2013) define the convolution operation through the Laplacian eigendecomposition of the graph, thereby filtering the graph structure in the spectral domain. On the basis of the Chebyshev polynomial approximation (Defferrard et al., 2016) of the graph Laplacian matrix, Kipf & Welling (2016) proposed a much simpler GCN framework that limits the filter to the first-order neighborhood around each node. On the other hand, spatial approaches implement convolution in the spatial domain by defining aggregation and transformation functions. Notable work includes GraphSAGE (Hamilton et al., 2017), which formalizes representation learning as aggregation and combination and proposes several effective aggregation strategies such as the mean-aggregator and max-aggregator, and GAT (Veličković et al., 2017), which focuses on the diversity among connected nodes and leverages a self-attention mechanism to learn the important information in neighborhoods. Although these models have achieved far better performance on node classification than traditional methods, they still suffer from scarce supervised information, since the limited number of GCN layers makes it hard to propagate the supervised information to the entire graph.
Self-training is a classic topic in the NLP field predating the deep learning era (Hearst, 1991; Riloff et al., 1999; Rosenberg et al., 2005; Van Asch & Daelemans, 2016), and has recently been introduced into semi-supervised node classification. To make full use of supervised information and improve prediction accuracy, Li et al. (2018) proposed to improve GCN models with a self-training mechanism, which trains and applies a base model in rounds and adds nodes with high confidence as supervision after each round. The newly added nodes are expected to be beneficial for predicting the remaining nodes, so as to enhance the final performance of the model. Following this line, the M3S training algorithm (Sun et al., 2019) pretrains a model over the labeled data, and then assigns pseudo-labels to highly confident unlabeled samples, which are treated as labeled data in the next round of training. Later, Zhou et al. (2019) proposed a dynamic self-training GCN that generalizes and simplifies previous methods by directly using the output of GCN, without a checking part, to continuously refresh the training set. Similarly, Yang et al. (2020) proposed self-enhanced GNN (SEG) to improve the quality of the input data using the outputs of existing GNN models. These self-training methods expand the labeled node set with relatively simple checking mechanisms, or even by directly using the output of GCN; as a result, they may introduce noise as supervision and thus hurt the final prediction performance.
3 PRELIMINARIES
An attributed relational graph of n nodes can be represented by G = (V,E,X), where V = {v1, v2, ..., vn} denotes the set of n nodes, and E = {eij} is the edge set. X = {x1, x2, ...xn} ∈ Rn×d is the set of attributes of all nodes, where xi is the d-dimensional attribute vector associated with node vi. Adjacency matrix A = {aij} ∈ Rn×n denotes the topological structure of graph G, where aij > 0 if there is an edge eij between node vi and vj and aij = 0 otherwise.
For semi-supervised node classification, the node set V can be split into a labeled node set VL ⊆ V with attributes XL ⊆ X and an unlabeled one VU = V \VL with attributes XU = X\XL. We assume each node belongs to exactly one class, and denote by YL = {yi} the ground-truth labels of the node set VL, where yi is the class label of node vi ∈ VL. The aim of semi-supervised node classification is to learn a classifier from the graph and the known node labels YL, and use it to predict labels for the unlabeled nodes VU . Define a classifier fθ : (ỸL, ỸU ) ← fθ(X,A, YL), where θ denotes the parameters of the model, and ỸL and ỸU are the predicted labels of the nodes VL and VU respectively. In general, we want the predicted labels ỸL to be as close as possible to the ground-truth labels YL, i.e.,
θ∗ = argmin_θ d(ỸL, YL) = argmin_θ d(fθ(X,A, YL), YL), (1)
where d(·, ·) is a distance measure between two label sets. In recent years GCN has become a popular model for semi-supervised node classification; it aggregates a structural feature for each node and uses the resulting features, rather than the initial attributes X, for label prediction.
4 THE PROPOSED METHOD
In this section, we elaborate our proposed framework, feature aggregation self-training GCN (FASG), for semi-supervised node classification. First, we analyze pseudo-labels to explain the importance of the checking part in a self-training framework. Second, we illustrate the design of the checking part in our framework and show its superiority on graph networks. We then elaborate every part of the framework and present the FASG training algorithm. Finally, we integrate our framework with various GNN models.
4.1 ANALYSIS OF PSEUDO-LABELS
It is common for existing self-training GCN models to assign pseudo-labels to highly confident nodes and add them to the supervised information. Therefore, the quality of the generated pseudo-labels is crucial for node classification, and wrongly introduced supervision may hurt the final prediction performance. Table 1 summarizes the prediction accuracy of the GCN model (Kipf & Welling, 2016) on Cora when it is trained with different ratios of falsely labeled nodes. It shows that the accuracy decreases significantly as the ratio of bad training nodes increases.
4.2 CHECKING PART WITH FEATURE AGGREGATION
To guarantee the quality of the generated pseudo-labels, we develop a delicate checking part with the assistance of feature aggregation. Feature aggregation can be described as Xaggre = D̃−1ÃX, where D is the degree matrix of the graph, D̃ = D + I, and Ã = A + I. We use the Deep Graph Library (DGL) (Wang et al., 2019) to implement feature aggregation.
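A minimal sketch of this aggregation step, using plain SciPy instead of DGL for clarity, is given below; the toy graph at the end is only for illustration.

```python
# Minimal sketch of the row-normalized feature aggregation X_aggre = D~^{-1} A~ X
# (illustrative; the paper uses DGL, this uses plain SciPy).
import numpy as np
import scipy.sparse as sp

def aggregate(adj, feats, hops=1):
    """adj: (n, n) sparse adjacency matrix, feats: (n, d) feature matrix."""
    n = adj.shape[0]
    a_tilde = adj + sp.eye(n)                          # add self-loops: A~ = A + I
    d_inv = sp.diags(1.0 / np.asarray(a_tilde.sum(axis=1)).ravel())
    prop = d_inv @ a_tilde                             # D~^{-1} A~
    for _ in range(hops):                              # feat_i = propagation applied i times
        feats = prop @ feats
    return feats

adj = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float))
feat5 = aggregate(adj, np.eye(3), hops=5)              # 5-hop aggregated features
```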
To illustrate the effectiveness of feature aggregation, we apply t-SNE (Maaten & Hinton, 2008) to visualize the aggregated features of each node of the Cora dataset in Fig. 1, where feati denotes the features aggregated from the original features i times. As shown in Fig. 1(a), the original node features are mixed together and difficult to distinguish. As the fusion of node features goes deeper from feat1 to feat4, nodes with the same label tend to aggregate into clusters in the 2-D space. However, the cluster boundaries become blurred again once the aggregation goes beyond a certain level, e.g., feat15 and feat20.
Furthermore, we apply a linear SVM (Cortes & Vapnik, 1995) to the aggregated features feat5 to form a classifier, and report its performance in Table 2 in comparison with several GCN models on three citation networks, Cora, CiteSeer and PubMed. Clearly, this relatively simple classifier achieves performance comparable to popular GCN models, owing to the representation ability of the aggregated features.
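The checking classifier can be sketched as follows: a linear SVM fit on the concatenation of multi-hop aggregated features. The hop range and the SVM hyper-parameters here are assumptions rather than the exact settings used in the paper.

```python
# Sketch of the checking classifier: a linear SVM on concatenated aggregated features
# (illustrative; hop range and SVM hyper-parameters are assumptions).
import numpy as np
import scipy.sparse as sp
from sklearn.svm import LinearSVC

def checker_pseudo_labels(adj, feats, labels, train_idx, n_hops=9):
    n = adj.shape[0]
    a_tilde = adj + sp.eye(n)
    prop = sp.diags(1.0 / np.asarray(a_tilde.sum(axis=1)).ravel()) @ a_tilde
    blocks, cur = [], feats
    for _ in range(n_hops):                     # feat_1 ... feat_n_hops
        cur = prop @ cur
        blocks.append(cur)
    concat = np.concatenate(blocks, axis=1)     # concatenated multi-hop features
    clf = LinearSVC(max_iter=5000).fit(concat[train_idx], labels[train_idx])
    return clf.predict(concat)                  # pseudo-labels for every node
```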
For the self-training mechanism, we employ the above classifier, which combines feature aggregation with a linear SVM, to serve as the checking part for generating pseudo-labels of nodes. In Fig. 2, we compare the quality of pseudo-labels generated by different checking mechanisms, including the plain self-training method (Li et al., 2018), deep cluster (Sun et al., 2019) and the proposed checking part with feature aggregation. Our method introduces fewer bad training nodes than the compared methods at different label rates on both Cora and CiteSeer, which accounts for the better node classification performance shown in Sec. 5.
4.3 MULTI-STAGE SELF-TRAINING FRAMEWORK
The overall framework of the proposed feature aggregation self-training GCN (FASG) algorithm is illustrated in Fig. 3. Instead of using deep cluster and an aligning mechanism, we first apply feature aggregation and a linear SVM classifier to build the checking part. After each training round we use both the output GCN confidence and the checking part to choose reliable nodes as supervised ones for the next round. The training iterates for K rounds and then outputs the final predictions of the unlabeled nodes.
The proposed FASG algorithm is described in detail in Algo. 1. At the beginning, we concatenate the features from feat0 to feat10 and feed them into a linear SVM to build the checking part. At each round, if the output of a node predicted by the GCN model is consistent with its pseudo-label generated by the checking part, we expand this node, with high certainty, into the supervised set. To avoid expanding too many nodes at one time, only the t nodes with top confidence per class are checked at each round. Note that the base GCN model in Algo. 1 is not specified, i.e., the proposed FASG algorithm can be integrated with various GCN models to boost node classification, whose effectiveness is validated in Table 6.
Algorithm 1 The FASG Algorithm
Input: G = (V,E,X): the input graph. A: the adjacency matrix of graph G. L,U : the labeled and unlabeled node sets, respectively. GCNconv(·): the base GCN model. SVM(·): the linear SVM classifier. K: the number of self-training rounds.
Output: predictions of all the unlabeled nodes ỸU .
1: Conduct feature aggregation to generate feat1, . . . , feat9.
2: Form the concatenation feat ← [feat1, . . . , feat9].
3: Generate pseudo-labels Y ′U = SVM(feat, L, U).
4: Let L′ ← L, U ′ ← U .
5: for k = 1 to K do
6: Train the GCN model and get predictions and the confidence matrix: ỸU ,M = GCNconv(A,X,L′, U ′).
7: for each class j do
8: Select the t nodes {vj1, . . . , vjt} in U ′ with top confidences.
9: for i = 1 to t do
10: if ỹji equals y′ji then
11: L′ ← L′ ∪ {vji}, U ′ ← U ′ \ {vji}.
12: end if
13: end for
14: end for
15: end for
16: Compute the final predictions ỸU = GCNconv(A,X,L′, U ′).
17: return ỸU
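The node-expansion step of one round can be sketched as follows; the surrounding GCN training loop is omitted, and selecting per-class candidates by the GCN's predicted class is an assumption about how "top confidences per class" is implemented.

```python
# Sketch of one FASG expansion round: per class, take the t most confident unlabeled
# nodes and keep only those whose GCN prediction agrees with the SVM pseudo-label
# (illustrative only).
import numpy as np

def expand_labeled_set(gcn_probs, pseudo_labels, labeled, unlabeled, t=10):
    """gcn_probs: (n, C) softmax confidences; pseudo_labels: (n,) SVM labels."""
    gcn_pred = gcn_probs.argmax(axis=1)
    labeled, unlabeled = set(labeled), set(unlabeled)
    for c in range(gcn_probs.shape[1]):
        cand = [v for v in unlabeled if gcn_pred[v] == c]
        cand.sort(key=lambda v: gcn_probs[v, c], reverse=True)
        for v in cand[:t]:
            if gcn_pred[v] == pseudo_labels[v]:     # checking part agrees
                labeled.add(v)
                unlabeled.discard(v)
    return labeled, unlabeled
```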
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
We conduct experiments on three open graph datasets derived from citation networks (Cora, CiteSeer and PubMed) (Sen et al., 2008) for the semi-supervised node classification task. In these citation networks, nodes denote documents whose features are bag-of-words representations, edges denote citation relationships, and labels indicate the field to which the corresponding documents belong.
Though our framework can be integrated with various GNN models, we choose the plain GCN (Kipf & Welling, 2016) as the base model in Algo. 1 in this experiment. Specifically, we set the number of GCN layers n_layers=2, learning rate lr=1e-2, training epochs=600 and weight decay=5e-4 for the GCN model, and fix t = 10 in Algo. 1. Similar to the M3S algorithm (Sun et al., 2019), we regard the number of rounds K as a hyper-parameter and assign the most suitable K to each label rate: K is 40, 10, 5, 4, 4 for Cora, 30, 25, 15, 10, 10 for CiteSeer and 5, 4, 3 for PubMed. The label rate indicates the fraction of labeled nodes, which are randomly chosen from the whole node set with an extra measure to guarantee the balance between different classes. For each trial we repeat the testing 10 times and report the mean accuracy.
5.2 COMPARISON WITH BASELINE ALGORITHMS
The compared baseline algorithms in this experiment include traditional learning methods such as Node2Vec (Grover & Leskovec, 2016) and LP (Wu et al., 2012), and classic GNN approaches such as GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017) and ChebNet (Defferrard et al., 2016). We also include Co-training and Self-training proposed by Li et al. (2018) and other self-training based approaches, MultiStage (Sun et al., 2019) and M3S (Sun et al., 2019), as baselines. The relevant experimental settings and results are all taken from the original papers.
The comparison of these algorithms on the three benchmarks is summarized in Tables 3, 4 and 5, respectively. GNN-based approaches surpass traditional learning approaches in general on all three datasets. By adopting a multi-round training strategy and expanding the supervised information iteratively, the algorithms based on the self-training mechanism achieve remarkable improvements in prediction accuracy, especially when the label rate is quite small. Furthermore, the proposed FASG algorithm outperforms all baseline algorithms in all tested scenarios. The superiority of our method derives from the delicate checking part based on feature aggregation, which guarantees the high quality of the expanded supervised information, as illustrated in Fig. 2.
5.3 ABLATION STUDIES
5.3.1 THE NUMBER OF TRAINING ROUNDS
To reveal how our algorithm is affected by the number of training rounds K, we report the numbers of newly added nodes and bad training nodes and the prediction accuracy on the CiteSeer dataset for different label rates as K increases from 0 to 50. Note that when K is 0, the framework degrades to the plain GCN model. As shown in Fig. 4(a), accuracies grow rapidly during the first few rounds for all label rates. For a small label rate (e.g., 0.005), the accuracy tends to grow continuously as K increases. On the contrary, for a relatively large label rate (e.g., 0.04), the accuracy reaches its peak rapidly with a small K and saturates afterward.
Fig. 4(b) shows the number of newly added nodes after each training round, which is consistent with the change of accuracy in Fig. 4(a). For a small label rate, a number of new nodes are added as supervision at each round, so the accuracy improves continuously. For a relatively large label rate, the number of newly added nodes drops markedly after a few training rounds, which results in the saturation of the accuracy.
5.3.2 INTEGRATION WITH DIFFERENT BASE GCN MODELS
As described in Sec. 4.3, the proposed FASG algorithm can be integrated with various base GCN models to improve their prediction performance. For validation, we combine FASG with several popular GCN models and report their prediction accuracy on the CiteSeer dataset with label rate 0.5% in Table 6, where GS-M and GS-P represent GraphSage with the mean and maxpool aggregators, respectively. All tested base GCN models achieve similar performance, and all of them benefit significantly in prediction accuracy from applying our FASG to expand the supervised information iteratively.
6 CONCLUSION
In this paper, we first analyzed the limitations of plain GCN models in dealing with semi-supervised node classification tasks, and then proposed a feature aggregation self-training GCN algorithm to improve the prediction accuracy. Our algorithm iteratively expands reliable nodes into the supervised set by checking both the GCN outputs and the pseudo-labels of nodes, which are generated by applying a linear SVM classifier to the aggregated features. This checking mechanism provides supervised information of better quality than previous methods and boosts the final node classification significantly. In experiments, the proposed algorithm outperforms state-of-the-art baseline algorithms in general on all tested benchmarks, especially when the ratio of labeled nodes is quite small. | 1. What is the focus of the paper regarding node classification in graphs?
2. What are the strengths of the proposed approach, particularly in introducing new methods?
3. What are the weaknesses of the paper, especially regarding its contributions and improvements?
4. Do you have any concerns about the experiments and comparisons with other works?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This manuscript proposes FASG, a self-training model with GCN to improve node classification in graphs. FASG introduces a checking part to add new nodes as supervision to enhance the classification model. Experiments on several datasets show that FASG is better than some baseline methods.
Pros
The problem is important.
The presentation is good.
Introduce a new method to generate nodes with pseudo-labels.
Cons
The novelty of this work is limited. According to my understanding, the contribution lies in the checking part, and the checking part just uses GCN and SVM for generating pseudo-labeled nodes. In my opinion, it is a general choice and the novelty is limited.
The improvement over baseline methods is not significant. Most improvement percentages are smaller than 1%, especially on the Cora and PubMed datasets.
Experiments should be improved. Baseline methods are relatively weak; there are many works [1,2,3] on graph pre-training or self-training that could be compared and discussed.
[1] Strategies for Pre-training Graph Neural Networks, ICLR 2020
[2] GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training, KDD 2020
[3] GPT-GNN: Generative Pre-Training of Graph Neural Networks, KDD 2020 |
ICLR | Title
FASG: Feature Aggregation Self-training GCN for Semi-supervised Node Classification
Abstract
Recently, graph convolutional networks (GCNs) have achieved significant success in many graph-based learning tasks, especially node classification, due to their excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying predictions on graphs where only a few nodes have known labels. In this paper, we propose a novel self-training algorithm based on GCN to boost semi-supervised node classification on graphs with little supervised information. Inspired by the self-supervision strategy, the proposed method introduces an ingenious checking part that adds new nodes as supervision after each training epoch to enhance node prediction. In particular, the embedded checking part is designed based on aggregated features, which is more accurate than previous methods and boosts node classification significantly. The proposed algorithm is validated on three public benchmarks in comparison with several state-of-the-art baseline algorithms, and the results illustrate its excellent performance.
1 INTRODUCTION
Graph convolutional networks (GCNs) can be seen as the migration of convolutional neural networks (CNNs) to non-Euclidean structured data. Due to their excellent ability in representation learning, GCNs have achieved significant success in many graph-based learning tasks, including node clustering, graph classification and link prediction (Dwivedi et al., 2020). Kipf & Welling (2016) proposed a GCN model from the perspective of spectral graph theory and validated its effectiveness on the semi-supervised node classification task. Subsequent models such as GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017), SGCN (Wu et al., 2019) and APPNP (Klicpera et al., 2018) designed more sophisticated neighborhood aggregation functions from spatial or spectral views. These methods obtain much better results on semi-supervised node classification than traditional methods such as MLP, DeepWalk (Perozzi et al., 2014), etc. However, the prediction accuracy of such GCN models depends largely on the quantity and quality of supervised information, and it decreases significantly when the number of labeled nodes is quite small (Li et al., 2018). The main reason is that scarce supervised information is difficult to spread far across the graph, so that unlabeled nodes can hardly make full use of the supervised information for prediction.
Addressing the above issue, many studies have been devoted to improving the representation ability by designing multi-layer GCN models (Li et al., 2019). However, as illustrated in Kipf & Welling (2016), the representation ability of GCN can hardly be improved by simply stacking layers as in an MLP. Moreover, stacking too many layers tends to cause over-smoothing (Xu et al., 2018), which makes all node embeddings indistinguishable. Alternatively, Li et al. (2018) proposed to improve the reasoning ability of GCN models by applying self-training techniques during training. Rather than trying to enhance the expressive ability of the model, the self-training strategy expands the supervised information by adding unlabeled nodes with high confidence to the training set at each round. Following this line, Sun et al. (2019) proposed a multi-stage self-training strategy (M3S) to enrich the training set, which uses deep cluster (Caron et al., 2018) and an aligning mechanism to generate pseudo-labels of nodes for updating the training set. Later, Zhou et al. (2019) proposed a dynamic self-training framework to continuously refresh the training set by directly using the output of GCN without a checking part. In general, these self-training algorithms generate pseudo-labels using relatively simple checking mechanisms, which may introduce false labels as supervision and prevent the improvement of prediction accuracy.
In this paper, we propose a novel feature aggregation self-training GCN (FASG) algorithm for semi-supervised node classification. We first propose a lightweight classifier that applies a linear SVM to aggregated node features, and validate that it achieves performance comparable to popular GCN approaches. Furthermore, this classifier serves as a checking part in the multi-round training process to generate pseudo-labels, which are used to filter out unreliable nodes when expanding the supervised information. By fully considering the structural information of graph nodes, the newly developed checking part improves the accuracy of the generated pseudo-labels and finally boosts node classification. Finally, we illustrate that the proposed self-training strategy can be integrated with various existing GCN models to improve their prediction performance.
The proposed algorithm is validated in comparison with several state-of-the-art baseline algorithms on three public benchmarks, and the experimental results illustrate that it outperforms all compared algorithms in general on all benchmarks. We will release the source code upon publication of this paper.
2 RELATED WORK
In the past decade CNNs have achieved great success in many areas of machine learning (Krizhevsky et al., 2012; LeCun et al., 1998; Sermanet et al., 2012), but their applications are mainly restricted to Euclidean structured data (Bruna et al., 2013). Consequently, in recent years more and more studies have been devoted to learning representations of non-Euclidean structured data such as graphs.
Graph neural networks (GNNs) play an important role in the field of graph representation learning, as they can learn representations of nodes or of the whole graph. There are many well-known GNN architectures, including GCN (Kipf & Welling, 2016), graph recurrent neural networks (Hajiramezanali et al., 2019) and graph autoencoders (Pan et al., 2018). As one of the most important GNN architectures, GCNs can be roughly categorized into spectral and spatial approaches. The spectral approaches (Bruna et al., 2013) define the convolution operation through the Laplacian eigendecomposition of the graph, thereby filtering the graph structure in the spectral domain. On the basis of the Chebyshev polynomial approximation (Defferrard et al., 2016) of the graph Laplacian matrix, Kipf & Welling (2016) proposed a much simpler GCN framework that limits the filter to the first-order neighborhood around each node. On the other hand, spatial approaches implement convolution in the spatial domain by defining aggregation and transformation functions. Notable work includes GraphSAGE (Hamilton et al., 2017), which formalizes representation learning as aggregation and combination and proposes several effective aggregation strategies such as the mean-aggregator and max-aggregator, and GAT (Veličković et al., 2017), which focuses on the diversity among connected nodes and leverages a self-attention mechanism to learn the important information in neighborhoods. Although these models have achieved far better performance on node classification than traditional methods, they still suffer from scarce supervised information, since the limited number of GCN layers makes it hard to propagate the supervised information to the entire graph.
Self-training is a classic topic in the NLP field predating the deep learning era (Hearst, 1991; Riloff et al., 1999; Rosenberg et al., 2005; Van Asch & Daelemans, 2016), and has recently been introduced into semi-supervised node classification. To make full use of supervised information and improve prediction accuracy, Li et al. (2018) proposed to improve GCN models with a self-training mechanism, which trains and applies a base model in rounds and adds nodes with high confidence as supervision after each round. The newly added nodes are expected to be beneficial for predicting the remaining nodes, so as to enhance the final performance of the model. Following this line, the M3S training algorithm (Sun et al., 2019) pretrains a model over the labeled data, and then assigns pseudo-labels to highly confident unlabeled samples, which are treated as labeled data in the next round of training. Later, Zhou et al. (2019) proposed a dynamic self-training GCN that generalizes and simplifies previous methods by directly using the output of GCN, without a checking part, to continuously refresh the training set. Similarly, Yang et al. (2020) proposed self-enhanced GNN (SEG) to improve the quality of the input data using the outputs of existing GNN models. These self-training methods expand the labeled node set with relatively simple checking mechanisms, or even by directly using the output of GCN; as a result, they may introduce noise as supervision and thus hurt the final prediction performance.
3 PRELIMINARIES
An attributed relational graph of n nodes can be represented by G = (V,E,X), where V = {v1, v2, ..., vn} denotes the set of n nodes, and E = {eij} is the edge set. X = {x1, x2, ...xn} ∈ Rn×d is the set of attributes of all nodes, where xi is the d-dimensional attribute vector associated with node vi. Adjacency matrix A = {aij} ∈ Rn×n denotes the topological structure of graph G, where aij > 0 if there is an edge eij between node vi and vj and aij = 0 otherwise.
For semi-supervised node classification, the node set V can be split into a labeled node set VL ⊆ V with attributes XL ⊆ X and an unlabeled one VU = V \VL with attributes XU = X\XL. We assume each node belongs to exactly one class, and denote by YL = {yi} the ground-truth labels of the node set VL, where yi is the class label of node vi ∈ VL. The aim of semi-supervised node classification is to learn a classifier from the graph and the known node labels YL, and use it to predict labels for the unlabeled nodes VU . Define a classifier fθ : (ỸL, ỸU ) ← fθ(X,A, YL), where θ denotes the parameters of the model, and ỸL and ỸU are the predicted labels of the nodes VL and VU respectively. In general, we want the predicted labels ỸL to be as close as possible to the ground-truth labels YL, i.e.,
θ∗ = argmin_θ d(ỸL, YL) = argmin_θ d(fθ(X,A, YL), YL), (1)
where d(·, ·) is a distance measure between two label sets. In recent years GCN has become a popular model for semi-supervised node classification; it aggregates a structural feature for each node and uses the resulting features, rather than the initial attributes X, for label prediction.
4 THE PROPOSED METHOD
In this section, we elaborate our proposed framework, feature aggregation self-training GCN (FASG), for semi-supervised node classification. First, we analyze pseudo-labels to explain the importance of the checking part in a self-training framework. Second, we illustrate the design of the checking part in our framework and show its superiority on graph networks. We then elaborate every part of the framework and present the FASG training algorithm. Finally, we integrate our framework with various GNN models.
4.1 ANALYSIS OF PSEUDO-LABELS
It is common for existing self-training GCN models to assign pseudo-labels to highly confident nodes and add them to the supervised information. Therefore, the quality of the generated pseudo-labels is crucial for node classification, and wrongly introduced supervision may hurt the final prediction performance. Table 1 summarizes the prediction accuracy of the GCN model (Kipf & Welling, 2016) on Cora when it is trained with different ratios of falsely labeled nodes. It shows that the accuracy decreases significantly as the ratio of bad training nodes increases.
4.2 CHECKING PART WITH FEATURE AGGREGATION
To guarantee the quality of the generated pseudo-labels, we develop a delicate checking part with the assistance of feature aggregation. Feature aggregation can be described as Xaggre = D̃−1ÃX, where D is the degree matrix of the graph, D̃ = D + I, and Ã = A + I. We use the Deep Graph Library (DGL) (Wang et al., 2019) to implement feature aggregation.
To illustrate the effectiveness of feature aggregation, we apply t-SNE (Maaten & Hinton, 2008) to visualize the aggregated features of each node of the Cora dataset in Fig. 1, where feati denotes the features aggregated from the original features i times. As shown in Fig. 1(a), the original node features are mixed together and difficult to distinguish. As the fusion of node features goes deeper from feat1 to feat4, nodes with the same label tend to aggregate into clusters in the 2-D space. However, the cluster boundaries become blurred again once the aggregation goes beyond a certain level, e.g., feat15 and feat20.
Furthermore, we apply a linear SVM (Cortes & Vapnik, 1995) to the aggregated features feat5 to form a classifier, and report its performance in Table 2 in comparison with several GCN models on three citation networks, Cora, CiteSeer and PubMed. Clearly, this relatively simple classifier achieves performance comparable to popular GCN models, owing to the representation ability of the aggregated features.
For the self-training mechanism, we employ the above classifier, which combines feature aggregation with a linear SVM, to serve as the checking part for generating pseudo-labels of nodes. In Fig. 2, we compare the quality of pseudo-labels generated by different checking mechanisms, including the plain self-training method (Li et al., 2018), deep cluster (Sun et al., 2019) and the proposed checking part with feature aggregation. Our method introduces fewer bad training nodes than the compared methods at different label rates on both Cora and CiteSeer, which accounts for the better node classification performance shown in Sec. 5.
4.3 MULTI-STAGE SELF-TRAINING FRAMEWORK
The overall framework of the proposed feature aggregation self-training GCN (FASG) algorithm is illustrated in Fig. 3. Instead of using deep clustering and an aligning mechanism, we first apply feature aggregation and a linear SVM classifier to build the checking part. After each training round we use both the GCN output confidence and the checking part to choose reliable nodes as supervised ones for the next round. The training iterates for K rounds and then outputs the final predictions of the unlabeled nodes.
The proposed FASG algorithm is described in detail in Algo. 1. At the beginning, we concatenate features from feat0 to feat10 and put them into a linear SVM to build the checking part. At each round, if the output of a node predicted by the GCN model is consistent with its pseudo-label generated by the checking part, we expand this node with high certainty into the supervised set. To avoid expanding too many nodes at a time, only the t nodes with top confidence in each class are checked at each round. Note that the base GCN model in Algo. 1 is not specified, i.e., the proposed FASG algorithm can be integrated with various GCN models to boost node classification; this is validated in Table 6.
Algorithm 1 The FASG Algorithm
Input: G = (V, E, X): the input graph. A: the adjacency matrix of graph G. L, U: the labeled and unlabeled node sets, respectively. GCNconv(·): the base GCN model. SVM(·): the linear SVM classifier. K: the number of self-training rounds.
Output: predictions of all the unlabeled nodes ỸU.
1: Conduct feature aggregation to generate feat1, . . . , feat9.
2: Form the concatenation feat ← [feat1, . . . , feat9].
3: Generate pseudo-labels Y′U = SVM(feat, L, U).
4: Let L′ ← L, U′ ← U.
5: for k = 1 to K do
6:   Train the GCN model and get predictions and the confidence matrix: ỸU, M = GCNconv(A, X, L′, U′).
7:   for each class j do
8:     Select the t nodes {vj1, . . . , vjt} in U′ with top confidences.
9:     for i = 1 to t do
10:      if ỹji equals y′ji then
11:        L′ ← L′ ∪ {vji}, U′ ← U′ \ {vji}.
12:      end if
13:    end for
14:  end for
15: end for
16: Compute the final predictions ỸU = GCNconv(A, X, L′, U′).
17: return ỸU
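A compact sketch of the multi-round loop in Algorithm 1 is given below. It is a simplified illustration, not the authors' implementation: `train_gcn` is a hypothetical placeholder for any base model trainer that returns per-node predictions and confidences, and `pseudo_labels` plays the role of the SVM checking part sketched earlier.

```python
import numpy as np

def fasg_self_training(n_nodes, labeled_idx, y_labeled, pseudo_labels,
                       train_gcn, num_rounds=5, t=10, num_classes=3):
    """Simplified multi-round loop of Algorithm 1.

    train_gcn(L_idx, y_L) -> (pred, conf): hypothetical base-model trainer returning a
    predicted label and a confidence for every node; pseudo_labels come from the
    checking part (e.g. the linear-SVM classifier above).
    """
    L_idx, y_L = list(labeled_idx), list(y_labeled)
    unlabeled = set(range(n_nodes)) - set(L_idx)

    for _ in range(num_rounds):
        pred, conf = train_gcn(np.array(L_idx), np.array(y_L))
        for c in range(num_classes):
            # Unlabeled nodes the base model assigns to class c, most confident first.
            cand = sorted((v for v in unlabeled if pred[v] == c),
                          key=lambda v: conf[v], reverse=True)
            for v in cand[:t]:
                if pseudo_labels[v] == c:    # checking part: GCN and SVM must agree
                    L_idx.append(v); y_L.append(c); unlabeled.discard(v)
    pred, _ = train_gcn(np.array(L_idx), np.array(y_L))
    return pred

# Toy run with a stand-in "GCN" (fixed predictions, random confidences).
rng = np.random.default_rng(0)
n, k = 50, 3
pseudo = rng.integers(0, k, size=n)                        # checking-part pseudo-labels
fake_gcn = lambda L_idx, y_L: (pseudo, rng.random(n))      # NOT a real GCN, just a placeholder
labels = fasg_self_training(n, labeled_idx=[0, 1, 2], y_labeled=pseudo[:3],
                            train_gcn=fake_gcn, num_rounds=3, t=5, num_classes=k)
```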
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
We conduct experiments on three open graph datasets derived from citation networks (Cora, CiteSeer and PubMed) (Sen et al., 2008) for the semi-supervised node classification task. In these citation networks, nodes denote documents whose features are formed by bag-of-words representations, edges denote citation relationships between documents, and node labels indicate the field the corresponding documents belong to.
Though our framework can be integrated with various GNN models, we choose plain GCN (Kipf & Welling, 2016) as the base model in Algo. 1 in this experiment. Specifically, we set the number of GCN layers n_layers = 2, learning rate lr = 1e-2, training epochs = 600 and weight decay = 5e-4 for the GCN model, and fix t = 10 in Algo. 1. Similar to the M3S algorithm (Sun et al., 2019), we also regard the number of rounds K as a hyper-parameter and assign the most suitable K to each tested label rate. We choose K as 40, 10, 5, 4, 4 for the Cora dataset, 30, 25, 15, 10, 10 for CiteSeer and 5, 4, 3 for PubMed. The label rate indicates the amount of labeled nodes, which are randomly chosen from the whole node set with an extra measure to guarantee balance between different classes. For each trial we repeat the test 10 times and report the mean accuracy.
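For reference, these settings can be collected into a single configuration; the values below are the ones reported in this section, while the dictionary structure itself is just an illustrative choice.

```python
FASG_CONFIG = {
    "gcn": {"n_layers": 2, "lr": 1e-2, "epochs": 600, "weight_decay": 5e-4},
    "t": 10,                      # top-confidence nodes checked per class per round
    "rounds_K": {                 # chosen per dataset, one value per tested label rate
        "Cora":     [40, 10, 5, 4, 4],
        "CiteSeer": [30, 25, 15, 10, 10],
        "PubMed":   [5, 4, 3],
    },
    "repeats": 10,                # each trial repeated 10 times, mean accuracy reported
}
```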
5.2 COMPARISON WITH BASELINE ALGORITHMS
The compared baseline algorithms in this experiment include traditional learning methods such as Node2Vec (Grover & Leskovec, 2016) and LP (Wu et al., 2012), and classic GNN approaches such as GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017) and ChebNet (Defferrard et al., 2016). We also include Co-training and Self-training proposed by Li et al. (2018) and other self-learning based approaches, MultiStage (Sun et al., 2019) and M3S (Sun et al., 2019), as baselines. The relevant experimental settings and results are all taken from the original papers.
The comparison of these algorithms on the three benchmarks is summarized in Tables 3, 4 and 5, respectively. It is observed that GNN-based approaches surpass traditional learning approaches in general on all three datasets. By adopting a multi-round training strategy and expanding the supervised information iteratively, the algorithms based on the self-training mechanism achieve remarkable improvements in prediction accuracy, especially when the label rate is quite small. Furthermore, the proposed FASG algorithm outperforms all baseline algorithms in all tested scenarios. The superiority of our method derives from the delicate checking part based on feature aggregation, which is able to guarantee the high quality of the expanded supervised information as illustrated in Fig. 2.
5.3 ABLATION STUDIES
5.3.1 THE NUMBER OF TRAINING ROUNDS
To reveal how our algorithm is affected by the number of training rounds K, we report the number of newly added nodes, the number of bad training nodes and the prediction accuracy on the CiteSeer dataset for different label rates as K increases from 0 to 50. Note that when K is 0, the framework degrades to the plain GCN model. As shown in Fig. 4(a), accuracies grow rapidly during the first few rounds for all label rates. For a small label rate (e.g. 0.005), the accuracy tends to grow continuously as K increases. In contrast, for a relatively large label rate (e.g. 0.04), the accuracy reaches its peak rapidly with a small K and saturates afterward.
Fig. 4(b) shows the number of newly added nodes after each training round, which is consistent with the change of the accuracy in Fig. 4(a). For a small label rate, a substantial number of new nodes are expanded as supervision information at each round, so the accuracy improves continuously. In contrast, for a relatively large label rate, the number of newly added nodes drops markedly after a few training rounds, which results in the saturation of the accuracy.
5.3.2 INTEGRATION WITH DIFFERENT BASE GCN MODELS
As described in Sec. 4.3, the proposed FASG algorithm can be integrated with various base GCN models to improve their prediction performance. For validation, we combine FASG with several popular GCN models and report their prediction accuracy on the CiteSeer dataset with label rate 0.5% in Table 6, where GS-M and GS-P represent GraphSAGE with the mean and maxpool aggregators, respectively. All tested base GCN models achieve similar performance on their own, and all of them benefit significantly in prediction accuracy from applying our FASG to expand the supervised information iteratively.
6 CONCLUSION
In this paper, we first analyzed the limitations of plain GCN models in dealing with semi-supervised node classification tasks, and subsequently proposed a feature aggregation self-training GCN algorithm aiming to improve the prediction accuracy. Our algorithm iteratively expands reliable nodes into the supervised set by checking both the GCN outputs and the pseudo-labels of nodes that are generated by applying a linear SVM classifier to the aggregated features. This checking mechanism is able to provide supervised information of better quality than previous methods and boosts the final node classification significantly. In experiments, the proposed algorithm outperforms state-of-the-art baseline algorithms in general on all tested benchmarks, especially when the ratio of labeled nodes is quite small. | 1. What is the main contribution of the paper in semi-supervised learning?
2. What are the strengths and weaknesses of the proposed method compared to other GNN-based approaches?
3. How does the reviewer assess the effectiveness of the "checking part" in ensuring high-quality pseudo labels?
4. Do you have any concerns regarding the choice of "t" in the method and its impact on performance?
5. How does the reviewer evaluate the clarity and quality of the writing in section 4.1? | Review | Review
Summary
This paper proposes a self-training based semi-supervised framework for node classification using Graph Neural Networks when the amount of labelled data is very limited. Self-training is performed by incorporating highly confident samples with their corresponding predicted class as the pseudo-label. The authors show that incorporating correct pseudo-labels is a crucial step, as performance degrades rapidly when wrong labels are incorporated. This work ensures high quality of pseudo-labels via a "checking part" with feature aggregation. Aggregated features with a linear SVM perform comparably to GNN methods.
Strong Points
Well motivated paper with good performance
The proposed approach uses a self-training strategy with a linear SVM, which performs well when the amount of labelled data is scarce.
The framework can be easily incorporated into any GNN-based approach.
Weak Points
I am a little surprised that the feature aggregation performs comparably to GCN, considering that it is essentially the forward pass of GCN. The improvement may owe more to incorporating the extra examples via self-training than to feature aggregation.
Generally, easy examples (very similar to the training distribution) end up with high confidence scores. Incorporating such nodes might make hard nodes (closer to the decision boundary) harder to classify; the authors' thoughts on this point are required.
The choice of "t" seems ad hoc; an analysis of how "t" impacts the performance of the model would be interesting to see.
Other comments
Section 4.1: It was not clear to me how exactly the "bad nodes" were introduced to the model. A few more details would be helpful for readers. |
ICLR | Title
FASG: Feature Aggregation Self-training GCN for Semi-supervised Node Classification
Abstract
Recently, graph convolutional networks (GCNs) have achieved significant success in many graph-based learning tasks, especially node classification, due to their excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying predictions on graphs where only a few nodes have known labels. In this paper, we propose a novel self-training algorithm based on GCN to boost semi-supervised node classification on graphs with little supervised information. Inspired by the self-supervision strategy, the proposed method introduces an ingenious checking part to add new nodes as supervision after each training round to enhance node prediction. In particular, the embedded checking part is designed based on aggregated features, which is more accurate than previous methods and boosts node classification significantly. The proposed algorithm is validated on three public benchmarks in comparison with several state-of-the-art baseline algorithms, and the results illustrate its excellent performance.
1 INTRODUCTION
Graph convolutional network (GCN) can be seen as the migration of the convolutional neural network (CNN) to non-Euclidean structured data. Due to its excellent ability in representation learning, GCN has achieved significant success in many graph-based learning tasks, including node clustering, graph classification and link prediction (Dwivedi et al., 2020). Kipf & Welling (2016) proposed a GCN model from the perspective of spectral graph theory and validated its effectiveness on the semi-supervised node classification task. Subsequent models such as GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017), SGCN (Wu et al., 2019) and APPNP (Klicpera et al., 2018) designed more sophisticated neighborhood aggregation functions from spatial or spectral views. These methods obtain much better results on semi-supervised node classification than traditional methods such as MLP, DeepWalk (Perozzi et al., 2014), etc. However, the prediction accuracy of such GCN models depends largely on the quantity and quality of supervised information, and it decreases significantly when the number of labeled nodes is quite small (Li et al., 2018). The main reason is that scarce supervised information is difficult to spread far in the graph, so unlabeled nodes can hardly make full use of it for prediction.
Addressing the above issue, many studies have been devoted to improving the representation ability by designing multi-layer GCN models (Li et al., 2019). However, the representation ability of GCN, as illustrated in Kipf & Welling (2016), can hardly be improved by simply stacking layers as in an MLP. Moreover, stacking too many layers tends to cause over-smoothing (Xu et al., 2018), which makes all node embeddings indistinguishable. Alternatively, Li et al. (2018) proposed to improve the reasoning ability of GCN models by applying self-training techniques during training. Rather than trying to enhance the expressive ability of the model, the self-training strategy expands the supervised information by adding unlabeled nodes with high confidence to the training set at each round. Following this line, Sun et al. (2019) proposed a multi-stage self-training strategy (M3S) to enrich the training set, which uses deep clustering (Caron et al., 2018) and an aligning mechanism to generate pseudo-labels of nodes for updating the training set. Later, Zhou et al. (2019) proposed a dynamic self-training framework that continuously refreshes the training set by directly using the output of GCN without a checking part. In general, these self-training algorithms generate pseudo-labels using relatively simple checking mechanisms, which may introduce false labels as supervision information and prevent the improvement of prediction accuracy.
In this paper, we propose a novel feature aggregation self-training GCN (FASG) algorithm for semi-supervised node classification. We first propose a lightweight classifier that applies a linear SVM to aggregated node features, and validate that it achieves performance comparable to popular GCN approaches. Furthermore, this classifier serves as a checking part in the multi-round training process to generate pseudo-labels, which are used to filter unreliable nodes when expanding the supervised information. By fully considering the structural information of graph nodes, the newly developed checking part is able to improve the accuracy of the generated pseudo-labels and ultimately boost node classification. Finally, we illustrate that the proposed self-training strategy can be integrated with various existing GCN models to improve their prediction performance.
The proposed algorithm is validated in comparison with several state-of-the-art baseline algorithms on three public benchmarks, and the experimental results illustrate that it outperforms all compared algorithms in general on all benchmarks. We will release the source code upon publication of this paper.
2 RELATED WORK
In the past decade, CNNs have achieved great success in many areas of machine learning (Krizhevsky et al., 2012; LeCun et al., 1998; Sermanet et al., 2012), but their applications are mainly restricted to Euclidean structured data (Bruna et al., 2013). Consequently, in recent years more and more studies have been devoted to learning representations on non-Euclidean structured data such as graphs.
Graph neural networks (GNNs) play an important role in the field of graph representation learning; they can learn representations of nodes or of the whole graph. There are many well-known GNN architectures, including GCN (Kipf & Welling, 2016), graph recurrent neural networks (Hajiramezanali et al., 2019) and graph autoencoders (Pan et al., 2018). As one of the most important GNN architectures, GCNs can be roughly categorized into spectral and spatial approaches. The spectral approaches (Bruna et al., 2013) define the convolution operation via eigendecomposition of the graph Laplacian, thereby filtering the graph structure in the spectral domain. On the basis of Chebyshev polynomials (Defferrard et al., 2016) of the graph Laplacian matrix, Kipf & Welling (2016) proposed a much simpler GCN framework that limits the filter to the first-order neighborhood around each node. On the other hand, spatial approaches implement convolution in the spatial domain by defining aggregation and transform functions. Notable work includes GraphSAGE (Hamilton et al., 2017), which cast representation learning into a formal pattern of aggregation and combination and proposed several effective aggregation strategies such as the mean-aggregator and max-aggregator, and GAT (Veličković et al., 2017), which focuses on the diversity of connected nodes and leverages a self-attention mechanism to learn the important information in neighborhoods. Although these models have achieved far better performance on node classification than traditional methods, they still suffer from scarce supervised information, because the limited number of GCN layers makes it hard to propagate the supervised information to the entire graph.
Self-training is a classic topic in the NLP field dating back to before the deep learning era (Hearst, 1991; Riloff et al., 1999; Rosenberg et al., 2005; Van Asch & Daelemans, 2016), and has recently been introduced into semi-supervised node classification. To make full use of supervised information and improve prediction accuracy, Li et al. (2018) proposed to improve the GCN model with a self-training mechanism, which trains and applies a base model in rounds and adds nodes with high confidence as supervision after each round. The newly added nodes are expected to be beneficial for predicting the remaining nodes, so as to enhance the final performance of the model. Following this line, the M3S training algorithm (Sun et al., 2019) pretrains a model over the labeled data, and then assigns pseudo-labels to highly confident unlabeled samples that are treated as labeled data for the next round of training. Later, Zhou et al. (2019) proposed a dynamic self-training GCN that generalizes and simplifies previous methods by directly using the output of GCN, without a checking part, to continuously refresh the training set. Similarly, Yang et al. (2020) proposed self-enhanced GNN (SEG) to improve the quality of the input data using the outputs of existing GNN models. These self-training methods expand the labeled node set with relatively simple checking mechanisms or even by directly using the output of GCN; as a result, they may introduce noise as supervision and thus hurt the final prediction performance.
3 PRELIMINARIES
An attributed relational graph of n nodes can be represented by G = (V,E,X), where V = {v1, v2, ..., vn} denotes the set of n nodes, and E = {eij} is the edge set. X = {x1, x2, ...xn} ∈ Rn×d is the set of attributes of all nodes, where xi is the d-dimensional attribute vector associated with node vi. Adjacency matrix A = {aij} ∈ Rn×n denotes the topological structure of graph G, where aij > 0 if there is an edge eij between node vi and vj and aij = 0 otherwise.
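As a toy illustration of this notation (the numbers below are made up purely for the example):

```python
import numpy as np

# A graph with n = 4 nodes and d = 3 attributes per node.
n, d = 4, 3
edges = [(0, 1), (1, 2), (2, 3)]                      # E
X = np.arange(n * d, dtype=float).reshape(n, d)       # node attributes, one row per node

A = np.zeros((n, n))                                  # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                           # a_ij > 0 iff there is an edge e_ij

labeled = {0: 1, 3: 0}                                # V_L with ground-truth labels Y_L
unlabeled = [v for v in range(n) if v not in labeled] # V_U, whose labels are to be predicted
```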
For semi-supervised node classification, the node set V can be split into a labeled node set VL ⊆ V with attributes XL ⊆ X and an unlabeled one VU = V \ VL with attributes XU = X \ XL. We assume each node belongs to exactly one class, and denote by YL = {yi} the ground-truth labels of the node set VL, where yi is the class label of node vi ∈ VL. The aim of semi-supervised node classification is to learn a classifier from the graph and the known node labels YL, and use it to predict labels for the unlabeled nodes VU . Define a classifier fθ : (ỸL, ỸU ) ← fθ(X, A, YL), where θ denotes the parameters of the model, and ỸL and ỸU are the predicted labels of nodes VL and VU , respectively. In general, we want the predicted labels ỸL to be as close to the ground-truth labels YL as possible, i.e.,
θ∗ = argmin_θ d(ỸL, YL) = argmin_θ d(fθ(X, A, YL), YL),    (1)
| 1. What is the focus of the paper regarding node classification in graph neural networks?
2. What are the strengths of the proposed approach, particularly in utilizing SVM?
3. What are the weaknesses of the paper, especially regarding experiment descriptions and comparisons with other works?
4. Do you have any concerns about the choice of SVM over GAT?
5. How does the reviewer assess the novelty and significance of the proposed method? | Review | Review
The paper proposes an algorithm combining an SVM and a GCN to solve the node classification problem in label-scarce scenarios. The proposed model uses a self-training mechanism to generate labels and features, and integrates the SVM to improve the confidence level of the labels.
Strong points: - It first uses relationship information among nodes for training an SVM. The algorithm based on the SVM-checking mechanism achieves remarkable improvement in prediction accuracy, especially when the label rate is quite small.
Weak points:
The description in the SVM classifier experiment section is not clear and detailed enough. How does it deal with different multi-class classification problems? Perhaps, it should verify whether updating the SVM with pseudo-labels can improve the performance of the model in the case of low labeling rates.
According to the analysis in the paper, GAT and SVM have similar performance. Why did the authors choose to use SVM as the checking part rather than GAT? Some experiments may be required to show the advantages of using SVM checking.
The proposed method should be compared with previous SOTA methods (e.g., the Union and Intersection methods proposed in "Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning" (2018)).
The proposed method is not innovative enough, because it just employs a different checking algorithm.
ICLR | Title
| 1. What is the focus of the paper regarding semi-supervised node classification on graphs?
2. What are the strengths and weaknesses of the proposed self-training algorithm based on GCN?
3. Do you have any questions regarding the presentation of the approach, specifically concerning inconsistent statements and lacking details?
4. How does the reviewer assess the novelty of the presented method?
5. What are some writing errors that need to be corrected in the paper? | Review | Review
This paper presents a self-training algorithm based on GCN to improve semi-supervised node classification on graphs. The key idea is to add new nodes with high confidence as supervision to enlarge the labeled node set. Although the experimental results show the proposed method outperforms or performs similarly to baseline methods, the paper has several weaknesses. First, the presented approach is not clearly introduced, with inconsistent statements on building the checking part and a lack of details on how the confidence used to add new nodes is calculated. Second, the novelty of the presented approach is limited, as adding unlabeled samples with high confidence is not a novel idea. Third, the paper writing should be improved, as there are errors.
In Table 2, GAT has better or similar performance compared to feat5+SVM. Why not use GAT feature aggregation for building the checking part? Also, on page 5, it says "At the beginning, we concatenate features from feat0 to feat10 and put them into a linear SVM to build the checking part." So it is confusing how the checking part is built. In Algorithm 1, feat1 to feat9 are concatenated in line 2. The statements are not consistent across the paper.
In line 6 of Algorithm 1 ("Train GCN model and get predictions and confidence matrix"), how is the confidence calculated?
There are writing errors to correct, such as "Due to its its excellent" and "confidence matix".
ICLR | Title
Human-AI Coordination via Human-Regularized Search and Learning
Abstract
We consider the problem of making AI agents that collaborate well with humans in partially observable, fully cooperative environments given datasets of human behavior. Inspired by piKL, a human-data-regularized search method that improves upon a behavioral cloning policy without diverging far away from it, we develop a three-step algorithm that achieves strong performance in coordinating with real humans in the Hanabi benchmark. We first use a regularized search algorithm and behavioral cloning to produce a better human model that captures diverse skill levels. Then, we integrate the policy regularization idea into reinforcement learning to train a human-like best response to the human model. Finally, we apply regularized search on top of the best response policy at test time to handle out-of-distribution challenges when playing with humans. We evaluate our method in two large scale experiments with humans. First, we show that our method outperforms experts when playing with a group of diverse human players in ad-hoc teams. Second, we show that our method beats a vanilla best response to behavioral cloning baseline by having experts play repeatedly with the two agents.
1 INTRODUCTION
One of the most fundamental goals of artificial intelligence research, especially multi-agent research, is to produce agents that can successfully collaborate with humans to achieve common goals. Although search and reinforcement learning (RL) from scratch without human knowledge have achieved impressive superhuman performance in competitive games (Silver et al., 2017; Brown & Sandholm, 2019), prior works (Hu et al., 2020; Carroll et al., 2019) have shown that agents produced by vanilla multi-agent reinforcement learning do not collaborate well with humans.
A canonical way to obtain agents that collaborate well with humans is to first use behavioral cloning (BC) (Bain & Sammut, 1996) to train a policy that mimics human behavior and then use RL to train a best response (BR policy) to the fixed BC policy. However, such an approach has a few issues. The BC policy is hardly a perfect representation of human play. It may struggle to mimic strong players’ performance without search (Jacob et al., 2022). The BC policy’s response to new conventions developed during BR training is also not well defined. Therefore, the BR policy may develop strategies that exploit those undefined behaviors, confusing humans and causing them to diverge from routine behaviors or even quit the task because they believe the partner is not sensible.
Recently, Jacob et al. (2022) introduced piKL, a search technique regularized towards BC policies learned from human data that can produce strong yet human-like policies. In some environments, the regularized search, with the proper amount of regularization, achieves better performance while maintaining or even improving its accuracy when predicting human actions. Inspired by piKL, we propose a three-step algorithm to create agents that can collaborate well with humans in complex partially observable environments. In the first step, we repeatedly apply imitation learning and piKL (piKL-IL) with multiple regularization coefficients to model human players of different skill levels. Secondly, we integrate the regularization idea with RL to train a human-like best response agent (piKL-BR) to the agents from step one. Thirdly and finally, at test time, we apply piKL on the trained best response agent to further improve performance. We call our method piKL3.
We test our method on the challenging benchmark Hanabi (Bard et al., 2020) through large-scale experiments with real human players. We first show that it outperforms human experts when partnering
with a group of unknown human players in an ad hoc setting without prior communication or warmup games. Players were recruited from a diverse player group and have different skill levels. We then evaluate piKL3 when partnered with expert human partners. We find that piKL3 outperforms an RL best response to a behavioral cloning policy (BR-BC) – a strong and established baseline for cooperative agents – in this setting.
2 RELATED WORK
The research on learning to collaborate with humans can be roughly categorized into two groups based on whether or not they rely on human data. With human data, the most straightforward method is behavioral cloning, which uses supervised learning to predict human moves and executes the move with the highest predicted probability. The datasets often contain sub-optimal decisions and mistakes made by humans, and behavioral cloning inevitably suffers from training on such data. A few methods from the imitation learning and offline RL communities have been proposed to address such issues. For example, conditioning the policy on a reward target (Kumar et al., 2019; Chen et al., 2021) can help guide the policy towards imitating the human behaviors that achieve the maximum future rewards at test time. Behavioral cloning with neural networks alone may struggle to model sufficiently strong humans, especially in complex games that require long-term planning (McIlroy-Young et al., 2020). Jacob et al. (2022) address this issue by regularizing search towards a behavioral cloning policy. The proposed method, piKL, not only improves the overall performance as most search methods do, but also achieves better accuracy when predicting human moves in a wide variety of games compared to the behavioral cloning policy on which it is based.
Human data can also be used in combination with reinforcement learning. Observationally Augmented Self-Play (OSP) (Lerer & Peysakhovich, 2019) augments the normal MARL training procedure with a behavioral cloning loss on a limited amount of data collected from a test-time agent to find an equilibrium policy that may work well with that agent. OSP increases the probability of learning conventions that are compatible with the test-time agents. However, it may not be able to model partners with diverse skill levels given a large aggregation of data from various players. We can also use RL to train a best response policy to the behavioral cloning policy (Carroll et al., 2019). This method is the optimal solution given a perfect human model. In practice, however, RL is prone to overfitting to the imperfections of the human model. In addition, RL alone may not be sufficient in practice to learn superhuman strategies (Silver et al., 2018; Brown & Sandholm, 2019).
A parallel research direction seeks to achieve better human-AI coordination without using any human data. Some of them take inspiration from human behavior or the human learning process. Hu et al. (2020) enforce the RL policies to not break the symmetries in the game arbitrarily, a common practice of human players in certain games. Inspired by humans’ learning and reasoning process, Off-belief learning (Hu et al., 2021c) and K-level reasoning (Costa-Gomes & Crawford, 2006; Cui et al., 2021b) train sequences of policies with increasing cognitive capabilities. Both methods achieve strong performance with a human proxy model trained with behavioral cloning. Another group of methods use population-based training and various diversity metrics (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) to first obtain a set of different policies and then train a common best response that may generalize better to human partners than best response to a single RL policy.
3 BACKGROUND
3.1 DEC-POMDP AND DEEP REINFORCEMENT LEARNING
We consider human-AI coordination in a decentralized partially observable Markov decision process (Dec-POMDP) (Nayyar et al., 2013). A Dec-POMDP consists of N agents indexed by (1, . . . , N), a state space S, a joint action space A = A_1 × · · · × A_N, a transition function T : S × A → S, a reward function r : S × A → R, and an observation function o^i = Ω^i(s), s ∈ S, for each agent i. We further assume that the joint actions a and rewards r are observable by all agents. We then define the trajectory of true states until time step t as τ_t = (s_0, a_0, r_0, . . . , s_t) and its partially observed counterpart (action-observation history, AOH) for agent i as τ^i_t = (o^i_0, a_0, r_0, . . . , o^i_t). An agent’s policy π^i(τ^i_t) = P(a^i_t | τ^i_t) maps each possible AOH to a distribution over the action space of that agent. We use π to denote the joint policy of all agents and π^{-i} to denote the joint policy of all other agents excluding agent i.
Deep multi-agent RL (MARL) has been successfully applied in many Dec-POMDP environments. Deep MARL algorithms often consist of a strong RL backbone such as (recurrent) DQN (Kapturowski et al., 2019) or PPO (Schulman et al., 2017) and additional modules such as a centralized value function (Yu et al., 2021) or value decomposition (Sunehag et al., 2017; Rashid et al., 2018) to handle challenges posed by having multiple agents. In this paper, we use recurrent DQN to train a best response to fixed policies. Specifically, a recurrent network is trained to model the expected total return for each action given the input AOH, Q(τ^i_t, a) = E_{τ ∼ P(τ_t | τ^i_t)} R(τ_t), where R(τ_t) = Σ_{t′≥t} γ^{t′−t} r_{t′} is the sum of discounted future rewards obtained by unrolling the joint policy π on the sampled true game trajectory until termination. The joint policy is the greedy policy derived from each agent’s Q-function.
3.2 SEARCH AND REGULARIZED SEARCH IN DEC-POMDP
Search has been critical to achieving superhuman performance in many games (Silver et al., 2018; Brown & Sandholm, 2019; Bakhtin et al., 2021). SPARTA (Lerer et al., 2020) and its faster and more generalized variant Learned Belief Search (Hu et al., 2021b) are competitive and efficient search algorithms in Dec-POMDPs. SPARTA assumes that a joint blueprint policy (BP) π has been agreed on beforehand. In single-agent SPARTA, one agent performs search at every time step assuming that their partners follow the BP. Specifically, the search agent i keeps track of the belief B(τ^i_t) = P(τ_t | τ^i_t, π^{-i}), which is the distribution over trajectories of states given the AOH and the partners’ policies. It picks the action a′ that returns the highest sum of undiscounted future rewards assuming a′ is executed at time t and everyone follows the joint BP afterwards, i.e.

a^i_t = argmax_a Q_π(τ^i_t, a) = argmax_a E_{τ_t ∼ B(τ^i_t)} [ r(τ_t, a) + R_π(τ_{t+1}) ],    (1)

where r(τ_t, a) is the reward at time t after executing a and R_π(τ_{t+1}) is the sum of future rewards following the joint policy π. This notation assumes a deterministic transition function, as the randomness can be absorbed into the belief function.
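To make Eq. 1 concrete, here is a minimal Python sketch of one single-agent search step. The simulator interface (env.clone, reset_to, step, terminal, current_player_obs) and the helpers sample_belief and blueprint_action are hypothetical stand-ins for whatever environment and blueprint policy are used, not the paper's actual implementation.

def sparta_step(env, aoh_i, legal_actions, sample_belief, blueprint_action, n_rollouts=100):
    # One single-agent search step (Eq. 1): estimate Q_pi(tau^i_t, a) for every
    # legal action by Monte Carlo rollouts over world states sampled from the
    # belief, assuming everyone follows the blueprint afterwards; return the argmax.
    q_estimates = {}
    for a in legal_actions:
        total = 0.0
        for _ in range(n_rollouts):
            sim = env.clone()                     # fresh copy of the simulator
            sim.reset_to(sample_belief(aoh_i))    # sample a full state consistent with the AOH
            ret = sim.step(a)                     # execute the candidate action, collect reward
            while not sim.terminal():             # unroll the joint blueprint to the end
                ret += sim.step(blueprint_action(sim.current_player_obs()))
            total += ret
        q_estimates[a] = total / n_rollouts
    return max(q_estimates, key=q_estimates.get)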
The belief tracking in SPARTA is computationally expensive and may have null support when partners deviate even slightly from the BP. Learned Belief Search (Hu et al., 2021a, LBS) mitigates those problems by using a neural network belief model B̂ trained with data generated by the BP with some exploration. For environments where the observation can be factorized into public and private parts, such as Hanabi, LBS also proposes a two-stream architecture in which one stream, an LSTM, takes the public information as input, while the other stream, consisting of only feed-forward layers, takes the private information. The outputs of the two streams are fused to compute Q-values. This special architecture further reduces the computation cost as it no longer needs to re-unroll the LSTM from the beginning of the game for each sampled τ_t.
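A hedged PyTorch sketch of this two-stream idea, assuming the observation has already been split into a public feature sequence and a private feature vector per time step; the layer sizes and the concatenation-based fusion are illustrative choices rather than the exact architecture used by LBS.

import torch
import torch.nn as nn

class PublicLSTMQNet(nn.Module):
    # Two-stream Q-network: an LSTM over the public observation stream and a
    # feed-forward encoder for the private observation, fused to produce Q-values.
    def __init__(self, pub_dim, priv_dim, hid_dim, num_actions):
        super().__init__()
        self.pub_lstm = nn.LSTM(pub_dim, hid_dim, batch_first=True)
        self.priv_mlp = nn.Sequential(nn.Linear(priv_dim, hid_dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                  nn.Linear(hid_dim, num_actions))

    def forward(self, pub_seq, priv_feat, hidden=None):
        # pub_seq: [batch, time, pub_dim]; priv_feat: [batch, time, priv_dim]
        pub_out, hidden = self.pub_lstm(pub_seq, hidden)   # public stream keeps recurrent state
        priv_out = self.priv_mlp(priv_feat)                # private stream is purely feed-forward
        q = self.head(torch.cat([pub_out, priv_out], dim=-1))
        return q, hidden

Because only the public stream is recurrent, resampling the private hand for search rollouts does not require re-unrolling the LSTM over the whole game.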
Although the learned belief technique was originally proposed to speed up SPARTA, it becomes critical in scenarios where the game trajectory does not exactly follow the assumed joint policy. For example, when playing with humans, humans’ real moves may differ from our model’s predictions and the learned belief model can often generalize well in those cases. In this paper we use LBS as the search component in piKL.
Jacob et al. (2022) show that the output policy of search algorithms can diverge quite far from the underlying blueprint policy used for rollouts and value estimation, which is undesirable in environments where collaborating with human partners is crucial. In general, they propose to sample actions following
P(a) ∝ π_anc(a | τ^i_t) · exp[ Q_{π_roll}(τ^i_t, a) / λ ],    (2)

where Q_{π_roll}(τ^i_t, a) is the value output of a search algorithm like SPARTA, Monte Carlo tree search, or regret matching using π_roll as the blueprint policy for rollouts and/or value estimation, π_anc(a | τ^i_t) is the anchor policy that we want our final policy to be close to, and λ is a hyper-parameter controlling the degree of regularization. In fully cooperative games where mixed strategies are not necessary, Jacob et al. (2022) show that a greedy variant that selects the argmax works better in practice.
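Once a search procedure has produced per-action value estimates, applying Eq. 2 (or its greedy variant) is only a few lines; the following NumPy sketch assumes q_values and anchor_probs are aligned arrays over the legal actions and adds a small probability floor as an implementation convenience.

import numpy as np

def pikl_action(q_values, anchor_probs, lam, greedy=False, eps=1e-10):
    # Combine search Q-value estimates with the anchor policy as in Eq. 2:
    # P(a) is proportional to pi_anc(a) * exp(Q(a) / lambda).
    logits = np.log(np.asarray(anchor_probs) + eps) + np.asarray(q_values) / lam
    if greedy:
        return int(np.argmax(logits))          # greedy variant used in fully cooperative games
    probs = np.exp(logits - logits.max())      # softmax with max-subtraction for stability
    probs = probs / probs.sum()
    return int(np.random.choice(len(probs), p=probs))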
4 METHOD
In this section, we introduce piKL3. We first use piKL-IL with a probability distribution over the regularization parameter λ to model human players with varying skill levels. Then, we use RL regularized toward the behavioral cloning policy to train a human-compatible best response, piKL-BR. Finally, we use piKL-LBS at test time with high regularization toward the BR to fix severe mistakes when playing with real humans.
4.1 PIKL-IL FOR MODELING DIFFERENT LEVELS OF HUMAN PLAY
Algorithm 1 piKL-IL: modeling humans with different skill levels. P(λ) can be a discrete uniform distribution over a set of values or over a set of Gaussian distributions centered around those values. piKL-LBS(λ_i, π_roll, B̂, π_anc) is a function that acts following Eq. 2 or its greedy variant. It samples from the learned approximate belief model, τ_t ∼ B̂(τ^i_t), to estimate Q_{π_roll}.
1:  procedure PIKL-IL(π_BC, P(λ), k, d)
        ▷ π_BC: behavioral cloning policy trained from human data
        ▷ P(λ): distribution of λ
        ▷ k: number of iterations
        ▷ d: size of the dataset
2:    π_piKL-IL ← π_BC
3:    for i ← 1, . . . , k do
4:      Train a belief model B̂ from self-play games of π_piKL-IL
5:      Initialize dataset D = ∅
6:      while len(D) < d do
7:        Sample λ_i ∼ P(λ) for every player independently
8:        Generate a game g where player i follows piKL-LBS(λ_i, π_piKL-IL, B̂, π_BC)
9:        Add the game g to dataset D
10:     end while
11:     Train a new policy π′ with behavior cloning on D
12:     π_piKL-IL ← π′
13:   end for
14:   return π_piKL-IL
15: end procedure
PiKL-IL is a search-augmented imitation learning method. It first trains an imitation policy πBC via behavioral cloning (Bain & Sammut, 1996) on a dataset collected from the population of humans we want to model. Then piKL-IL iteratively improves a policy πpiKL-IL, alternating between generating higher quality data with piKL-LBS and training a better model using the generated dataset to produce a new πpiKL-IL. Each iteration of piKL-LBS uses πBC as the anchor policy πanc and πpiKL-IL as the rollout policy πroll in Eq. 2, so that we always anchor the generated data to never differ too much from human play, while using the best rollout policy so far to generate the next. The pseudocode is in Algorithm 1.
piKL was shown to maintain the same or even higher prediction accuracy on human moves while achieving much higher performance with certain λs, indicating that it may be better at modeling stronger human players (Jacob et al., 2022) than behavioral cloning. As λ becomes smaller, the prediction accuracy drops while the performance keeps increasing, moving closer to the unregularized search policy. We therefore use a distribution of λs to generate a spectrum of policies with strength ranging from average human players to exceptional policies that still resemble human behaviors reasonably well. When training a new policy on the generated data, we can condition the policy on the λ so that we can explicitly control the distribution of different skill levels in the subsequent iterations as well as in the piKL-BR of Section 4.2.
Theoretically, we should apply multi-agent piKL-LBS but it is too computationally demanding to generate enough data for imitation learning. Instead we run single-agent piKL-LBS with learned beliefs independently for both players. Prior work (Jacob et al., 2022) shows that although running piKL-LBS independently for more than one player lacks theoretical guarantees because both players are unsoundly assuming that the other player is playing according to the pre-search policy when both
players in fact play according to the post-search policy, the algorithm still achieves high performance since all policies are regularized towards the same π_BC; thus the learned belief models are still, in practice, a good approximation of the true beliefs despite the shift in the underlying policies. For a theoretically sound version, we could alternatively run single-agent piKL-LBS on only one player and collect training data only from that player’s perspective, while fixing the other player to play the pre-search policy that the learned belief assumes they will.
Once we have collected enough data, we can train a new model with imitation learning and proceed to a new iteration. The process can be repeated until πpiKL-IL stops improving. Note that the anchor policy is always the same human behavior-cloned policy to prevent the final policy from drifting away from human conventions.
The algorithm is presented here in the iterative form. However, if computational resources permit, it can be formulated as an asynchronous RL algorithm similar to AlphaZero (Silver et al., 2018) where πpiKL-IL is constantly trained with data from a replay buffer while many parallel workers generate games with piKL-LBS and add them to the buffer.
4.2 PIKL-BR FOR A HUMAN-LIKE BEST RESPONSE
A popular approach to human-AI coordination is to train a best response to a human model. This BR training is similar to standard single-agent RL in POMDP settings as the partners are fixed policies and thus can be viewed as part of the environment. In practice, this method has a few issues due to the imperfection of the human model as well as the overfitting problem in RL.
Given a normally distributed dataset in which the majority of the humans have intermediate skill levels, the vanilla behavioral-cloning model often converges to an intermediate average score in self-play. Moreover, as observed in Jacob et al. (2022), BC often nontrivially underperforms even the average of the players it is trained on. This can make it hard for the BR agent to learn the true best response to stronger-than-average players, or even to average players. We can address this problem by training a BR πpiKL-BR against the final piKL-IL policy πpiKL-IL instead of the original human behavior cloning policy πBC.
Similar to how single-agent RL can overfit to its exact training environment, an RL best response may overfit to its fixed neural partner, including finding unusual or out-of-distribution actions that happen to cause its partner to perform slightly better actions but that might not generalize to actual human players. In this case, instead of greedily picking an action that has slightly higher return as in normal RL, it would be better to err on the side of what humans tend to do in order to remain in-distribution. To address these issues, we propose to add piKL regularization during BR training.
Specifically, we train a policy π_piKL-BR to be a best response to π_piKL-IL via Q-learning, but we modify the Q-learning update as

Q(τ^i_t, a_t) ← r_t(τ_t, a_t) + γ · Q(τ^i_{t+1}, a′_{t+1}),    (3)

where a′_{t+1} = argmax_a [ Q(τ^i_{t+1}, a) + λ · log π_BC(τ^i_{t+1}, a) ],    (4)

and where the exploration policy is ϵ-greedy over Q(τ^i_t, a) + λ · log π_BC(τ^i_t, a). The difference from normal Q-learning is the added λ · log π_BC regularization term in the action selection. At test time, ϵ is set to 0. The λ here can be set to a smaller value than that in piKL-IL because the main purpose is no longer modeling human moves but rather regularization and tie-breaking when multiple actions have small differences in expected return.
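A sketch of how Eqs. 3-4 and the matching exploration policy might look in code. Here q_net and bc_log_probs are hypothetical callables that return a 1-D tensor of per-action Q-values and log π_BC values for a given AOH; the separate target network in td_target is a common Q-learning convention, not something prescribed by the text.

import random
import torch

def regularized_argmax(q_net, bc_log_probs, aoh, lam):
    # a' = argmax_a [ Q(aoh, a) + lam * log pi_BC(aoh, a) ]   (Eq. 4)
    scores = q_net(aoh) + lam * bc_log_probs(aoh)
    return int(torch.argmax(scores))

def explore_action(q_net, bc_log_probs, aoh, lam, num_actions, epsilon):
    # Epsilon-greedy over the regularized scores; epsilon is set to 0 at test time.
    if random.random() < epsilon:
        return random.randrange(num_actions)
    return regularized_argmax(q_net, bc_log_probs, aoh, lam)

def td_target(reward, gamma, target_q_net, bc_log_probs, next_aoh, lam):
    # Bootstrapped target of Eq. 3, evaluating Q at the regularized argmax action.
    a_next = regularized_argmax(target_q_net, bc_log_probs, next_aoh, lam)
    return reward + gamma * target_q_net(next_aoh)[a_next]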
It is worth noting that if we run piKL-IL with the same small λ for many iterations, then the final πpiKL-IL with the small λ input converges to the same policy as piKL-BR. PiKL-BR is the amortized model-free version of piKL-IL and this step can be omitted if there are enough resources to run piKL-IL for enough iterations with additional λs.
4.3 PIKL-LBS FOR ROBUSTNESS AGAINST OOD ERRORS
The πpiKL-BR policy is a strong human-like policy that performs well with piKL-IL and humans who play similarly. When playing with a diverse group of human players in real life, however, it may suffer from out-of-distribution (OOD) errors when encountering trajectories that have low probability under training distributions. The actions produced by πpiKL-BR on OOD input sequences can be arbitrarily bad as the neural network has never been trained on such data.
However, search or other model-based planning algorithms can mitigate this problem by avoiding the most devastating mistakes, because it often takes only a few steps of simulated rollouts to directly observe the negative outcomes caused by those mistakes. Inspired by this observation, we run piKL-LBS at test time on top of π_piKL-BR. We use π_piKL-BR as both π_anc and π_roll in Eq. 2 when it is our turn, and assume the partner acts according to π_piKL-IL on their turn. The belief model is trained on data generated by cross-play between piKL-BR and piKL-IL. Since the main purpose of this step is to avoid catastrophic OOD errors, which are usually associated with substantially lower Q-values, we can set λ high so the search policy stays close to π_piKL-BR in situations where the Q-values do not substantially differ.
5 EXPERIMENTAL SETUP
We implement and test our method in the Hanabi benchmark environment (Bard et al., 2020). In this section, we introduce the game rules of Hanabi, as well as how we implement piKL3 and a best response to πBC (BR-BC) baseline.
Hanabi is a 2- to 5-player card game. In this paper we use the standard 2-player version. The deck consists of five color suits, and each suit has ten cards divided into five ranks: three 1s, two 2s, two 3s, two 4s and one 5. At the beginning, each player draws five cards from the shuffled deck. Players can see other players’ cards but not their own. On each turn, the active player can either hint a color or rank to another player, or play or discard a card. Hinting a color or rank informs the recipient which cards in their hand have that specific color/rank. Hinting costs an information token, and the team starts with eight tokens. The goal of the team is to play exactly one card of each rank 1 to 5 of each color suit, in increasing order of rank. The order of plays between different color suits does not matter. A successful play scores the team one point, while a failed play costs one life. If all three lives are lost, the team gets 0 for the game, losing all collected points. The maximum score is 25 points. The team regains a hint token when a card is discarded or when a suit is finished (playing all 5 of a suit successfully). The player draws a new card after a play or discard move. Once the deck is exhausted, the game terminates after each player makes one more final move.
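As a quick sanity check of the deck composition described above, a few lines of Python; the specific color names are those of the standard Hanabi set and are incidental to the paper.

COLORS = ["red", "yellow", "green", "white", "blue"]
RANK_COUNTS = {1: 3, 2: 2, 3: 2, 4: 2, 5: 1}   # copies of each rank within one suit

deck = [(color, rank)
        for color in COLORS
        for rank, copies in RANK_COUNTS.items()
        for _ in range(copies)]
assert len(deck) == 50   # 5 suits x 10 cards; maximum score is 5 x 5 = 25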
We acquire a dataset of roughly 240K 2-player Hanabi games from BoardGameArena (en.boardgamearena.com) to train the human policy π_h. The dataset contains all the games played in a certain period on that online platform, and we do not perform any filtering. The dataset is randomly split into a training set of 235K games, a validation set of 1K games and a test set of 4K games. The average score of games in the training set is 15.88. The policy π_θ is parameterized by a Public-LSTM neural network (see Hu et al. (2021b) or Section 3.2). The policy is trained to minimize the cross-entropy loss L(θ) = −E_{τ^i ∼ D} Σ_t log π_θ(a^i_t | τ^i_t). Note that it treats the AOH of each player, τ^i, as a separate data point for training. Similar to prior works (Hu et al., 2021c), we apply color shuffling (Hu et al., 2020) for data augmentation. Every time we sample τ^i ∼ D, we generate a random permutation f of the five colors, e.g. f : a b c d e → b d c a e, and apply f to both the input and the target of the training data. This model is trained with the Adam (Kingma & Ba, 2014) optimizer until the prediction accuracy on the validation set peaks. The converged π_h gets 19.72 ± 0.10 in self-play and 63.63% prediction accuracy on the test set. In evaluation, we take the argmax from the policy instead of sampling, which also explains why it achieves a higher average score than the training set.
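For concreteness, the imitation loss above amounts to a masked cross-entropy over each time step of each player's AOH; a hedged PyTorch sketch, where policy_net is a hypothetical sequence model returning per-step action logits and padding is handled with a validity mask.

import torch
import torch.nn.functional as F

def bc_loss(policy_net, obs_seq, actions, valid_mask):
    # Cross-entropy imitation loss over every step of a player's AOH.
    # obs_seq: [batch, time, obs_dim]; actions: [batch, time] action ids (long);
    # valid_mask: [batch, time] marking non-padding steps. Shapes are assumptions.
    logits = policy_net(obs_seq)                               # [batch, time, num_actions]
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return -(chosen * valid_mask).sum() / valid_mask.sum()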
PiKL-LBS requires a learned approximate belief model. In Hanabi, the belief model takes the same AOH τ^i as the policy and returns a distribution over player i’s own hand. The hand consists of 5
cards and we can predict them sequentially from the oldest to the newest based on the time they are drawn. The belief network ϕ consists of an LSTM encoder to encode the sequence of observations and an LSTM decoder for predicting cards autoregressively. Note that the belief is a function of the partner’s policy. The belief for a given policy π partnering with ρ is trained with the cross-entropy loss L(ϕ) = −E_{τ^π ∼ D(π,ρ)} Σ_t Σ_j log p_ϕ(c_j | τ^π_t, c_1, . . . , c_{j−1}), where c_j is the j-th card in hand to predict. D(π, ρ) is an infinite data stream generated by cross-play using π and ρ, and τ^π ∼ D means that we only use data from π’s perspective for training. In piKL-IL, we use π = ρ = π_roll.
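A hedged PyTorch sketch of this autoregressive hand-belief loss: an LSTM encoder summarizes the AOH, and an LSTM decoder predicts the five hand cards one at a time, each conditioned on the previous cards (teacher forcing during training). Module sizes, the start token, and the tensor layouts are illustrative assumptions, not the paper's exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HandBelief(nn.Module):
    def __init__(self, obs_dim, card_vocab, hid_dim, hand_size=5):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hid_dim, batch_first=True)
        self.card_emb = nn.Embedding(card_vocab + 1, hid_dim)   # +1 for a start token
        self.decoder = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, card_vocab)
        self.start_token = card_vocab

    def loss(self, obs_seq, target_hand):
        # obs_seq: [batch, time, obs_dim]; target_hand: [batch, hand_size] card ids.
        _, (h, c) = self.encoder(obs_seq)                        # summarize the AOH
        start = torch.full_like(target_hand[:, :1], self.start_token)
        dec_in = self.card_emb(torch.cat([start, target_hand[:, :-1]], dim=1))
        dec_out, _ = self.decoder(dec_in, (h, c))                # teacher forcing over cards
        logits = self.out(dec_out)                               # [batch, hand_size, card_vocab]
        return F.cross_entropy(logits.flatten(0, 1), target_hand.flatten())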
We set P(λ) to be a uniform mixture of truncated Gaussian distributions to model players of diverse skill levels. Specifically, we use Gaussian distributions N(µ, σ²) truncated at 0 and 2µ with (µ, σ) = (1, 1/4), (2, 2/4), (5, 5/4), (10, 10/4), and each Gaussian is sampled with equal probability. We generate d = 250K games in each iteration to train the new policy, and we find that one outer iteration (k = 1 in Algo. 1) is sufficient to achieve good performance in Hanabi. In every LBS step, we perform M = 10K Monte Carlo rollouts evenly distributed over the |A| legal actions. We sample M/|A| valid private hands from the belief model to reset the simulator for rollouts. Invalid sampled hands are rejected. With this setting, each game with 2 players running piKL-LBS independently takes roughly 5 minutes with 1 GPU, and we use 500 GPUs in parallel for 42 hours to generate the entire dataset. To better imitate policies under different λs, we feed the µ of the Gaussian distribution from which the λ is sampled to the policy network in the form of a one-hot vector concatenated with the input. The self-play performance of the piKL-IL model conditioned on different λ inputs is shown in the top row of Table 1. Clearly, piKL-IL performs significantly better than π_h, and the score increases as the regularization λ decreases.
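Sampling from the P(λ) described above reduces to choosing a component uniformly and rejecting out-of-range draws; a small sketch in which the rejection loop is one possible way to implement the truncation.

import random

LAMBDA_COMPONENTS = [(1.0, 0.25), (2.0, 0.5), (5.0, 1.25), (10.0, 2.5)]   # (mu, sigma) pairs

def sample_lambda():
    # Pick one Gaussian component uniformly, then draw from N(mu, sigma^2)
    # truncated to (0, 2*mu) by rejection; mu doubles as the skill label fed
    # to the policy as a one-hot input.
    mu, sigma = random.choice(LAMBDA_COMPONENTS)
    while True:
        lam = random.gauss(mu, sigma)
        if 0.0 < lam < 2.0 * mu:
            return mu, lam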
The BR is trained under a standard distributed RL setup where many parallel workers generate data with cross-play between the training policy and the fixed IL policy. The generated data is added to a prioritized replay buffer (Schaul et al., 2015), and the training loop samples mini-batches of games to update the policy with TD errors. We use the same Public-LSTM architecture for the BR policy as it will also be used in piKL-LBS at test time. The BR policy explores with ϵ-greedy, with ϵ sampled from a distribution at the start of every new game, while the IL policy does not explore but samples a new λ input from {1, 2, 5, 10} every game. The λ in Eq. 4 is set to 0.1. The cross-play performance between the converged piKL-BR and piKL-IL is shown in the bottom row of Table 1. As expected, piKL-BR is better at collaborating with piKL-IL than piKL-IL itself, and the gap shrinks as the regularization λ decreases. The reasons are that piKL-BR is trained with lower regularization and that RL can optimize for a multi-step best response while search can only optimize for one step.
We run piKL-LBS on the piKL-BR policy with high regularization λ = 2. The search assumes that our blueprint is π_bp and that our partner always follows π_IL, the final output policy of piKL-IL. To avoid predicting the λ input for the partner model π_IL, we replace it with an imitation learning policy π′_IL trained on the same dataset as π_IL but without the λ input. The belief model is trained the same way as in piKL-IL but with π = π_BR and ρ = π′_IL in D(π, ρ). The number of Monte Carlo rollouts per step is reduced to 5K to make it faster and suitable for real-world testing.
Finally, we train an unregularized (λ = 0) best response to the vanilla behavioral cloning policy π_BC as our baseline. This agent achieves 23.19 ± 0.03 in cross-play with π_BC at convergence. This score is quite high considering that its partner π_BC is much worse than the piKL-IL policy, indicating that the unregularized BR may be overfitting to the imperfect human model.
6 RESULTS
We carry out two large-scale experiments with real humans to evaluate piKL3. The first experiment focuses on ad-hoc team play with a diverse group of players without any prior communication (zero-shot). In the second experiment, we invite a group of expert players to play multiple games with piKL3 and the BR-BC baseline in alternating order to further measure the gap between them.
The experiments are hosted on our customized version of the open-sourced Hanab.Live platform2. The modified platform disables the chat, observe and replay functionalities. Additionally, participants cannot create games themselves nor invite others to form a team. All games are created automatically following the design of the experiments below. We send each player an instruction document for the platform in advance to familiarize them with the UI.
2 https://github.com/Hanabi-Live/hanabi-live
6.1 AD-HOC TEAM PLAY
In the first experiment, we recruited players with different skill levels from diverse sources. We posted invitations on the board game Reddit, the BoardGameArena forum, Twitter and Facebook Ads, as well as on two popular Discord channels where enthusiasts discuss conventions and organize tournaments. This group of players is referred to as testers. To encourage participation, a $40 or $80 gift card was sent to the testers who successfully completed the required games. Meanwhile, we recruited a group of expert human players from a well-known Discord group to study how well humans can do when playing with unknown partners. The experts have all played Hanabi for more than 500 hours. The experts were paid proportionally to the time they spent waiting for and playing with the testers. Each tester signed up for a 45-60 minute time slot. During their session, they were automatically matched with the BR-BC baseline agent, the piKL3 agent and one of the available experts in random order. Usernames of all players, including the AI agents, were randomly assigned to maintain the anonymity of the participants and to hide which players were the experts or the agents. Both AI agents sampled a sleep time t proportional to the entropy of softmax(Q) and waited for at least t seconds before sending the action. This further helped to hide the identity of the AI agents and to mitigate the potential side effect that piKL3 and BR-BC need different amounts of time to compute an action.
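The entropy-proportional delay mentioned above only takes a few lines; in this sketch the scaling constant and minimum delay are hypothetical choices, not the values used in the experiments.

import numpy as np

def sleep_time(q_values, scale=3.0, min_sleep=1.0):
    # Delay roughly proportional to how "uncertain" the decision is:
    # the entropy of softmax(Q) is small for clear-cut moves and large otherwise.
    z = np.asarray(q_values, dtype=np.float64)
    z = z - z.max()
    p = np.exp(z) / np.exp(z).sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return max(min_sleep, scale * entropy)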
The results are shown in Table 2. From the overall result in the first row, we see that both BR-BC and piKL3 outperformed human experts in this task, indicating that playing with a diverse range of players in the zero-shot ad-hoc setting is challenging for humans. Still, it is worth noting that the 2-player no-variant version of Hanabi is not a particularly popular variant within the advanced Hanabi community and many people mostly play with people they know so that they can discuss and perfect strategies after games.
We let testers self-identify as one of four skill levels, spanning from newcomers who have just learned the rules to experts who have played extensively. The results for each skill-level group, together with the number of players from each group, are shown in the table as well. Although the standard errors are large due to the small number of games within each group, we can see a trend that the AI agents generally worked better with non-expert players, while the human experts had a clear lead when collaborating with other experts. This is likely because experts’ behaviors are more predictable and the community has converged to a few well-known convention sets that are easy to identify for the experts who follow them closely in forums and discussion channels. It might also reflect the fact that our training data matches a more diverse pool of players with fewer experts in it, since those experts tend to play on sites other than BoardGameArena, which is our only source of training data.
The difference between BR-BC and piKL3 in this set of results is not significant given the standard errors. Unfortunately, it is challenging to collect more games in this setting due to both the difficulty of recruiting enough people with good intent and the logistical overhead of managing anonymous human-human matches.
We hypothesized that piKL3 might work better with experts thanks to its abilities to model stronger human players and to be more robust against OOD errors. We design a different experiment to verify this in the next section.
6.2 REPEATED GAMES WITH EXPERTS
In this experiment, we recruited a group of expert players to play with the two AI agents in alternating order for a maximum of 20 games. Each expert started randomly with one of the agents and switched to the other one after each game. They were aware of the matching rule and of which AI was in their current game. As an incentive to do well, the players were compensated with $0.6 for every point they got. A terminated game with all life tokens lost counted as 0 points.
We collected 111 games for the BR-BC agent and 113 games for the piKL3 agent. The numbers are slightly different for the two agents because some games terminated unexpectedly due to platform-related technical reasons. The average score and the percentage of perfect games for each agent are shown in Table 3. A perfect game is one where the team achieves the full score. Although the improvement may seem small numerically, the mechanics of Hanabi make it increasingly difficult to improve as the score gets closer to a perfect 25. In RL training, for example, learning a 20-point policy from scratch takes roughly the same amount of time and data as improving that policy to a 21-point one.
piKL3 outperformed BR-BC in terms of both average score and percentage of perfect games. Although the statistical significance of both results is somewhat limited given the amount of data (p = 0.097 and p = 0.058, respectively), both are strongly suggestive and consistent with our initial hypothesis. It is also encouraging to see that piKL3 can achieve more perfect games with the experts. Experienced human players are particularly excited about perfect games, as they are often significantly harder to obtain than a 23 or 24 in a given deck.
Due to the limitation on budget and the overhead of managing experiments with humans, we were unable to collect enough games to perform ablations for each component of piKL3, nor to include more baselines. In this experiment, we focused on demonstrating the effectiveness of this combination compared to the popular BR-BC baseline. In practice, researchers may use any combination of the three components of piKL3 based on the properties and challenges of their specific domains.
Most existing works targeting human-AI coordination in Hanabi (Hu et al., 2021c; Cui et al., 2021a) do not directly use human data. They get around 16 points with a similar but slightly stronger π_BC, while piKL3 and BR-BC get around 23. Therefore, despite being interesting, it would not be a fair comparison to include them. Previously, Hu et al. (2020) also tested their agent with human players. However, those results are not directly comparable as they use a different scoring method that keeps all the points when all life tokens are lost. Additionally, the population of the human testers has a profound impact on the numerical results.
7 CONCLUSION AND FUTURE WORK
We present piKL3, a three-step algorithm centered around the idea of regularizing search, imitation learning and reinforcement learning towards a policy learned from human data. We performed two large-scale experiments with human players in the Hanabi benchmark to demonstrate the effectiveness of this method and its superiority over the popular baseline of training an ordinary best response to a plain imitation policy. We also report, for the first time, human experts’ perhaps surprisingly low performance in zero-shot ad-hoc team play with a diverse population.
The main limitation of this method is that it requires large amounts of data to learn the initial human policy used as the anchor and blueprint in piKL search. Therefore, an interesting future direction is to extend this method to more domains, especially ones with less high-quality human data. Another direction would be to create personalized regularization that adapts to each individual player over repeated games in an attempt to better model them.
2. What are the strengths and weaknesses of the proposed method, particularly regarding its complexity and computational expense?
3. Do you have any concerns or suggestions regarding the evaluation and ablation studies in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific typos or errors in the paper that the reviewer noticed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper looks to improve on the piKL method for producing good policies for human-AI collaboration. The proposed method, piKL3, uses piKL in three ways to produce a final collaborative policy. The first part, piKL-IL, is an iterative imitation learning algorithm which takes as initial input a behavior-cloned policy based on human data. At each iteration, the algorithm generates data sampled from piKL agents based on the current policy with various values of λ. Then, a new policy is trained via behavior cloning on the generated data and the process continues. The idea is to produce a policy with similar advantages to piKL but that better approximates a range of human skill levels. The second part of the algorithm, piKL-BR, trains a best response to the output of piKL-IL with additional regularization that encourages the best response to be similar to the original behavior-cloned policy. Finally, the third part further improves the output of piKL-BR by using piKL again with the piKL-BR policy as both the anchor and rollout policy. Empirically, piKL3 seems to improve human-AI performance on Hanabi against a baseline of simply training a best response to the behavior-cloned policy.
Strengths And Weaknesses
The writing of the paper is quite good and the algorithm is described clearly. There are some nice algorithmic insights in the various pieces of the piKL3 algorithm. The empirical results in Table 3 also show a clear improvement over the baseline of BR-BC.
My main concern with the paper is that piKL3 is both complex and computationally expensive, and I'm not sure the limited evaluation justifies all the complexity. I understand that the human experiments are expensive to run, but it seems like there are a lot of other ways the authors could have done ablations or better evaluated whether all pieces of the algorithm are fully necessary. For instance, one could train a proxy agent using BC on some held-out data (possibly from a different population) not used for the algorithm. Ideally, there would be experiments showing what happens if you train piKL-BR directly on the BC policy rather than on the piKL-IL policy, or what happens if you don't add piKL on top of piKL-BR, etc. With the current evaluation, it's very hard to tell where the improvement over BR-BC is coming from—it could be a combination of all parts of the algorithm, or just one. The authors argue that "in practice, researchers may use any combination of the three components of piKL3 based on the properties and challenges of their specific domains." However, given how expensive some components are (most researchers in academia do not have access to 500 GPUs), it would be ideal if readers could get a better sense of which components are the most important and decide if the computational expense is worth it for each one.
Other specific issues:
In Table 1 I don't understand exactly how λ can be varied when evaluating piKL-IL self-play. Isn't the output of piKL-IL, as you describe, a single neural network policy which ends up being the average of piKL policies with various values of λ? How can the λ value of this be changed when evaluating it?
Where is footnote 3?
Typos:
Page 6: "whose input contains the mean of Gaussian distribution" -> "mean of Gaussian distributions"
Page 9: "all life token lost" -> "all life tokens lost"
Page 9: "the population of the human testers have a profound impact" -> "the population of the human testers has a profound impact"
Clarity, Quality, Novelty And Reproducibility
The clarity and quality of the writing and experiments that are done seems to be quite good. I do think the evaluation is not comprehensive enough. The ways the authors use piKL also seems to be novel. In general there are enough experimental details for reproducibility. |
ICLR | Title
Human-AI Coordination via Human-Regularized Search and Learning
Abstract
We consider the problem of making AI agents that collaborate well with humans in partially observable fully cooperative environments given datasets of human behavior. Inspired by piKL, a human-data-regularized search method that improves upon a behavioral cloning policy without diverging far away from it, we develop a three-step algorithm that achieve strong performance in coordinating with real humans in the Hanabi benchmark. We first use a regularized search algorithm and behavioral cloning to produce a better human model that captures diverse skill levels. Then, we integrate the policy regularization idea into reinforcement learning to train a human-like best response to the human model. Finally, we apply regularized search on top of the best response policy at test time to handle outof-distribution challenges when playing with humans. We evaluate our method in two large scale experiments with humans. First, we show that our method outperforms experts when playing with a group of diverse human players in ad-hoc teams. Second, we show that our method beats a vanilla best response to behavioral cloning baseline by having experts play repeatedly with the two agents.
1 INTRODUCTION
One of the most fundamental goals of artificial intelligence research, especially multi-agent research, is to produce agents that can successfully collaborate with humans to achieve common goals. Although search and reinforcement learning (RL) from scratch without human knowledge have achieved impressive superhuman performance in competitive games (Silver et al., 2017; Brown & Sandholm, 2019), prior works (Hu et al., 2020; Carroll et al., 2019) have shown that agents produced by vanilla multi-agent reinforcement learning do not collaborate well with humans.
A canonical way to obtain agents that collaborate well with humans is to first use behavioral cloning (BC) (Bain & Sammut, 1996) to train a policy that mimics human behavior and then use RL to train a best response (BR policy) to the fixed BC policy. However, such an approach has a few issues. The BC policy is hardly a perfect representation of human play. It may struggle to mimic strong players’ performance without search (Jacob et al., 2022). The BC policy’s response to new conventions developed during BR training is also not well defined. Therefore, the BR policy may develop strategies that exploit those undefined behaviors and confuse humans and causes humans to diverge from routine behaviors or even quit the task because they believe the partner is non-sensible.
Recently, Jacob et al. (2022) introduced piKL, a search technique regularized towards BC policies learned from human data that can produce strong yet human-like policies. In some environments, the regularized search, with the proper amount of regularization, achieves better performance while maintaining or even improving its accuracy when predicting human actions. Inspired by piKL, we propose a three-step algorithm to create agents that can collaborate well with humans in complex partially observable environments. In the first step, we repeatedly apply imitation learning and piKL (piKL-IL) with multiple regularization coefficients to model human players of different skill levels. Secondly, we integrate the regularization idea with RL to train a human-like best response agent (piKL-BR) to the agents from step one. Thirdly and finally, at test time, we apply piKL on the trained best response agent to further improve performance. We call our method piKL3.
We test our method on the challenging benchmark Hanabi (Bard et al., 2020) through large-scale experiments with real human players. We first show that it outperforms human experts when partnering
with a group of unknown human players in an ad hoc setting without prior communication or warmup games. Players were recruited from a diverse player group and have different skill levels. We then evaluate piKL3 when partnered with expert human partners. We find that piKL3 outperforms an RL best response to a behavioral cloning policy (BR-BC) – a strong and established baseline for cooperative agents – in this setting.
2 RELATED WORK
The research on learning to collaborate with humans can be roughly categorized into two groups based on whether or not they rely on human data. With human data, the most straightforward method is behavioral cloning, which uses supervised learning to predict human moves and executes the move with the highest predicted probability. The datasets often contain sub-optimal decisions and mistakes made by humans and behavioral cloning inevitably suffers by training on such data. A few methods from the imitation learning and offline RL community have been proposed to address such issues. For example, conditioning the policy on a reward target (Kumar et al., 2019; Chen et al., 2021) can help guide the policy towards imitating the human behaviors that achieve the maximum future rewards at test time. Behavioral cloning with neural networks alone may struggle to model sufficiently strong humans, especially in complex games that require long-term planning (McIlroyYoung et al., 2020). Jacob et al. (2022) address this issue by regularizing search towards a behavioral cloning policy. The proposed method, piKL, not only improves the overall performance as most search methods do, but also achieves better accuracy when predicting human moves in a wide variety of games compared to the behavioral cloning policy on which it is based.
Human data can also be used in combination with reinforcement learning. Observationally Augmented Self-Play (OSP) (Lerer & Peysakhovich, 2019) augments the normal MARL training procedure with a behavioral cloning loss on a limited amount of data collected from a test time agent to find an equilibrium policy that may work well with that agent. OSP increases the probability of learning conventions that are compatible with the test time agents. However it may not be able to model partners with diverse skill levels given a large aggregation of data from various players. We can also use RL to train a best response policy to the behavioral cloning policy (Carroll et al., 2019). This method is the optimal solution given a perfect human model. In practice, however, RL is prone to overfitting to the imperfections of the human model. In addition, RL alone may not be sufficient in practice to learn superhuman strategies (Silver et al., 2018; Brown & Sandholm, 2019).
A parallel research direction seeks to achieve better human-AI coordination without using any human data. Some of them take inspiration from human behavior or the human learning process. Hu et al. (2020) enforce the RL policies to not break the symmetries in the game arbitrarily, a common practice of human players in certain games. Inspired by humans’ learning and reasoning process, Off-belief learning (Hu et al., 2021c) and K-level reasoning (Costa-Gomes & Crawford, 2006; Cui et al., 2021b) train sequences of policies with increasing cognitive capabilities. Both methods achieve strong performance with a human proxy model trained with behavioral cloning. Another group of methods use population-based training and various diversity metrics (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) to first obtain a set of different policies and then train a common best response that may generalize better to human partners than best response to a single RL policy.
3 BACKGROUND
3.1 DEC-POMDP AND DEEP REINFORCEMENT LEARNING
We consider human-AI coordination in a decentralized partially observable Markov decision process (Dec-POMDP) (Nayyar et al., 2013). A Dec-POMDP consists of N agents indexed by (1, . . . , N), a state space S, a joint action space A = A1 × · · · × AN , a transition function T : S × A → S, a reward function r : S × A → R and a set of observation function oi = Ωi(s), s ∈ S for each agent i. We further assume that the joint actions a and rewards r are observable by all agents. We then define the trajectory of true states until time step t as τt = (s0, a0, r0, . . . , st) and its partially observed counterpart (action-observation history, AOH) for agent i as τ it = (o i 0, a0, r0, . . . , oit). An agent’s policy πi(τ it ) = P (a i t|τ it ) maps each possible AOH to a distribution over the action space of
that agent. We use π to denote the joint policy of all agents and π−i to denote the joint policy of all other agents excluding agent i.
Deep multi-agent RL (MARL) has been successfully applied in many Dec-POMDP environments. Deep MARL algorithms often consist of a strong RL backbone such as (recurrent) DQN (Kapturowski et al., 2019) or PPO (Schulman et al., 2017) and additional modules such as a centralized value function (Yu et al., 2021), value-decomposition (Sunehag et al., 2017; Rashid et al., 2018) to handle challenges posed by having multiple agents. In this paper, we use recurrent DQN to train a best response to fixed policies. Specifically, a recurrent network is trained to model the expected total return for each action given the input AOH, Q(τ it , a) = Eτ∼P (τt|τ it )R(τt) where R(τt) = ∑ t′≥t γ
(t′−t)rt is the sum of discounted future reward by unrolling the joint policy π on the sampled true game trajectory until termination. The joint policy is the greedy policy derived from each agent’s Q-function.
3.2 SEARCH AND REGULARIZED SEARCH IN DEC-POMDP
Search has been critical to achieve superhuman performance in many games (Silver et al., 2018; Brown & Sandholm, 2019; Bakhtin et al., 2021). SPARTA (Lerer et al., 2020) and its faster and more generalized variant Learned Belief Search (Hu et al., 2021b) are competitive and efficient search algorithms in Dec-POMDPs. SPARTA assumes that a joint blueprint policy (BP) π has been agreed on beforehand. In single-agent SPARTA, one agent performs search at every time step assuming that their partners follow the BP. Specifically, the search agent i keeps track of the belief B(τ it ) = P (τt|τ it ,π−1), which is the distribution of the trajectory of states given the AOH and partners’ policies. It picks the action a′ that returns the highest sum of undiscounted future rewards assuming a′ is executed at time t and everyone follows the joint BP afterwards, i.e.
ait = argmax a Qπ(τ i t , a) = argmax a Eτt∼B(τ it )[r(τt, a) +Rπ(τt+1)], (1)
where r(τt, a) is the reward at time t after executing a and Rπ(τt+1) is the sum of future rewards following joint policy π. This notation assumes a deterministic transition function as the randomness can be absorbed into the belief function.
The belief tracking in SPARTA is computationally expensive and may have null support when partners deviate even slightly from the BP. Learned Belief Search (Hu et al., 2021a, LBS) mitigates those problems by using a neural network belief model B̂ trained with data generated by the BP with some exploration. For environments where the observation can be factorized into public and private parts, such as Hanabi, LBS also proposes to use a two-stream architecture where one stream with LSTM takes the public information as input while the other stream consists of only feed-forward layers takes the private information. The outputs of the two streams are fused to compute Q-values. This special architecture further reduces the computation cost as it no longer needs to re-unroll the LSTM from the beginning of the game for each sampled τt.
Although the learned belief technique was originally proposed to speed up SPARTA, it becomes critical in scenarios where the game trajectory does not exactly follow the assumed joint policy. For example, when playing with humans, humans’ real moves may differ from our model’s predictions and the learned belief model can often generalize well in those cases. In this paper we use LBS as the search component in piKL.
Jacob et al. (2022) show that the output policy of search algorithms can diverge quite far from the underlying blueprint policy used for rollouts and value estimation, which is undesirable in environments where collaborating with human partners is crucial. In general, they propose to sample actions following
P (a) ∝ πanc(a|τ it ) · exp [ Qπroll(τ i t , a)
λ
] , (2)
where Qπroll(τ i t , a) is the value output of a search algorithm like SPARTA, Monte Carlo tree search or regret matching using πroll as the blueprint policy for rollouts and/or value estimation, πanc(a|τ it ) is the anchor policy that we want our final policy to be close to and λ is a hyper-parameter controlling the degree of regularization. In fully cooperative games where mixed strategies are not necessary, Jacob et al. (2022) show that a greedy variant that select the argmax works better in practice.
4 METHOD
In this section we introduce piKL3. We first use piKL-IL with a probability distribution over the regularization parameter λ to model human players with varying skill levels. Then, we use RL regularized toward the behavioral cloning policy to train a human-compatible best response piKLBR. Finally, we use piKL-LBS at test time with high regularization toward the BR to fix severe mistakes when playing with real humans.
4.1 PIKL-IL FOR MODELING DIFFERENT LEVELS OF HUMAN PLAY
Algorithm 1 piKL-IL: modeling human with different skill levels. P (λ) can be a discrete uniform distribution over a set of values or over a set of Gaussian distributions centered around those values. piKL-LBS(λi, πroll, B̂, πanc) is a function to act following Eq. 2 or its greedy variant. It samples from the learned approximate belief model τt ∼ B̂(τ it ) to estimate Qπroll .
1: procedure PIKL-IL(πBC , P (λ), k, d) ▷ πBC : behavioral cloning policy trained from human data; ▷ P (λ) : distribution of λ ; ▷ k: number of iterations ▷ d: size of the dataset 2: πpiKL-IL ← πBC 3: for i← 1, . . . , k do 4: Train a belief model B̂ from self-play games of πpiKL-IL 5: Initialize dataset D = ∅ 6: while len(D) < d do 7: Sample λi ∼ P (λ) for every player independently 8: Generate a game g where player i follows piKL-LBS(λi, πpiKL-IL, B̂, πBC) 9: Add the game g to dataset D
10: end while 11: Train a new policy π′ with behavior cloning on D 12: πpiKL-IL ← π′ 13: end for 14: return πpiKL-IL 15: end procedure
PiKL-IL is a search-augmented imitation learning method. It first trains an imitation policy πBC via behavioral cloning (Bain & Sammut, 1996) on a dataset collected from the population of humans we want to model. Then piKL-IL iteratively improves a policy πpiKL-IL, alternating between generating higher quality data with piKL-LBS and training a better model using the generated dataset to produce a new πpiKL-IL. Each iteration of piKL-LBS uses πBC as the anchor policy πanc and πpiKL-IL as the rollout policy πroll in Eq. 2, so that we always anchor the generated data to never differ too much from human play, while using the best rollout policy so far to generate the next. The pseudocode is in Algorithm 1.
piKL was shown to maintain the same or even higher prediction accuracy on human moves while achieving much higher performance with certain λs, indicating that it may be better at modeling stronger human players (Jacob et al., 2022) than behavioral cloning. As λ becomes smaller, the prediction accuracy drops while the performance keeps increasing, moving closer to the unregularized search policy. We therefore use a distribution of λs to generate a spectrum of policies with strength ranging from average human players to exceptional policies that still resemble human behaviors reasonably well. When training a new policy on the generated data, we can condition the policy on the λ so that we can explicitly control the distribution of different skill levels in the subsequent iterations as well as in the piKL-BR of Section 4.2.
Theoretically, we should apply multi-agent piKL-LBS but it is too computationally demanding to generate enough data for imitation learning. Instead we run single-agent piKL-LBS with learned beliefs independently for both players. Prior work (Jacob et al., 2022) shows that although running piKL-LBS independently for more than one player lacks theoretical guarantees because both players are unsoundly assuming that the other player is playing according to the pre-search policy when both
players in fact play according to the post-search policy, the algorithm still achieves high performance since all policies are regularized towards the same πBC , thus the learned belief models are still in practice a good approximation of the true beliefs despite the shift in the underlying policies. For a theoretically sound version, we could alternatively run single-agent piKL-LBS on only one player and only collect training data only from that player’s perspective while fixing the other player to only play the pre-search policy that the learned belief assumes they will.
Once we have collected enough data, we can train a new model with imitation learning and proceed to a new iteration. The process can be repeated until πpiKL-IL stops improving. Note that the anchor policy is always the same human behavior-cloned policy to prevent the final policy from drifting away from human conventions.
The algorithm is presented here in the iterative form. However, if computational resources permit, it can be formulated as an asynchronous RL algorithm similar to AlphaZero (Silver et al., 2018) where πpiKL-IL is constantly trained with data from a replay buffer while many parallel workers generate games with piKL-LBS and add them to the buffer.
4.2 PIKL-BR FOR A HUMAN-LIKE BEST RESPONSE
A popular approach to human-AI coordination is to train a best response to a human model. This BR training is similar to standard single-agent RL in POMDP settings as the partners are fixed policies and thus can be viewed as part of the environment. In practice, this method has a few issues due to the imperfection of the human model as well as the overfitting problem in RL.
Given a normally distributed dataset in which the majority of the humans have intermediate skill levels, the vanilla behavioral-cloning model often converges to an intermediate average score in self-play. Moreover, as observed in Jacob et al. (2022), BC often nontrivially underperforms even the average of the players it is trained on. This can make it hard for the BR agent to learn the true best response to stronger-than-average players, or even to average players. We can address this problem by training a BR πpiKL-BR against the final piKL-IL policy πpiKL-IL instead of the original human behavior cloning policy πBC.
Similar to how single-agent RL can overfit to its exact training environment, an RL best response may overfit to its fixed neural partner, including finding unusual or out-of-distribution actions that happen to cause its partner to perform slightly better actions but that might not generalize to actual human players. In this case, instead of greedily picking an action that has slightly higher return as in normal RL, it would be better to err on the side of what humans tend to do in order to remain in-distribution. To address these issues, we propose to add piKL regularization during BR training.
Specifically, we train a policy πpiKL-BR to be a best response to πpiKL-IL via Q-learning, but we modify the Q-learning update as
$Q(\tau^i_t, a_t) \leftarrow r_t(\tau_t, a_t) + \gamma \cdot Q(\tau^i_{t+1}, a'_{t+1})$,   (3)
where $a'_{t+1} = \arg\max_a \big[ Q(\tau^i_{t+1}, a) + \lambda \cdot \log \pi_{BC}(\tau^i_{t+1}, a) \big]$,   (4)
and where the exploration policy is ϵ-greedy over $Q(\tau^i_t, a) + \lambda \cdot \log \pi_{BC}(\tau^i_t, a)$. The added $\lambda \cdot \log \pi_{BC}$ term is what distinguishes this update from standard Q-learning. At test time, ϵ is set to 0. The λ here can be set to a smaller value than in piKL-IL because the main purpose is no longer modeling human moves but rather regularization and tie-breaking when multiple actions have small differences in expected return.
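As an illustration only, here is a hedged PyTorch sketch of how the modified update in Eqs. 3-4 and the regularized ϵ-greedy exploration could be written; the batch layout, network interfaces, and the discount value are assumptions, not the paper's training code.

import torch
import torch.nn.functional as F

def pikl_br_loss(q_net, target_q_net, bc_policy, batch, lam=0.1, gamma=0.999):
    # batch: AOH encodings at t and t+1, chosen actions, rewards, terminal flags.
    with torch.no_grad():
        q_next = target_q_net(batch["aoh_next"])                     # [B, num_actions]
        log_pi_bc = torch.log(bc_policy(batch["aoh_next"]) + 1e-12)  # [B, num_actions]
        # Eq. 4: select a'_{t+1} with the BC-regularized argmax ...
        a_next = torch.argmax(q_next + lam * log_pi_bc, dim=-1, keepdim=True)
        # ... Eq. 3: but bootstrap with the plain Q-value of that action.
        bootstrap = q_next.gather(-1, a_next).squeeze(-1)
        target = batch["reward"] + gamma * (1.0 - batch["terminal"]) * bootstrap
    q_pred = q_net(batch["aoh"]).gather(-1, batch["action"].unsqueeze(-1)).squeeze(-1)
    return F.mse_loss(q_pred, target)

def explore_action(q_values, bc_probs, lam, epsilon):
    # Exploration: eps-greedy over Q(tau, a) + lambda * log pi_BC(tau, a); at test time epsilon = 0.
    if torch.rand(()) < epsilon:
        return torch.randint(q_values.shape[-1], ()).item()
    return torch.argmax(q_values + lam * torch.log(bc_probs + 1e-12)).item()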
It is worth noting that if we run piKL-IL with the same small λ for many iterations, then the final πpiKL-IL with the small λ input converges to the same policy as piKL-BR. PiKL-BR is the amortized model-free version of piKL-IL and this step can be omitted if there are enough resources to run piKL-IL for enough iterations with additional λs.
4.3 PIKL-LBS FOR ROBUSTNESS AGAINST OOD ERRORS
The πpiKL-BR policy is a strong human-like policy that performs well with piKL-IL and humans who play similarly. When playing with a diverse group of human players in real life, however, it may suffer from out-of-distribution (OOD) errors when encountering trajectories that have low probability under training distributions. The actions produced by πpiKL-BR on OOD input sequences can be arbitrarily bad as the neural network has never been trained on such data.
However, search or other model-based planning algorithms can mitigate this problem by avoiding the most devastating mistakes, because it often takes only a few steps of simulated rollouts to directly observe the negative outcomes caused by those mistakes. Inspired by this observation, we run piKL-LBS at test time on top of πpiKL-BR. We use πpiKL-BR for both πanc and πroll in Eq. 2 when it is our turn, and assume the partner acts according to πpiKL-IL on their turn. The belief model is trained on data generated by cross-play between piKL-BR and piKL-IL. Since the main purpose of this step is to avoid catastrophic OOD errors, which are usually associated with substantially lower Q-values, we can set λ high so the search policy stays close to πpiKL-BR in situations where the Q-values do not substantially differ.
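Reusing the hypothetical pikl_lbs_step sketch from Section 4.1, the test-time search described here could look roughly as follows; the objects passed in remain placeholder assumptions.

def pikl3_test_time_action(aoh, legal_actions, pikl_br, belief, simulator, lam=2.0):
    # piKL-BR serves as both anchor and rollout policy; the partner is assumed to
    # follow piKL-IL, which is baked into the belief model and the simulator rollouts.
    return pikl_lbs_step(aoh, legal_actions, lam,
                         belief=belief, simulator=simulator,
                         rollout_policy=pikl_br, anchor_policy=pikl_br,
                         num_rollouts=5_000)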
5 EXPERIMENTAL SETUP
We implement and test our method in the Hanabi benchmark environment (Bard et al., 2020). In this section, we introduce the game rules of Hanabi, as well as how we implement piKL3 and a best response to πBC (BR-BC) baseline.
Hanabi is a 2-to-5-player card game. In this paper we use the standard 2-player version. The deck consists of five color suits, and each suit has ten cards divided into five ranks, with three 1s, two 2s, two 3s, two 4s and one 5. At the beginning, each player draws five cards from the shuffled deck. Players can see the other players' cards but not their own. On each turn, the active player can either hint a color or rank to another player, or play or discard a card. Hinting a color or rank informs the recipient which cards in their hand have that specific color/rank. Hinting costs an information token, and the team starts with eight tokens. The goal of the team is to play exactly one card of each rank 1 to 5 of each color suit, in increasing order of rank. The order of plays between different color suits does not matter. A successful play scores the team one point, while a failed play costs one life. If all three lives are lost, the team gets 0 in this game, losing all collected points. The maximum score is 25 points. The team regains a hint token when a card is discarded or when a suit is finished (playing all 5 cards of a suit successfully). The player draws a new card after a play or discard move. Once the deck is exhausted, the game terminates after each player makes one more final move.
We acquire a dataset of roughly 240K 2-player Hanabi games from BoardGameArena1 to train the human policy πh. The dataset contains all the games played in a certain period on that online platform, and we do not perform any filtering. The dataset is randomly split into a training set of 235K games, a validation set of 1K games and a test set of 4K games. The average score of games in the training set is 15.88. The policy πθ is parameterized by a Public-LSTM neural network (see Hu et al. (2021b) or Section 3.2). The policy is trained to minimize the cross-entropy loss $L(\theta) = -\mathbb{E}_{\tau^i \sim D} \sum_t \log \pi_\theta(a^i_t \mid \tau^i_t)$. Note that it treats the AOH of each player τ^i as a separate data point for training. Similar to prior works (Hu et al., 2021c), we apply color shuffling (Hu et al., 2020) for data augmentation. Every time we sample τ^i ∼ D, we generate a random permutation f of the five colors, e.g. f : a b c d e → b d c a e, and apply f to both the input and the target of the training data. This model is trained with the Adam (Kingma & Ba, 2014) optimizer until the prediction accuracy on the validation set peaks. The converged πh gets 19.72 ± 0.10 in self-play and 63.63% prediction accuracy on the test set. In evaluation, we take the argmax from the policy instead of sampling, which also explains why it achieves a higher average score than the training set.
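For illustration, a small Python sketch of the color-shuffling augmentation; the remap_obs and remap_action helpers that translate an encoding under a color permutation are hypothetical, since the real observation encoding is not shown here.

import random

COLORS = ["a", "b", "c", "d", "e"]

def sample_color_permutation():
    shuffled = COLORS[:]
    random.shuffle(shuffled)
    return dict(zip(COLORS, shuffled))   # e.g. {'a': 'b', 'b': 'd', 'c': 'c', ...}

def augment_trajectory(observations, target_actions, remap_obs, remap_action):
    # Apply one random color permutation to both the inputs and the targets of a
    # sampled trajectory, so the policy cannot rely on any fixed color identity.
    perm = sample_color_permutation()
    new_obs = [remap_obs(o, perm) for o in observations]
    new_actions = [remap_action(a, perm) for a in target_actions]
    return new_obs, new_actions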
PiKL-LBS requires a learned approximate belief model. In Hanabi, the belief model takes the same AOH τ^i as the policy and returns a distribution over player i's own hand. The hand consists of 5 cards, and we can predict them sequentially from the oldest to the newest based on the time they are drawn. The belief network ϕ consists of an LSTM encoder to encode the sequence of observations and an LSTM decoder for predicting cards autoregressively. Note that the belief is a function of the partner's policy. The belief for a given policy π partnering with ρ is trained with the cross-entropy loss $L(\phi) = -\mathbb{E}_{\tau^\pi \sim D(\pi,\rho)} \sum_t \sum_j \log p_\phi(c_j \mid \tau^\pi_t, c_1, \cdots, c_{j-1})$, where c_j is the j-th card in hand to predict. D(π, ρ) is an infinite data stream generated by cross-play using π and ρ, and τ^π ∼ D means that we only use data from π's perspective for training. In piKL-IL, we use π = ρ = πroll.
1 en.boardgamearena.com
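A rough PyTorch sketch of an encoder-decoder belief network of the kind described above; the hidden size, the card vocabulary of 25 (5 colors × 5 ranks), and the exact input features are assumptions for illustration.

import torch
import torch.nn as nn

class AutoregressiveBeliefNet(nn.Module):
    def __init__(self, obs_dim, card_vocab=25, hid=512):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hid, batch_first=True)      # encodes the AOH
        self.card_embed = nn.Embedding(card_vocab + 1, hid)         # +1 for a start token
        self.decoder = nn.LSTM(hid, hid, batch_first=True)          # predicts cards in order
        self.head = nn.Linear(hid, card_vocab)
        self.start_token = card_vocab

    def loss(self, obs_seq, target_cards):
        # obs_seq: [B, T, obs_dim]; target_cards: [B, hand_size] integer card ids,
        # ordered from the oldest drawn card to the newest.
        _, (h, c) = self.encoder(obs_seq)
        start = torch.full_like(target_cards[:, :1], self.start_token)
        dec_in = self.card_embed(torch.cat([start, target_cards[:, :-1]], dim=1))
        out, _ = self.decoder(dec_in, (h, c))
        logits = self.head(out)                                      # [B, hand_size, card_vocab]
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.shape[-1]), target_cards.reshape(-1))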
We set P(λ) to be a uniform mixture of truncated Gaussian distributions to model players of diverse skill levels. Specifically, we use Gaussian distributions N(µ, σ²) truncated at 0 and 2µ with (µ, σ) = (1, 1/4), (2, 2/4), (5, 5/4), (10, 10/4), and each Gaussian is sampled with equal probability. We generate d = 250K games in each iteration to train the new policy, and we find that one outer iteration (k = 1 in Algo. 1) is sufficient to achieve good performance in Hanabi. In every LBS step, we perform M = 10K Monte Carlo rollouts evenly distributed over the |A| legal actions. We sample M/|A| valid private hands from the belief model to reset the simulator for rollouts. Invalid sampled hands are rejected. With this setting, each game with 2 players running piKL-LBS independently takes roughly 5 minutes with 1 GPU, and we use 500 GPUs in parallel for 42 hours to generate the entire dataset. To better imitate policies under different λs, we feed the µ of the Gaussian distribution from which the λ is sampled to the policy network in the form of a one-hot vector concatenated with the input. The self-play performance of the piKL-IL model conditioned on different λ inputs is shown in the top row of Table 1. Clearly, piKL-IL performs significantly better than πh, and the score increases as the regularization λ decreases.
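A small Python sketch of how P(λ) and the conditioning input described above could be constructed; it is written from this paragraph alone, so the exact rejection scheme and encoding are assumptions.

import random

MIXTURE_MU = [1.0, 2.0, 5.0, 10.0]

def sample_lambda():
    # Uniform mixture of N(mu, (mu/4)^2), each truncated to the interval (0, 2*mu).
    mu = random.choice(MIXTURE_MU)
    sigma = mu / 4.0
    while True:
        lam = random.gauss(mu, sigma)
        if 0.0 < lam < 2.0 * mu:
            return mu, lam

def lambda_onehot(mu):
    # The policy is conditioned on which mixture component lambda was drawn from,
    # encoded as a one-hot vector concatenated with the regular input features.
    return [1.0 if mu == m else 0.0 for m in MIXTURE_MU]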
The BR is trained under a standard distributed RL setup where many parallel workers generate data with cross-play between the training policy and the fixed IL policy. The generated data is added to a prioritized replay buffer (Schaul et al., 2015), and the training loop samples mini-batches of games to update the policy with TD errors. We use the same Public-LSTM architecture for the BR policy, as it will also be used in piKL-LBS at test time. The BR policy explores with ϵ-greedy, with ϵ sampled from a distribution at the start of every game, while the IL policy does not explore but samples a new λ input from {1, 2, 5, 10} every game. The λ in Eq. 4 is set to 0.1. The cross-play performance between the converged piKL-BR and piKL-IL is shown in the bottom row of Table 1. As expected, piKL-BR is better at collaborating with piKL-IL than piKL-IL itself, and the gap shrinks as the regularization λ decreases. The reasons are that piKL-BR is trained with lower regularization and that RL can optimize a multi-step best response while search only optimizes one step.
We run piKL-LBS on the piKL-BR policy with high regularization λ = 2. The search uses πpiKL-BR as the blueprint and assumes that our partner always follows πIL, the final output policy of piKL-IL. To avoid having to predict the λ input for the partner model πIL, we replace it with an imitation learning policy π′IL trained on the same dataset as πIL but without the λ input. The belief model is trained the same way as in piKL-IL but with π = πBR and ρ = π′IL in D(π, ρ). The number of Monte Carlo rollouts per step is reduced to 5K to make it faster and suitable for real-world testing.
Finally, we train an unregularized (λ = 0) best response to the vanilla behavioral cloning policy πBC as our baseline. This agent achieves 23.19 ± 0.03 in cross-play with πBC at convergence. This score is quite high considering that its partner πBC is much worse than the piKL-IL policy, indicating that the unregularized BR may be overfitting to the imperfect human model.
6 RESULTS
We carry out two large-scale experiments with real humans to evaluate piKL3. The first experiment focuses on ad-hoc team play with a diverse group of players without any prior communication (zero-shot). In the second experiment, we invite a group of expert players to play multiple games with piKL3 and the BR-BC baseline in alternating order to further differentiate the gap between them.
The experiments are hosted on our customized version of the open-source Hanab.Live platform2. The modified platform disables the chat, observe and replay functionalities. Additionally, participants cannot create games themselves nor invite others to form a team. All games are created automatically following the design of the experiments below. We send each player an instruction document for the platform in advance to familiarize them with the UI.
2https://github.com/Hanabi-Live/hanabi-live
6.1 AD-HOC TEAM PLAY
In the first experiment, we recruited players with different skill levels from diverse sources. We posted invitations on the board game subreddit, the forum of BoardGameArena, Twitter and Facebook Ads, as well as on two popular Discord channels where enthusiasts discuss conventions and organize tournaments. This group of players is referred to as testers. A $40 or $80 gift card was sent to the testers who successfully completed the required games, to encourage participation. Meanwhile, we recruited a group of expert human players from a well-known Discord group to study how well humans can do when playing with unknown partners. The experts have all played Hanabi for more than 500 hours. The experts were paid proportional to the time they spent waiting for and playing with the testers. Each tester signed up for a 45-60 minute time slot. During their session, they were automatically matched with the BR-BC baseline agent, the piKL3 agent and one of the available experts in random order. Usernames of all players, including the AI agents, were randomly assigned to maintain the anonymity of the participants and to conceal which players were the experts or the agents. Both AI agents sampled a sleep time t proportional to the entropy of softmax(Q) and waited for at least t seconds before sending their action. This further helped to hide the identity of the AI agents and to mitigate the potential side effect that piKL3 and BR-BC need different amounts of time to compute an action.
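One possible way to compute the entropy-proportional delay mentioned above is sketched below; the minimum wait and scaling constant are invented for illustration.

import numpy as np

def action_delay_seconds(q_values, min_wait=1.0, scale=3.0):
    # A higher-entropy softmax(Q) means a less clear-cut decision, so the agent "thinks" longer.
    q = np.asarray(q_values, dtype=np.float64)
    p = np.exp(q - q.max())
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return min_wait + scale * entropy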
The results are shown in Table 2. From the overall result in the first row, we see that both BR-BC and piKL3 outperformed human experts in this task, indicating that playing with a diverse range of players in the zero-shot ad-hoc setting is challenging for humans. Still, it is worth noting that the 2-player no-variant version of Hanabi is not a particularly popular format within the advanced Hanabi community, and many people mostly play with people they know so that they can discuss and perfect strategies after games.
We let testers self-identify as one of four skill levels spanning from newcomer who has just learned the rules to expert who has played extensively. The results for each skill level group together with the number of players from each group are shown in the table as well. Although the standard errors are large due to the small number of games within each group, we can see a trend that the AI agents generally worked better with non-expert players, while experts have a clear lead when collaborating with other experts. This is likely because experts’ behaviors are more predictable and the community has converged to a few well-known convention sets that are easy to identify for the experts who follow them closely in forums and discussion channels. It might also reflect the fact that our training data matches a more diverse pool of players with fewer experts in it, since those experts tend to play on sites other than BoardGameArena, which is our only source of training data.
The difference between BR-BC and piKL3 in this set of results is not significant given the standard errors. Unfortunately, it is challenging to collect more games in this setting due to both the difficulty of recruiting enough people with good intent and the logistical overhead of managing anonymous human-human matches.
We hypothesized that piKL3 might work better with experts thanks to its abilities to model stronger human players and to be more robust against OOD errors. We design a different experiment to verify this in the next section.
6.2 REPEATED GAMES WITH EXPERTS
In this experiment, we recruited a group of expert players to play with the two AI agents in alternating order for a maximum of 20 games. Each expert started randomly with one of the agents and switched to the other one after each game. They were aware of the matching rule and of which AI was in their current game. As an incentive to do well, the players were compensated with $0.6 for every point they scored. A terminated game with all life tokens lost counted as 0 points.
We collected 111 games for the BR-BC agent and 113 games for the piKL3 agent. The numbers are slightly different for the two agents because some games terminated unexpectedly due to platform-related technical reasons. The average score and the percentage of perfect games for each agent are shown in Table 3. A perfect game is one where the team achieves the full score. Although the improvement may seem small numerically, the mechanics of Hanabi make it increasingly difficult to improve as the score gets closer to a perfect 25. In RL training, for example, learning a 20-point policy from scratch takes roughly the same amount of time and data as improving that policy to a 21-point one.
piKL3 outperformed BR-BC in terms of both average score and percentage of perfect games. Although the statistical significance of both results is somewhat limited given the amount of data (p = 0.097 and p = 0.058, respectively), both are strongly suggestive and consistent with our initial hypothesis. It is also encouraging to see that piKL3 can achieve more perfect games with the experts. Experienced human players are particularly excited about perfect games, as they are often significantly harder than scoring 23 or 24 on a given deck.
Due to budget limitations and the overhead of managing experiments with humans, we were unable to collect enough games to perform ablations for each component of piKL3, nor to include more baselines. In this experiment, we focused on demonstrating the effectiveness of this combination compared to the popular BR-BC baseline. In practice, researchers may use any combination of the three components of piKL3 based on the properties and challenges of their specific domains.
Most existing works targeting human-AI coordination in Hanabi (Hu et al., 2021c; Cui et al., 2021a) do not directly use human data. They get around 16 points with a similar but slightly stronger πBC, while piKL3 and BR-BC get around 23. Therefore, despite being interesting, it would not be a fair comparison to include them. Previously, Hu et al. (2020) also tested their agent with human players. However, those results are not directly comparable as they use a different scoring method that keeps all the points when losing all life tokens. Additionally, the population of the human testers has a profound impact on the numerical results.
7 CONCLUSION AND FUTURE WORK
We present piKL3, a three-step algorithm centered around the idea of regularizing search, imitation learning and reinforcement learning towards a policy learned from human data. We performed two large-scale experiments with human players in the Hanabi benchmark to demonstrate the effectiveness of this method and its superiority over the popular baseline of training an ordinary best response to a plain imitation policy. We also report, for the first time, human experts' perhaps surprisingly low performance on zero-shot ad-hoc team play with a diverse population.
The main limitation of this method is that it requires large amounts of data to learn the initial human policy used as anchor and blueprint in piKL search. Therefore, an interesting future direction is to extend this method onto more domains, especially ones with less high quality human data. Another direction would be to create personalized regularization that adapts to each individual player in repeated games in an attempt to better model them. | 1. What is the focus and contribution of the paper regarding multi-agent human-AI collaboration?
2. What are the strengths and weaknesses of the proposed piKL3 method, especially compared to baseline methods?
3. Do you have any concerns or confusion regarding the methodology and its contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers the multi-agent human-AI collaboration setting, and the challenge of coordinating with humans. Specifically focusing on the Hanabi benchmark task, the authors propose the piKL3 method which has the following components: (1) a human model that captures diverse skill levels controlled via a \lambda regularization term, (2) training a human-like best response model, and (3) further applying piKL on the trained best-response model. They evaluate their method on Hanabi, showing marginal improvement over a BC baseline.
Strengths And Weaknesses
While I believe multi-agent human coordination is a challenging and interesting problem, I overall found the paper quite difficult to follow. My concerns include:
The gains of the proposed piKL3 method over baselines are not that strong at all, and it is only evaluated on one task (Hanabi)
The methodological contribution largely seems to be via a regularization term, and applying piKL to multiple stages of the multi-agent task, which seems quite limited in novelty to me.
Minor: \lambda controls the regularization of unconstrained search towards human data, which is a bit different than different human "skill levels", as the spectrum is more about the degree of human-like behavior, making the overall point of different levels a bit confusing
Clarity, Quality, Novelty And Reproducibility
I found the paper quite unclear to follow, with the Experiment details section particularly confusing. I was also confused about the difference between the "2nd" and "3rd" stages of piKL3. There are too many acronyms/sub-methods (piKL-IL, piKL-LBS) that make the overall prose quite challenging to follow. I would encourage the authors to add more subsections and outlining to the experiments section.
ICLR | Title
Human-AI Coordination via Human-Regularized Search and Learning
Abstract
We consider the problem of making AI agents that collaborate well with humans in partially observable, fully cooperative environments given datasets of human behavior. Inspired by piKL, a human-data-regularized search method that improves upon a behavioral cloning policy without diverging far away from it, we develop a three-step algorithm that achieves strong performance in coordinating with real humans in the Hanabi benchmark. We first use a regularized search algorithm and behavioral cloning to produce a better human model that captures diverse skill levels. Then, we integrate the policy regularization idea into reinforcement learning to train a human-like best response to the human model. Finally, we apply regularized search on top of the best response policy at test time to handle out-of-distribution challenges when playing with humans. We evaluate our method in two large-scale experiments with humans. First, we show that our method outperforms experts when playing with a group of diverse human players in ad-hoc teams. Second, we show that our method beats a vanilla best-response-to-behavioral-cloning baseline by having experts play repeatedly with the two agents.
1 INTRODUCTION
One of the most fundamental goals of artificial intelligence research, especially multi-agent research, is to produce agents that can successfully collaborate with humans to achieve common goals. Although search and reinforcement learning (RL) from scratch without human knowledge have achieved impressive superhuman performance in competitive games (Silver et al., 2017; Brown & Sandholm, 2019), prior works (Hu et al., 2020; Carroll et al., 2019) have shown that agents produced by vanilla multi-agent reinforcement learning do not collaborate well with humans.
A canonical way to obtain agents that collaborate well with humans is to first use behavioral cloning (BC) (Bain & Sammut, 1996) to train a policy that mimics human behavior and then use RL to train a best response (BR) policy to the fixed BC policy. However, such an approach has a few issues. The BC policy is hardly a perfect representation of human play. It may struggle to mimic strong players' performance without search (Jacob et al., 2022). The BC policy's response to new conventions developed during BR training is also not well defined. Therefore, the BR policy may develop strategies that exploit those undefined behaviors and confuse humans, causing humans to diverge from routine behaviors or even quit the task because they believe the partner is not sensible.
Recently, Jacob et al. (2022) introduced piKL, a search technique regularized towards BC policies learned from human data that can produce strong yet human-like policies. In some environments, the regularized search, with the proper amount of regularization, achieves better performance while maintaining or even improving its accuracy when predicting human actions. Inspired by piKL, we propose a three-step algorithm to create agents that can collaborate well with humans in complex partially observable environments. In the first step, we repeatedly apply imitation learning and piKL (piKL-IL) with multiple regularization coefficients to model human players of different skill levels. Secondly, we integrate the regularization idea with RL to train a human-like best response agent (piKL-BR) to the agents from step one. Thirdly and finally, at test time, we apply piKL on the trained best response agent to further improve performance. We call our method piKL3.
We test our method on the challenging benchmark Hanabi (Bard et al., 2020) through large-scale experiments with real human players. We first show that it outperforms human experts when partnering
with a group of unknown human players in an ad hoc setting without prior communication or warmup games. Players were recruited from a diverse player group and have different skill levels. We then evaluate piKL3 when partnered with expert human partners. We find that piKL3 outperforms an RL best response to a behavioral cloning policy (BR-BC) – a strong and established baseline for cooperative agents – in this setting.
2 RELATED WORK
The research on learning to collaborate with humans can be roughly categorized into two groups based on whether or not they rely on human data. With human data, the most straightforward method is behavioral cloning, which uses supervised learning to predict human moves and executes the move with the highest predicted probability. The datasets often contain sub-optimal decisions and mistakes made by humans, and behavioral cloning inevitably suffers from training on such data. A few methods from the imitation learning and offline RL community have been proposed to address such issues. For example, conditioning the policy on a reward target (Kumar et al., 2019; Chen et al., 2021) can help guide the policy towards imitating the human behaviors that achieve the maximum future rewards at test time. Behavioral cloning with neural networks alone may struggle to model sufficiently strong humans, especially in complex games that require long-term planning (McIlroy-Young et al., 2020). Jacob et al. (2022) address this issue by regularizing search towards a behavioral cloning policy. The proposed method, piKL, not only improves the overall performance as most search methods do, but also achieves better accuracy when predicting human moves in a wide variety of games compared to the behavioral cloning policy on which it is based.
Human data can also be used in combination with reinforcement learning. Observationally Augmented Self-Play (OSP) (Lerer & Peysakhovich, 2019) augments the normal MARL training procedure with a behavioral cloning loss on a limited amount of data collected from a test time agent to find an equilibrium policy that may work well with that agent. OSP increases the probability of learning conventions that are compatible with the test time agents. However it may not be able to model partners with diverse skill levels given a large aggregation of data from various players. We can also use RL to train a best response policy to the behavioral cloning policy (Carroll et al., 2019). This method is the optimal solution given a perfect human model. In practice, however, RL is prone to overfitting to the imperfections of the human model. In addition, RL alone may not be sufficient in practice to learn superhuman strategies (Silver et al., 2018; Brown & Sandholm, 2019).
A parallel research direction seeks to achieve better human-AI coordination without using any human data. Some of them take inspiration from human behavior or the human learning process. Hu et al. (2020) enforce the RL policies to not break the symmetries in the game arbitrarily, a common practice of human players in certain games. Inspired by humans’ learning and reasoning process, Off-belief learning (Hu et al., 2021c) and K-level reasoning (Costa-Gomes & Crawford, 2006; Cui et al., 2021b) train sequences of policies with increasing cognitive capabilities. Both methods achieve strong performance with a human proxy model trained with behavioral cloning. Another group of methods use population-based training and various diversity metrics (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) to first obtain a set of different policies and then train a common best response that may generalize better to human partners than best response to a single RL policy.
3 BACKGROUND
3.1 DEC-POMDP AND DEEP REINFORCEMENT LEARNING
We consider human-AI coordination in a decentralized partially observable Markov decision process (Dec-POMDP) (Nayyar et al., 2013). A Dec-POMDP consists of N agents indexed by (1, . . . , N), a state space S, a joint action space $A = A^1 \times \cdots \times A^N$, a transition function $T : S \times A \to S$, a reward function $r : S \times A \to \mathbb{R}$, and a set of observation functions $o^i = \Omega^i(s), s \in S$ for each agent i. We further assume that the joint actions a and rewards r are observable by all agents. We then define the trajectory of true states until time step t as $\tau_t = (s_0, a_0, r_0, \ldots, s_t)$ and its partially observed counterpart (action-observation history, AOH) for agent i as $\tau^i_t = (o^i_0, a_0, r_0, \ldots, o^i_t)$. An agent's policy $\pi^i(\tau^i_t) = P(a^i_t \mid \tau^i_t)$ maps each possible AOH to a distribution over the action space of that agent. We use $\pi$ to denote the joint policy of all agents and $\pi^{-i}$ to denote the joint policy of all other agents excluding agent i.
Deep multi-agent RL (MARL) has been successfully applied in many Dec-POMDP environments. Deep MARL algorithms often consist of a strong RL backbone such as (recurrent) DQN (Kapturowski et al., 2019) or PPO (Schulman et al., 2017) and additional modules, such as a centralized value function (Yu et al., 2021) or value decomposition (Sunehag et al., 2017; Rashid et al., 2018), to handle the challenges posed by having multiple agents. In this paper, we use recurrent DQN to train a best response to fixed policies. Specifically, a recurrent network is trained to model the expected total return for each action given the input AOH, $Q(\tau^i_t, a) = \mathbb{E}_{\tau_t \sim P(\tau_t \mid \tau^i_t)} R(\tau_t)$, where $R(\tau_t) = \sum_{t' \ge t} \gamma^{(t'-t)} r_{t'}$ is the sum of discounted future rewards obtained by unrolling the joint policy π on the sampled true game trajectory until termination. The joint policy is the greedy policy derived from each agent's Q-function.
3.2 SEARCH AND REGULARIZED SEARCH IN DEC-POMDP
Search has been critical to achieving superhuman performance in many games (Silver et al., 2018; Brown & Sandholm, 2019; Bakhtin et al., 2021). SPARTA (Lerer et al., 2020) and its faster and more generalized variant Learned Belief Search (Hu et al., 2021b) are competitive and efficient search algorithms in Dec-POMDPs. SPARTA assumes that a joint blueprint policy (BP) π has been agreed on beforehand. In single-agent SPARTA, one agent performs search at every time step assuming that its partners follow the BP. Specifically, the search agent i keeps track of the belief $B(\tau^i_t) = P(\tau_t \mid \tau^i_t, \pi^{-i})$, which is the distribution over trajectories of states given the AOH and the partners' policies. It picks the action a′ that returns the highest sum of undiscounted future rewards assuming a′ is executed at time t and everyone follows the joint BP afterwards, i.e.
$a^i_t = \arg\max_a Q_\pi(\tau^i_t, a) = \arg\max_a \mathbb{E}_{\tau_t \sim B(\tau^i_t)}\big[ r(\tau_t, a) + R_\pi(\tau_{t+1}) \big]$,   (1)
where $r(\tau_t, a)$ is the reward at time t after executing a and $R_\pi(\tau_{t+1})$ is the sum of future rewards following the joint policy π. This notation assumes a deterministic transition function, as the randomness can be absorbed into the belief function.
The belief tracking in SPARTA is computationally expensive and may have null support when partners deviate even slightly from the BP. Learned Belief Search (LBS) (Hu et al., 2021a) mitigates those problems by using a neural network belief model B̂ trained with data generated by the BP with some exploration. For environments where the observation can be factorized into public and private parts, such as Hanabi, LBS also proposes a two-stream architecture in which one stream, with an LSTM, takes the public information as input, while the other stream, consisting of only feed-forward layers, takes the private information. The outputs of the two streams are fused to compute Q-values. This special architecture further reduces the computation cost, as it no longer needs to re-unroll the LSTM from the beginning of the game for each sampled τt.
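To illustrate the two-stream idea, here is a rough PyTorch sketch of a Public-LSTM-style Q-network; the layer sizes and the multiplicative fusion are assumptions and do not reproduce the exact architecture of Hu et al. (2021b).

import torch
import torch.nn as nn

class PublicLSTMQNet(nn.Module):
    def __init__(self, pub_dim, priv_dim, num_actions, hid=512):
        super().__init__()
        self.pub_lstm = nn.LSTM(pub_dim, hid, batch_first=True)              # public stream
        self.priv_mlp = nn.Sequential(nn.Linear(priv_dim, hid), nn.ReLU())   # private stream
        self.q_head = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, num_actions))

    def forward(self, pub_seq, priv_seq, state=None):
        # pub_seq: [B, T, pub_dim] public observations; priv_seq: [B, T, priv_dim] private features.
        pub_out, state = self.pub_lstm(pub_seq, state)
        fused = pub_out * self.priv_mlp(priv_seq)   # only the private stream changes per sampled hand,
        return self.q_head(fused), state            # so the LSTM need not be re-unrolled during search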
Although the learned belief technique was originally proposed to speed up SPARTA, it becomes critical in scenarios where the game trajectory does not exactly follow the assumed joint policy. For example, when playing with humans, humans’ real moves may differ from our model’s predictions and the learned belief model can often generalize well in those cases. In this paper we use LBS as the search component in piKL.
Jacob et al. (2022) show that the output policy of search algorithms can diverge quite far from the underlying blueprint policy used for rollouts and value estimation, which is undesirable in environments where collaborating with human partners is crucial. In general, they propose to sample actions following
P (a) ∝ πanc(a|τ it ) · exp [ Qπroll(τ i t , a)
λ
] , (2)
where Qπroll(τ i t , a) is the value output of a search algorithm like SPARTA, Monte Carlo tree search or regret matching using πroll as the blueprint policy for rollouts and/or value estimation, πanc(a|τ it ) is the anchor policy that we want our final policy to be close to and λ is a hyper-parameter controlling the degree of regularization. In fully cooperative games where mixed strategies are not necessary, Jacob et al. (2022) show that a greedy variant that select the argmax works better in practice.
4 METHOD
In this section we introduce piKL3. We first use piKL-IL with a probability distribution over the regularization parameter λ to model human players with varying skill levels. Then, we use RL regularized toward the behavioral cloning policy to train a human-compatible best response, piKL-BR. Finally, we use piKL-LBS at test time with high regularization toward the BR to fix severe mistakes when playing with real humans.
4.1 PIKL-IL FOR MODELING DIFFERENT LEVELS OF HUMAN PLAY
Algorithm 1 piKL-IL: modeling human with different skill levels. P (λ) can be a discrete uniform distribution over a set of values or over a set of Gaussian distributions centered around those values. piKL-LBS(λi, πroll, B̂, πanc) is a function to act following Eq. 2 or its greedy variant. It samples from the learned approximate belief model τt ∼ B̂(τ it ) to estimate Qπroll .
1: procedure PIKL-IL(πBC , P (λ), k, d) ▷ πBC : behavioral cloning policy trained from human data; ▷ P (λ) : distribution of λ ; ▷ k: number of iterations ▷ d: size of the dataset 2: πpiKL-IL ← πBC 3: for i← 1, . . . , k do 4: Train a belief model B̂ from self-play games of πpiKL-IL 5: Initialize dataset D = ∅ 6: while len(D) < d do 7: Sample λi ∼ P (λ) for every player independently 8: Generate a game g where player i follows piKL-LBS(λi, πpiKL-IL, B̂, πBC) 9: Add the game g to dataset D
10: end while 11: Train a new policy π′ with behavior cloning on D 12: πpiKL-IL ← π′ 13: end for 14: return πpiKL-IL 15: end procedure
PiKL-IL is a search-augmented imitation learning method. It first trains an imitation policy πBC via behavioral cloning (Bain & Sammut, 1996) on a dataset collected from the population of humans we want to model. Then piKL-IL iteratively improves a policy πpiKL-IL, alternating between generating higher quality data with piKL-LBS and training a better model using the generated dataset to produce a new πpiKL-IL. Each iteration of piKL-LBS uses πBC as the anchor policy πanc and πpiKL-IL as the rollout policy πroll in Eq. 2, so that we always anchor the generated data to never differ too much from human play, while using the best rollout policy so far to generate the next. The pseudocode is in Algorithm 1.
piKL was shown to maintain the same or even higher prediction accuracy on human moves while achieving much higher performance with certain λs, indicating that it may be better at modeling stronger human players (Jacob et al., 2022) than behavioral cloning. As λ becomes smaller, the prediction accuracy drops while the performance keeps increasing, moving closer to the unregularized search policy. We therefore use a distribution of λs to generate a spectrum of policies with strength ranging from average human players to exceptional policies that still resemble human behaviors reasonably well. When training a new policy on the generated data, we can condition the policy on the λ so that we can explicitly control the distribution of different skill levels in the subsequent iterations as well as in the piKL-BR of Section 4.2.
Theoretically, we should apply multi-agent piKL-LBS but it is too computationally demanding to generate enough data for imitation learning. Instead we run single-agent piKL-LBS with learned beliefs independently for both players. Prior work (Jacob et al., 2022) shows that although running piKL-LBS independently for more than one player lacks theoretical guarantees because both players are unsoundly assuming that the other player is playing according to the pre-search policy when both
players in fact play according to the post-search policy, the algorithm still achieves high performance since all policies are regularized towards the same πBC , thus the learned belief models are still in practice a good approximation of the true beliefs despite the shift in the underlying policies. For a theoretically sound version, we could alternatively run single-agent piKL-LBS on only one player and only collect training data only from that player’s perspective while fixing the other player to only play the pre-search policy that the learned belief assumes they will.
Once we have collected enough data, we can train a new model with imitation learning and proceed to a new iteration. The process can be repeated until πpiKL-IL stops improving. Note that the anchor policy is always the same human behavior-cloned policy to prevent the final policy from drifting away from human conventions.
The algorithm is presented here in the iterative form. However, if computational resources permit, it can be formulated as an asynchronous RL algorithm similar to AlphaZero (Silver et al., 2018) where πpiKL-IL is constantly trained with data from a replay buffer while many parallel workers generate games with piKL-LBS and add them to the buffer.
4.2 PIKL-BR FOR A HUMAN-LIKE BEST RESPONSE
A popular approach to human-AI coordination is to train a best response to a human model. This BR training is similar to standard single-agent RL in POMDP settings as the partners are fixed policies and thus can be viewed as part of the environment. In practice, this method has a few issues due to the imperfection of the human model as well as the overfitting problem in RL.
Given a normally distributed dataset in which the majority of the humans have intermediate skill levels, the vanilla behavioral-cloning model often converges to an intermediate average score in self-play. Moreover, as observed in Jacob et al. (2022), BC often nontrivially underperforms even the average of the players it is trained on. This can make it hard for the BR agent to learn the true best response to stronger-than-average players, or even to average players. We can address this problem by training a BR πpiKL-BR against the final piKL-IL policy πpiKL-IL instead of the original human behavior cloning policy πBC.
Similar to how single-agent RL can overfit to its exact training environment, an RL best response may overfit to its fixed neural partner, including finding unusual or out-of-distribution actions that happen to cause its partner to perform slightly better actions but that might not generalize to actual human players. In this case, instead of greedily picking an action that has slightly higher return as in normal RL, it would be better to err on the side of what humans tend to do in order to remain in-distribution. To address these issues, we propose to add piKL regularization during BR training.
Specifically, we train a policy πpiKL-BR to be a best response to πpiKL-IL via Q-learning, but we modify the Q-learning update as
Q(τ it , at)← rt(τt, a) + γ ·Q(τ it+1, a′t+1), (3) where a′t+1 = argmax
a [Q(τ it+1, a) + λ · log πBC(τ it+1, a)], (4)
and where the exploration policy is ϵ-Greedy[Q(τ it , a) + λ · log πBC(τ it , a)].The difference from the normal Q-learning is highlighted in red. At test time, ϵ is set to 0. The λ here can be set to a smaller value than that in piKL-IL because the main purpose is no longer modeling human moves but rather regularization and tie-breaking when multiple actions have small differences in expected return.
It is worth noting that if we run piKL-IL with the same small λ for many iterations, then the final πpiKL-IL with the small λ input converges to the same policy as piKL-BR. PiKL-BR is the amortized model-free version of piKL-IL and this step can be omitted if there are enough resources to run piKL-IL for enough iterations with additional λs.
4.3 PIKL-LBS FOR ROBUSTNESS AGAINST OOD ERRORS
The πpiKL-BR policy is a strong human-like policy that performs well with piKL-IL and humans who play similarly. When playing with a diverse group of human players in real life, however, it may suffer from out-of-distribution (OOD) errors when encountering trajectories that have low probability under training distributions. The actions produced by πpiKL-BR on OOD input sequences can be arbitrarily bad as the neural network has never been trained on such data.
However, search or other model-based planning algorithms can mitigate this problem by avoiding the most devastating mistakes, because it often takes only a few steps of simulated rollouts to directly observe the negative outcomes caused by those mistakes. Inspired by this observation, we run piKLLBS at test time on top of πpiKL-BR. We use πpiKL-BR for both of πanc and πroll in Eq. 2 when it is our turn, and assume the partner acts according to πpiKL-IL on their turn. The belief model is trained on data generated by cross-play between piKL-BR and piKL-IL. Since the main purpose of this step is to avoid catastrophic OOD errors that are usually associated with substantially lower Q-values, we can set λ high so the search policy stays close to πpiKL-BR in situations when the Q-values do not substantially differ.
5 EXPERIMENTAL SETUP
We implement and test our method in the Hanabi benchmark environment (Bard et al., 2020). In this section, we introduce the game rules of Hanabi, as well as how we implement piKL3 and a best response to πBC (BR-BC) baseline.
Hanabi is a 2 to 5 player card game. In this paper we use the standard 2-player version. The deck consists of five color suits and each suit has ten cards divided into five ranks with three 1s, two 2s, two 3s, two 4s and one 5. At the beginning, each player draws five cards from the shuffled deck. Players can see other players’ cards but not their own. On each turn, the active player can either hint a color or rank to another player or play or discard a card. Hinting a color or rank, will inform the recipient which cards in their hand have that specific color/rank. Hinting costs an information token and the team starts with eight tokens. The goal of the team to play exactly one card of each rank 1 to 5 of each color suit, in increasing order of rank, The order of plays between different color suits does not matter. A successful play scores the team one point while a failed play one costs one life. If all three lives are lost, the team will get 0 in this game, losing all collected points. The maximum score is 25 points. The team regains a hint token when a card is discarded or when a suit is finished (playing all 5 of a suit successfully). The player draws a new card after a play or discard move. Once the deck is exhausted, the game terminates after each player makes one more final move.
We acquire a dataset of roughly 240K 2-player Hanabi games from BoardGameArena1 to train the human policy πh. The dataset contains all the games played in a certain period on that online platform and we do not perform any filtering. The dataset is randomly split into a training set of 235K games, a validation set of 1K games and a test set of 4K games. The average score of games in the training set is 15.88. The policy πθ is parameterized by a Public-LSTM neural network (See Hu et al. (2021b) or Section 3.2). The policy is trained to minimize the cross-entropy loss L(θ) = −Eτ i∼D ∑ t πθ(a i t|τ it ). Note that it treats the AOH of each player τi as a separate data point for training. Similar to prior works (Hu et al., 2021c), we apply color shuffling Hu et al. (2020) for data augmentation. Every time we sample τi ∼ D, we generate a random permutation f of the five colors, e.g. f : a b c d e→ b d c a e, and apply f to both the input and the target of the training data. This model is trained with Adam (Kingma & Ba, 2014) optimizer until the prediction accuracy on the validation set peaks. The converged πh gets 19.72± 0.10 in self-play and 63.63% in prediction accuracy on the test set. In evaluation, we take the argmax from the policy instead of sampling, which also explains why it achieves higher average score than the training set.
PiKL-LBS requires a learned approximate belief model. In Hanabi, the belief model takes the same AOH τ i as the policy and returns a distribution over player i’s own hand. The hand consists of 5
1en.boardgamearena.com
cards and we can predict them sequentially from the oldest to the newest based on the time they are drawn. The belief network ϕ consists of an LSTM encoder to encode sequence of observations and an LSTM decoder for predicting cards autoregressively. Note that the belief is a function of partner’s policy. The belief for a given policy π partnering with ρ is trained with cross-entropy loss L(ϕ) = −Eτπ∼D(π,ρ) ∑ t ∑ j log pϕ(cj |τπt , c1, · · · .cj−1), where cj is the j-th card in hand to predict. D(π, ρ) is an infinite data stream generated by cross-play using π and ρ and τπ ∼ D means that we only use data from π’s perspective for training. In piKL-IL, we use π = ρ = πroll.
We set P (λ) to be a uniform mixture of truncated Gaussian distributions to model players of diverse skill levels. Specifically, we use Gaussian distributionsN (µ, σ2) truncated at 0 and 2µ with (µ, σ) = (1, 1/4), (2, 2/4), (5, 5/4), (10, 10/4) and each Gaussian is sampled with equal probability. We generate d = 250K games in each iteration to train the new policy and we find that one outer iteration (k = 1 in Algo. 1) is sufficient to achieve good performance in Hanabi. In every LBS step, we perform M = 10K Monte Carlo rollouts evenly distributed over |A| legal actions. We sample M/|A| valid private hands from the belief model to reset the simulator for rollouts. Invalid sampled hands are rejected. With this setting, each game with 2 player running piKL-LBS independently takes roughly 5 minutes with 1 GPU and we use 500 GPUs in parallel for 42 hours to generate the entire dataset. To better imitate policies under different λs, we feed the µ of the Gaussian distribution from which the λ is sampled to the policy network in the form of a one-hot vector concatenated with the input. The self-play performance of the piKL-IL model conditioning on different λ input is shown in the top row of Table 1. Clearly, piKL-IL performs significantly better than πh and the score increases as regularization λ decreases.
The BR is trained under a standard distributed RL setup where many parallel workers generate data with cross-play between the training policy and the fixed IL policy. The generated data is added into a prioritized replay buffer Schaul et al. (2015) and the training loop samples mini-batches of games to update the policy with TD errors. We use the same Public-LSTM architecture for the BR policy as it will also be used in piKL-LBS at test time. The BR policy explores with ϵ-greedy to a distribution of ϵ sampled every new game while the IL policy does not explore but it samples a new λ input from {1, 2, 5, 10} every game. The λ in Eq. 4 is set to 0.1. The cross-play performance between the converged piKL-BR and piKL-IL is shown in the bottom row of Table 1. As expected, piKL-BR is better at collaborating with piKL-IL than piKL-IL itself and the gap shrinks as the regularization λ decreases. The reasons are that piKL-BR is trained with lower regularization and RL can optimize for multi-step best response while search can only optimize for one step.
We run piKL-LBS on the piKL-BR policy with high regularization λ = 2. The search assumes that our blueprint is πbp and our partner always follows πIL, the final output policy of piKL-IL. To avoid predicting the λ input for partner model πIL, we replace it with an imitation learning policy π′IL trained on the same dataset as πIL but without the λ input. The belief model is trained the same way as in piKL-IL but with π = πBR and ρ = π′IL in D(π, ρ). The number of Monte Carlo rollouts per step is reduced to 5K to make it faster and suitable for real world testing.
Finally, we train an unregularized λ = 0 best response to the vanilla behavioral clone policy πBC as our baseline. This agent achieves 23.19 ± 0.03 in cross-play with πBC in convergence. This score is quite high considering that its partner πBC is much worse than the piKL-IL policy, indicating that the unreguarlized BR may be overfitting to the imperfect human model.
6 RESULTS
We carry out two large scale experiments with real humans to evaluate piKL3. The first experiment focuses on ad-hoc team play with a diverse group of players without any prior communication (zeroshot). In the second experiment, we invite a group of expert players to play multiple games with piKL3 and the BR-BC baseline in alternating order to further differentiate the gap between them.
The experiments are hosted on our customized version of the open sourced Hanab.Live platform2. The modified platform disables chat, observe and replay functionalities. Additionally, participants cannot create games themselves nor invite others to form a team. All games are created automatically following the design of the experiments below. We send each player an instruction document of the platform to make them familiar with the UI in advance.
2https://github.com/Hanabi-Live/hanabi-live
6.1 AD-HOC TEAM PLAY
In the first experiment, we recruited players with different skill levels from diverse sources. We posted invitations on the board game Reddit, the forum of BoardGameArena, Twitter and Facebook Ads, as well as on two popular Discord channels where enthusiasts discuss conventions and organize tournaments. This group of players are referred to as testers. A $40 or $80 gift card was sent to the testers who successfully completed the required games to encourage participation. Meanwhile, we recruited a group of expert human players from a well-known Discord group to study how well humans can do when playing with unknown partners. The experts have all played Hanabi for more than 500 hours. The experts were paid proportional to the time they spend waiting for and playing with the testers. Each tester signed up for a 45-60 minute time slot. During their session, they were automatically matched with the BR-BC baseline agent, the piKL3 agent and one of the available experts in random order. Usernames of all players including AI agents were randomly decided, to maintain anonymity of the participants and of which players were the experts or the agents. Both AI agents sampled a sleep time t proportional to the entropy of softmax(Q) and waited for at least t seconds before sending the action. This further helped to hide the identity of AI agents and to mitigate the potential side effect that piKL3 and the BR-BC need different amounts of time to compute an action.
The results are shown in Table 2. From the overall result in the first row, we see that both BR-BC and piKL3 outperformed human experts in this task, indicating that playing with a diverse range of players in the zero-shot ad-hoc setting is challenging for humans. Still, it is worth noting that the 2-player no-variant version of Hanabi is not a particularly popular variant within the advanced Hanabi community and many people mostly play with people they know so that they can discuss and perfect strategies after games.
We let testers self-identify as one of four skill levels spanning from newcomer who has just learned the rules to expert who has played extensively. The results for each skill level group together with the number of players from each group are shown in the table as well. Although the standard errors are large due to the small number of games within each group, we can see a trend that the AI agents generally worked better with non-expert players, while experts have a clear lead when collaborating with other experts. This is likely because experts’ behaviors are more predictable and the community has converged to a few well-known convention sets that are easy to identify for the experts who follow them closely in forums and discussion channels. It might also reflect the fact that our training data matches a more diverse pool of players with fewer experts in it, since those experts tend to play on sites other than BoardGameArena, which is our only source of training data.
The difference between BR-BC and piKL3 in this set of results is not significant given the standard errors. Unfortunately, it is challenging to collect more games in this setting due to both the difficulty to recruit enough people with good intent and the logistic overhead to manage anonymous humanhuman matches.
We hypothesized that piKL3 might work better with experts thanks to its abilities to model stronger human players and to be more robust against OOD errors. We design a different experiment to verify this in the next section.
6.2 REPEATED GAMES WITH EXPERTS
In this experiment, we recruited a group of expert players to play with the two AI agents in alternating order for a maximum of 20 games. Each expert started randomly with one of the agents and switched to the other one after each game. They were aware of the matching rule and which AI was in their current game. As an incentive to do well, the players were compensated with $0.6 for every point they got. A terminated game with all life token lost counted as 0 points.
We collected 111 games for the BR-BC agent and 113 games for the piKL3 agent. The numbers are slightly different for the two agents because some games terminated unexpectedly due to platform related technical reasons. The average score and the percentage of perfect games for each agent are shown in Table 3. A perfect game is where the team achieves full score. Although the improvement may seem small numerically, the mechanics of Hanabi makes it increasingly difficult to improve as the score gets closer to a perfect 25. In RL training, for example, learning a 20-point policy from scratch takes roughly the same amount of time and data as improving that policy to a 21-point one.
piKL3 outperformed the BR-BC in terms of both average score and percentage of perfect games. Although the statistical significance of both results is somewhat limited given the amount of data p = 0.097 and p = 0.058, respectively, both are strongly suggestive and consistent with our initial hypothesis. It is also encouraging to see that piKL3 can achieve more perfect games with the experts. Experienced human players are particularly excited about perfect games as they are often significantly harder than getting 23 or 24 in a given deck.
Due to the limitation on budget and the overhead of managing experiments with humans, we were unable to collect enough games to perform ablations for each component of piKL3, nor to include more baselines. In this experiment, we focused on demonstrating the effectiveness of this combination compared to the popular BR-BC baseline. In practice, researchers may use any combination of the three components of piKL3 based on the properties and challenges of their specific domains.
Most existing works targeting human-AI coordination in Hanabi (Hu et al., 2021c; Cui et al., 2021a) do not directly use human data. They get around 16 points with a similar but slightly stronger πBC while piKL3 and BR-BC get around 23. Therefore, despite being interesting, it would not be a fair comparison to include them. Previously, Hu et al. (2020) also tested their agent with human players. However, those results are not directly comparable as they use a different scoring method that keeps all the points when losing all life tokens. Additionally, the population of the human testers has a profound impact on the numerical results.
7 CONCLUSION AND FUTURE WORK
We present piKL3, a three-step algorithm centered around the idea of regularizing search, imitation learning and reinforcement learning towards a policy learned from human data. We performed two large-scale experiments with human players in the Hanabi benchmark to demonstrate the effectiveness of this method and its superiority over the popular baseline of training an ordinary best response to a plain imitation policy. We also for the first time report human experts’ perhaps surprisingly low performance on zero-shot ad-hoc team play with a diverse population.
The main limitation of this method is that it requires large amounts of data to learn the initial human policy used as the anchor and blueprint in piKL search. Therefore, an interesting future direction is to extend this method to more domains, especially ones with less high-quality human data. Another direction would be to create personalized regularization that adapts to each individual player in repeated games in an attempt to better model them. | 1. What is the focus and contribution of the paper on coordinating human-like policies with humans?
2. What are the strengths of the proposed approach, particularly in modeling humans of varying skill levels?
3. What are the weaknesses of the paper, especially regarding its background and related work sections?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any concerns or questions about the feasibility of the proposed technique in other environments besides Hanabi? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The work extends piKL (an existing work for learning human-like policies) to coordinate with humans. They showcase their results on Hanabi benchmark.
Strengths And Weaknesses
Strengths :
I agree that the problem statement is of high importance, given upcoming HiL methods where coordination with the human model / policy should be accounted for.
I like the fact that the work attempts to model humans of varying skill levels.
Weaknesses :
I was quite disappointed with the background section, as the paper mentions piKL several times and admits to extending that work, but fails to provide any background details about this existing work.
Why is partial observability so important? Again, I feel this could have been addressed with an improved related work section. The work mentions two key things in its setup - the need for coordination with real humans, as well as the fact that agents exist in partially observable environments. Are there works that have already solved the fully observable case?
The work mentions SPARTA several times in the related work section, along with some of its details - but not piKL. Is there a reason for this?
How feasible is such a technique in other environments (other than Hanabi) where 200k + simulations may not be available?
Clarity, Quality, Novelty And Reproducibility
I found that the paper lacked clarity regarding the overall objective and the methodology proposed. Without an introduction to piKL, it was extremely challenging to understand the different variants such as piKL-IL, piKL-LBS, and piKL-BR. I feel that I could have appreciated the work much better had it been clearer.
ICLR | Title
Human-AI Coordination via Human-Regularized Search and Learning
Abstract
We consider the problem of making AI agents that collaborate well with humans in partially observable, fully cooperative environments given datasets of human behavior. Inspired by piKL, a human-data-regularized search method that improves upon a behavioral cloning policy without diverging far away from it, we develop a three-step algorithm that achieves strong performance in coordinating with real humans in the Hanabi benchmark. We first use a regularized search algorithm and behavioral cloning to produce a better human model that captures diverse skill levels. Then, we integrate the policy regularization idea into reinforcement learning to train a human-like best response to the human model. Finally, we apply regularized search on top of the best response policy at test time to handle out-of-distribution challenges when playing with humans. We evaluate our method in two large-scale experiments with humans. First, we show that our method outperforms experts when playing with a group of diverse human players in ad-hoc teams. Second, we show that our method beats a vanilla best response to behavioral cloning baseline by having experts play repeatedly with the two agents.
1 INTRODUCTION
One of the most fundamental goals of artificial intelligence research, especially multi-agent research, is to produce agents that can successfully collaborate with humans to achieve common goals. Although search and reinforcement learning (RL) from scratch without human knowledge have achieved impressive superhuman performance in competitive games (Silver et al., 2017; Brown & Sandholm, 2019), prior works (Hu et al., 2020; Carroll et al., 2019) have shown that agents produced by vanilla multi-agent reinforcement learning do not collaborate well with humans.
A canonical way to obtain agents that collaborate well with humans is to first use behavioral cloning (BC) (Bain & Sammut, 1996) to train a policy that mimics human behavior and then use RL to train a best response (BR policy) to the fixed BC policy. However, such an approach has a few issues. The BC policy is hardly a perfect representation of human play. It may struggle to mimic strong players' performance without search (Jacob et al., 2022). The BC policy's response to new conventions developed during BR training is also not well defined. Therefore, the BR policy may develop strategies that exploit those undefined behaviors, confusing humans and causing them to diverge from routine behaviors or even quit the task because they believe the partner is not sensible.
Recently, Jacob et al. (2022) introduced piKL, a search technique regularized towards BC policies learned from human data that can produce strong yet human-like policies. In some environments, the regularized search, with the proper amount of regularization, achieves better performance while maintaining or even improving its accuracy when predicting human actions. Inspired by piKL, we propose a three-step algorithm to create agents that can collaborate well with humans in complex partially observable environments. In the first step, we repeatedly apply imitation learning and piKL (piKL-IL) with multiple regularization coefficients to model human players of different skill levels. Secondly, we integrate the regularization idea with RL to train a human-like best response agent (piKL-BR) to the agents from step one. Thirdly and finally, at test time, we apply piKL on the trained best response agent to further improve performance. We call our method piKL3.
We test our method on the challenging benchmark Hanabi (Bard et al., 2020) through large-scale experiments with real human players. We first show that it outperforms human experts when partnering
with a group of unknown human players in an ad hoc setting without prior communication or warmup games. Players were recruited from a diverse player group and have different skill levels. We then evaluate piKL3 when partnered with expert human partners. We find that piKL3 outperforms an RL best response to a behavioral cloning policy (BR-BC) – a strong and established baseline for cooperative agents – in this setting.
2 RELATED WORK
The research on learning to collaborate with humans can be roughly categorized into two groups based on whether or not they rely on human data. With human data, the most straightforward method is behavioral cloning, which uses supervised learning to predict human moves and executes the move with the highest predicted probability. The datasets often contain sub-optimal decisions and mistakes made by humans, and behavioral cloning inevitably suffers from training on such data. A few methods from the imitation learning and offline RL community have been proposed to address such issues. For example, conditioning the policy on a reward target (Kumar et al., 2019; Chen et al., 2021) can help guide the policy towards imitating the human behaviors that achieve the maximum future rewards at test time. Behavioral cloning with neural networks alone may struggle to model sufficiently strong humans, especially in complex games that require long-term planning (McIlroy-Young et al., 2020). Jacob et al. (2022) address this issue by regularizing search towards a behavioral cloning policy. The proposed method, piKL, not only improves the overall performance as most search methods do, but also achieves better accuracy when predicting human moves in a wide variety of games compared to the behavioral cloning policy on which it is based.
Human data can also be used in combination with reinforcement learning. Observationally Augmented Self-Play (OSP) (Lerer & Peysakhovich, 2019) augments the normal MARL training procedure with a behavioral cloning loss on a limited amount of data collected from a test time agent to find an equilibrium policy that may work well with that agent. OSP increases the probability of learning conventions that are compatible with the test time agents. However it may not be able to model partners with diverse skill levels given a large aggregation of data from various players. We can also use RL to train a best response policy to the behavioral cloning policy (Carroll et al., 2019). This method is the optimal solution given a perfect human model. In practice, however, RL is prone to overfitting to the imperfections of the human model. In addition, RL alone may not be sufficient in practice to learn superhuman strategies (Silver et al., 2018; Brown & Sandholm, 2019).
A parallel research direction seeks to achieve better human-AI coordination without using any human data. Some of them take inspiration from human behavior or the human learning process. Hu et al. (2020) enforce the RL policies to not break the symmetries in the game arbitrarily, a common practice of human players in certain games. Inspired by humans’ learning and reasoning process, Off-belief learning (Hu et al., 2021c) and K-level reasoning (Costa-Gomes & Crawford, 2006; Cui et al., 2021b) train sequences of policies with increasing cognitive capabilities. Both methods achieve strong performance with a human proxy model trained with behavioral cloning. Another group of methods use population-based training and various diversity metrics (Strouse et al., 2021; Lupu et al., 2021; Tang et al., 2021) to first obtain a set of different policies and then train a common best response that may generalize better to human partners than best response to a single RL policy.
3 BACKGROUND
3.1 DEC-POMDP AND DEEP REINFORCEMENT LEARNING
We consider human-AI coordination in a decentralized partially observable Markov decision process (Dec-POMDP) (Nayyar et al., 2013). A Dec-POMDP consists of N agents indexed by (1, . . . , N), a state space S, a joint action space A = A_1 × · · · × A_N, a transition function T : S × A → S, a reward function r : S × A → R and a set of observation functions o^i = Ω^i(s), s ∈ S for each agent i. We further assume that the joint actions a and rewards r are observable by all agents. We then define the trajectory of true states until time step t as τ_t = (s_0, a_0, r_0, . . . , s_t) and its partially observed counterpart (action-observation history, AOH) for agent i as τ^i_t = (o^i_0, a_0, r_0, . . . , o^i_t). An agent's policy π^i(τ^i_t) = P(a^i_t | τ^i_t) maps each possible AOH to a distribution over the action space of that agent. We use π to denote the joint policy of all agents and π^{-i} to denote the joint policy of all other agents excluding agent i.
Deep multi-agent RL (MARL) has been successfully applied in many Dec-POMDP environments. Deep MARL algorithms often consist of a strong RL backbone such as (recurrent) DQN (Kapturowski et al., 2019) or PPO (Schulman et al., 2017) and additional modules such as a centralized value function (Yu et al., 2021) or value decomposition (Sunehag et al., 2017; Rashid et al., 2018) to handle challenges posed by having multiple agents. In this paper, we use recurrent DQN to train a best response to fixed policies. Specifically, a recurrent network is trained to model the expected total return for each action given the input AOH, Q(τ^i_t, a) = E_{τ ∼ P(τ_t | τ^i_t)} R(τ_t), where R(τ_t) = Σ_{t′≥t} γ^(t′−t) r_{t′} is the sum of discounted future rewards obtained by unrolling the joint policy π on the sampled true game trajectory until termination. The joint policy is the greedy policy derived from each agent's Q-function.
3.2 SEARCH AND REGULARIZED SEARCH IN DEC-POMDP
Search has been critical to achieve superhuman performance in many games (Silver et al., 2018; Brown & Sandholm, 2019; Bakhtin et al., 2021). SPARTA (Lerer et al., 2020) and its faster and more generalized variant Learned Belief Search (Hu et al., 2021b) are competitive and efficient search algorithms in Dec-POMDPs. SPARTA assumes that a joint blueprint policy (BP) π has been agreed on beforehand. In single-agent SPARTA, one agent performs search at every time step assuming that their partners follow the BP. Specifically, the search agent i keeps track of the belief B(τ^i_t) = P(τ_t | τ^i_t, π^{-i}), which is the distribution of the trajectory of states given the AOH and the partners' policies. It picks the action a′ that returns the highest sum of undiscounted future rewards assuming a′ is executed at time t and everyone follows the joint BP afterwards, i.e.
a^i_t = argmax_a Q_π(τ^i_t, a) = argmax_a E_{τ_t ∼ B(τ^i_t)} [ r(τ_t, a) + R_π(τ_{t+1}) ],    (1)
where r(τ_t, a) is the reward at time t after executing a and R_π(τ_{t+1}) is the sum of future rewards following the joint policy π. This notation assumes a deterministic transition function, as the randomness can be absorbed into the belief function.
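For illustration, the following Python sketch shows how the single-agent search step of Eq. 1 can be estimated with Monte Carlo rollouts. The helper callables sample_from_belief and rollout_return are hypothetical stand-ins for a game simulator and a blueprint rollout routine; they are not part of the original paper, and the snippet is a minimal sketch of the idea rather than the authors' implementation.

def sparta_action(legal_actions, sample_from_belief, rollout_return, num_rollouts=100):
    # Single-agent SPARTA-style search (Eq. 1).
    # sample_from_belief(): draws a full game state consistent with the searcher's AOH.
    # rollout_return(state, first_action): plays first_action, lets every player follow
    #     the blueprint until the game ends, and returns the total (undiscounted) reward.
    q_estimates = {}
    for a in legal_actions:
        returns = [rollout_return(sample_from_belief(), a) for _ in range(num_rollouts)]
        q_estimates[a] = sum(returns) / num_rollouts
    best_action = max(q_estimates, key=q_estimates.get)
    return best_action, q_estimates

# Toy usage with dummy stand-ins:
# best, q = sparta_action([0, 1, 2], sample_from_belief=lambda: None,
#                         rollout_return=lambda state, a: float(a))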
The belief tracking in SPARTA is computationally expensive and may have null support when partners deviate even slightly from the BP. Learned Belief Search (Hu et al., 2021a, LBS) mitigates those problems by using a neural network belief model B̂ trained with data generated by the BP with some exploration. For environments where the observation can be factorized into public and private parts, such as Hanabi, LBS also proposes to use a two-stream architecture where one stream, with an LSTM, takes the public information as input, while the other stream, which consists of only feed-forward layers, takes the private information. The outputs of the two streams are fused to compute Q-values. This special architecture further reduces the computation cost as it no longer needs to re-unroll the LSTM from the beginning of the game for each sampled τ_t.
Although the learned belief technique was originally proposed to speed up SPARTA, it becomes critical in scenarios where the game trajectory does not exactly follow the assumed joint policy. For example, when playing with humans, humans’ real moves may differ from our model’s predictions and the learned belief model can often generalize well in those cases. In this paper we use LBS as the search component in piKL.
Jacob et al. (2022) show that the output policy of search algorithms can diverge quite far from the underlying blueprint policy used for rollouts and value estimation, which is undesirable in environments where collaborating with human partners is crucial. In general, they propose to sample actions following
P(a) ∝ π_anc(a | τ^i_t) · exp[ Q_{π_roll}(τ^i_t, a) / λ ],    (2)
where Q_{π_roll}(τ^i_t, a) is the value output of a search algorithm like SPARTA, Monte Carlo tree search, or regret matching using π_roll as the blueprint policy for rollouts and/or value estimation, π_anc(a | τ^i_t) is the anchor policy that we want our final policy to be close to, and λ is a hyper-parameter controlling the degree of regularization. In fully cooperative games where mixed strategies are not necessary, Jacob et al. (2022) show that a greedy variant that selects the argmax works better in practice.
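As a concrete illustration, the snippet below implements the regularized action rule of Eq. 2 and its greedy variant on top of precomputed search Q-values. The variable names (anchor_probs, q_values, lam) are our own, and this is a minimal numpy sketch rather than the authors' code.

import numpy as np

def pikl_policy(anchor_probs, q_values, lam):
    # Eq. 2: P(a) proportional to pi_anc(a) * exp(Q(a) / lambda).
    logits = np.log(np.asarray(anchor_probs) + 1e-12) + np.asarray(q_values) / lam
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def pikl_greedy_action(anchor_probs, q_values, lam):
    # Greedy variant: argmax_a [ Q(a) + lambda * log pi_anc(a) ].
    scores = np.asarray(q_values) + lam * np.log(np.asarray(anchor_probs) + 1e-12)
    return int(np.argmax(scores))

# Example: a strong anchor prior on action 0, but a higher search value for action 1.
# pikl_policy([0.7, 0.2, 0.1], [1.0, 3.0, 0.5], lam=2.0)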
4 METHOD
In this section we introduce piKL3. We first use piKL-IL with a probability distribution over the regularization parameter λ to model human players with varying skill levels. Then, we use RL regularized toward the behavioral cloning policy to train a human-compatible best response, piKL-BR. Finally, we use piKL-LBS at test time with high regularization toward the BR to fix severe mistakes when playing with real humans.
4.1 PIKL-IL FOR MODELING DIFFERENT LEVELS OF HUMAN PLAY
Algorithm 1 piKL-IL: modeling humans with different skill levels. P(λ) can be a discrete uniform distribution over a set of values or over a set of Gaussian distributions centered around those values. piKL-LBS(λ_i, π_roll, B̂, π_anc) is a function to act following Eq. 2 or its greedy variant. It samples from the learned approximate belief model τ_t ∼ B̂(τ^i_t) to estimate Q_{π_roll}.
1: procedure PIKL-IL(π_BC, P(λ), k, d)
     ▷ π_BC: behavioral cloning policy trained from human data
     ▷ P(λ): distribution of λ
     ▷ k: number of iterations
     ▷ d: size of the dataset
2:   π_piKL-IL ← π_BC
3:   for i ← 1, . . . , k do
4:     Train a belief model B̂ from self-play games of π_piKL-IL
5:     Initialize dataset D = ∅
6:     while len(D) < d do
7:       Sample λ_i ∼ P(λ) for every player independently
8:       Generate a game g where player i follows piKL-LBS(λ_i, π_piKL-IL, B̂, π_BC)
9:       Add the game g to dataset D
10:    end while
11:    Train a new policy π′ with behavior cloning on D
12:    π_piKL-IL ← π′
13:  end for
14:  return π_piKL-IL
15: end procedure
PiKL-IL is a search-augmented imitation learning method. It first trains an imitation policy πBC via behavioral cloning (Bain & Sammut, 1996) on a dataset collected from the population of humans we want to model. Then piKL-IL iteratively improves a policy πpiKL-IL, alternating between generating higher quality data with piKL-LBS and training a better model using the generated dataset to produce a new πpiKL-IL. Each iteration of piKL-LBS uses πBC as the anchor policy πanc and πpiKL-IL as the rollout policy πroll in Eq. 2, so that we always anchor the generated data to never differ too much from human play, while using the best rollout policy so far to generate the next. The pseudocode is in Algorithm 1.
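A compact Python rendering of this loop is sketched below. The callables passed in (sample_lambda, train_belief_model, generate_game, behavior_clone) are hypothetical stand-ins for the simulator and training routines and are not part of the paper; the sketch only illustrates the control flow of Algorithm 1.

def pikl_il(pi_bc, sample_lambda, train_belief_model, generate_game, behavior_clone,
            num_iters=1, dataset_size=250_000):
    # pi_bc: behavioral cloning policy, used as the anchor throughout.
    # generate_game(pi_roll, belief, pi_anc, lambdas): plays one game in which each
    #     player acts with piKL-LBS (Eq. 2), using pi_roll for rollouts and pi_anc as anchor.
    # behavior_clone(dataset): fits a new lambda-conditioned policy on the generated games.
    pi_roll = pi_bc
    for _ in range(num_iters):
        belief = train_belief_model(pi_roll)              # belief for self-play of the current policy
        dataset = []
        while len(dataset) < dataset_size:
            lambdas = (sample_lambda(), sample_lambda())  # one lambda per player
            dataset.append(generate_game(pi_roll, belief, pi_bc, lambdas))
        pi_roll = behavior_clone(dataset)                 # imitate the regularized search games
    return pi_roll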
piKL was shown to maintain the same or even higher prediction accuracy on human moves while achieving much higher performance with certain λs, indicating that it may be better at modeling stronger human players (Jacob et al., 2022) than behavioral cloning. As λ becomes smaller, the prediction accuracy drops while the performance keeps increasing, moving closer to the unregularized search policy. We therefore use a distribution of λs to generate a spectrum of policies with strength ranging from average human players to exceptional policies that still resemble human behaviors reasonably well. When training a new policy on the generated data, we can condition the policy on the λ so that we can explicitly control the distribution of different skill levels in the subsequent iterations as well as in the piKL-BR of Section 4.2.
Theoretically, we should apply multi-agent piKL-LBS but it is too computationally demanding to generate enough data for imitation learning. Instead we run single-agent piKL-LBS with learned beliefs independently for both players. Prior work (Jacob et al., 2022) shows that although running piKL-LBS independently for more than one player lacks theoretical guarantees because both players are unsoundly assuming that the other player is playing according to the pre-search policy when both
players in fact play according to the post-search policy, the algorithm still achieves high performance: since all policies are regularized towards the same πBC, the learned belief models remain, in practice, a good approximation of the true beliefs despite the shift in the underlying policies. For a theoretically sound version, we could alternatively run single-agent piKL-LBS on only one player and collect training data only from that player's perspective, while fixing the other player to play the pre-search policy that the learned belief assumes they follow.
Once we have collected enough data, we can train a new model with imitation learning and proceed to a new iteration. The process can be repeated until πpiKL-IL stops improving. Note that the anchor policy is always the same human behavior-cloned policy to prevent the final policy from drifting away from human conventions.
The algorithm is presented here in the iterative form. However, if computational resources permit, it can be formulated as an asynchronous RL algorithm similar to AlphaZero (Silver et al., 2018) where πpiKL-IL is constantly trained with data from a replay buffer while many parallel workers generate games with piKL-LBS and add them to the buffer.
4.2 PIKL-BR FOR A HUMAN-LIKE BEST RESPONSE
A popular approach to human-AI coordination is to train a best response to a human model. This BR training is similar to standard single-agent RL in POMDP settings as the partners are fixed policies and thus can be viewed as part of the environment. In practice, this method has a few issues due to the imperfection of the human model as well as the overfitting problem in RL.
Given a normally distributed dataset in which the majority of the humans have intermediate skill levels, the vanilla behavioral-cloning model often converges to an intermediate average score in self-play. Moreover, as observed in Jacob et al. (2022), BC often nontrivially underperforms even the average of the players it is trained on. This can make it hard for the BR agent to learn the true best response to stronger-than-average players, or even to average players. We can address this problem by training a BR πpiKL-BR against the final piKL-IL policy πpiKL-IL instead of the original human behavior cloning policy πBC.
Similar to how single-agent RL can overfit to its exact training environment, an RL best response may overfit to its fixed neural partner, including finding unusual or out-of-distribution actions that happen to cause its partner to perform slightly better actions but that might not generalize to actual human players. In this case, instead of greedily picking an action that has slightly higher return as in normal RL, it would be better to err on the side of what humans tend to do in order to remain in-distribution. To address these issues, we propose to add piKL regularization during BR training.
Specifically, we train a policy πpiKL-BR to be a best response to πpiKL-IL via Q-learning, but we modify the Q-learning update as
Q(τ^i_t, a_t) ← r_t(τ_t, a_t) + γ · Q(τ^i_{t+1}, a′_{t+1}),    (3)
where a′_{t+1} = argmax_a [ Q(τ^i_{t+1}, a) + λ · log π_BC(τ^i_{t+1}, a) ],    (4)
and where the exploration policy is ϵ-greedy over [Q(τ^i_t, a) + λ · log π_BC(τ^i_t, a)]. The difference from normal Q-learning is the added λ · log π_BC regularization term. At test time, ϵ is set to 0. The λ here can be set to a smaller value than that in piKL-IL because the main purpose is no longer modeling human moves but rather regularization and tie-breaking when multiple actions have small differences in expected return.
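The following numpy sketch illustrates the regularized bootstrap target and exploration rule of Eqs. 3-4. The names (q_next, log_pi_bc_next, lam, eps) and the discount value are illustrative assumptions of ours; this is not the authors' training code.

import numpy as np

def pikl_br_target(reward, q_next, log_pi_bc_next, lam, gamma=0.999):
    # Eqs. 3-4: bootstrap from the action chosen by the regularized argmax.
    a_next = int(np.argmax(q_next + lam * log_pi_bc_next))
    return reward + gamma * q_next[a_next]

def pikl_br_explore(q, log_pi_bc, lam, eps, rng=np.random.default_rng()):
    # Epsilon-greedy over the regularized scores; eps is set to 0 at test time.
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmax(q + lam * log_pi_bc))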
It is worth noting that if we run piKL-IL with the same small λ for many iterations, then the final πpiKL-IL with the small λ input converges to the same policy as piKL-BR. PiKL-BR is the amortized model-free version of piKL-IL and this step can be omitted if there are enough resources to run piKL-IL for enough iterations with additional λs.
4.3 PIKL-LBS FOR ROBUSTNESS AGAINST OOD ERRORS
The πpiKL-BR policy is a strong human-like policy that performs well with piKL-IL and humans who play similarly. When playing with a diverse group of human players in real life, however, it may suffer from out-of-distribution (OOD) errors when encountering trajectories that have low probability under training distributions. The actions produced by πpiKL-BR on OOD input sequences can be arbitrarily bad as the neural network has never been trained on such data.
However, search or other model-based planning algorithms can mitigate this problem by avoiding the most devastating mistakes, because it often takes only a few steps of simulated rollouts to directly observe the negative outcomes caused by those mistakes. Inspired by this observation, we run piKL-LBS at test time on top of πpiKL-BR. We use πpiKL-BR for both π_anc and π_roll in Eq. 2 when it is our turn, and assume the partner acts according to πpiKL-IL on their turn. The belief model is trained on data generated by cross-play between piKL-BR and piKL-IL. Since the main purpose of this step is to avoid catastrophic OOD errors that are usually associated with substantially lower Q-values, we can set λ high so the search policy stays close to πpiKL-BR in situations when the Q-values do not substantially differ.
5 EXPERIMENTAL SETUP
We implement and test our method in the Hanabi benchmark environment (Bard et al., 2020). In this section, we introduce the game rules of Hanabi, as well as how we implement piKL3 and a best response to πBC (BR-BC) baseline.
Hanabi is a 2 to 5 player card game. In this paper we use the standard 2-player version. The deck consists of five color suits and each suit has ten cards divided into five ranks, with three 1s, two 2s, two 3s, two 4s and one 5. At the beginning, each player draws five cards from the shuffled deck. Players can see other players' cards but not their own. On each turn, the active player can either hint a color or rank to another player, or play or discard a card. Hinting a color or rank will inform the recipient which cards in their hand have that specific color/rank. Hinting costs an information token and the team starts with eight tokens. The goal of the team is to play exactly one card of each rank 1 to 5 of each color suit, in increasing order of rank. The order of plays between different color suits does not matter. A successful play scores the team one point while a failed play costs one life. If all three lives are lost, the team gets 0 in this game, losing all collected points. The maximum score is 25 points. The team regains a hint token when a card is discarded or when a suit is finished (playing all 5 of a suit successfully). The player draws a new card after a play or discard move. Once the deck is exhausted, the game terminates after each player makes one more final move.
We acquire a dataset of roughly 240K 2-player Hanabi games from BoardGameArena [1] to train the human policy πh. The dataset contains all the games played in a certain period on that online platform and we do not perform any filtering. The dataset is randomly split into a training set of 235K games, a validation set of 1K games and a test set of 4K games. The average score of games in the training set is 15.88. The policy πθ is parameterized by a Public-LSTM neural network (see Hu et al. (2021b) or Section 3.2). The policy is trained to minimize the cross-entropy loss L(θ) = −E_{τ^i ∼ D} Σ_t log π_θ(a^i_t | τ^i_t). Note that it treats the AOH of each player τ^i as a separate data point for training. Similar to prior works (Hu et al., 2021c), we apply color shuffling (Hu et al., 2020) for data augmentation. Every time we sample τ^i ∼ D, we generate a random permutation f of the five colors, e.g. f : a b c d e → b d c a e, and apply f to both the input and the target of the training data. This model is trained with the Adam (Kingma & Ba, 2014) optimizer until the prediction accuracy on the validation set peaks. The converged πh gets 19.72 ± 0.10 in self-play and 63.63% prediction accuracy on the test set. In evaluation, we take the argmax from the policy instead of sampling, which also explains why it achieves a higher average score than the training set.
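To illustrate the color-shuffling augmentation, the sketch below applies a random permutation of the five colors to a symbolic game record. The dictionary-based event representation here is an assumption of ours made for readability; the actual Hanabi observations are fixed-length feature vectors, so in practice the permutation is applied to the corresponding feature and action indices.

import random

COLORS = ["red", "yellow", "green", "white", "blue"]

def random_color_permutation(rng=random):
    shuffled = COLORS[:]
    rng.shuffle(shuffled)
    return dict(zip(COLORS, shuffled))        # e.g. {"red": "green", "yellow": "red", ...}

def shuffle_colors(game_record, perm):
    # Apply the same permutation to every color-valued field of the record, so that
    # inputs and action targets stay consistent with each other.
    remapped = []
    for event in game_record:
        event = dict(event)
        if "color" in event:
            event["color"] = perm[event["color"]]
        remapped.append(event)
    return remapped

# Each time a training game is sampled, draw a fresh permutation and remap both the
# observations and the target actions with it:
# perm = random_color_permutation()
# augmented = shuffle_colors([{"type": "hint_color", "color": "red"}], perm)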
PiKL-LBS requires a learned approximate belief model. In Hanabi, the belief model takes the same AOH τ^i as the policy and returns a distribution over player i's own hand. The hand consists of 5
[1] en.boardgamearena.com
cards and we can predict them sequentially from the oldest to the newest based on the time they are drawn. The belief network ϕ consists of an LSTM encoder to encode the sequence of observations and an LSTM decoder for predicting cards autoregressively. Note that the belief is a function of the partner's policy. The belief for a given policy π partnering with ρ is trained with the cross-entropy loss L(ϕ) = −E_{τ^π ∼ D(π,ρ)} Σ_t Σ_j log p_ϕ(c_j | τ^π_t, c_1, · · · , c_{j−1}), where c_j is the j-th card in hand to predict. D(π, ρ) is an infinite data stream generated by cross-play between π and ρ, and τ^π ∼ D means that we only use data from π's perspective for training. In piKL-IL, we use π = ρ = π_roll.
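A minimal PyTorch sketch of such an encoder-decoder belief model is given below. The layer sizes, vocabulary size, and tensor shapes are illustrative assumptions of ours and not the configuration used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BeliefModel(nn.Module):
    def __init__(self, obs_dim, num_card_types=25, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.card_embed = nn.Embedding(num_card_types + 1, hidden)   # extra index = start token
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_card_types)

    def loss(self, obs_seq, target_cards):
        # obs_seq: [batch, time, obs_dim]; target_cards: [batch, hand_size] card indices.
        _, state = self.encoder(obs_seq)                              # summarize the AOH
        start = torch.full_like(target_cards[:, :1], self.head.out_features)
        decoder_in = self.card_embed(torch.cat([start, target_cards[:, :-1]], dim=1))
        out, _ = self.decoder(decoder_in, state)                      # predict cards autoregressively
        logits = self.head(out)                                       # [batch, hand_size, num_card_types]
        return F.cross_entropy(logits.flatten(0, 1), target_cards.flatten())

# Usage: model = BeliefModel(obs_dim=512); l = model.loss(obs_batch, hand_batch)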
We set P(λ) to be a uniform mixture of truncated Gaussian distributions to model players of diverse skill levels. Specifically, we use Gaussian distributions N(µ, σ^2) truncated at 0 and 2µ with (µ, σ) = (1, 1/4), (2, 2/4), (5, 5/4), (10, 10/4), and each Gaussian is sampled with equal probability. We generate d = 250K games in each iteration to train the new policy and we find that one outer iteration (k = 1 in Algo. 1) is sufficient to achieve good performance in Hanabi. In every LBS step, we perform M = 10K Monte Carlo rollouts evenly distributed over the |A| legal actions. We sample M/|A| valid private hands from the belief model to reset the simulator for rollouts. Invalid sampled hands are rejected. With this setting, each game with 2 players running piKL-LBS independently takes roughly 5 minutes with 1 GPU, and we use 500 GPUs in parallel for 42 hours to generate the entire dataset. To better imitate policies under different λs, we feed the µ of the Gaussian distribution from which the λ is sampled to the policy network in the form of a one-hot vector concatenated with the input. The self-play performance of the piKL-IL model conditioned on different λ inputs is shown in the top row of Table 1. Clearly, piKL-IL performs significantly better than πh, and the score increases as the regularization λ decreases.
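The numpy sketch below shows one way such a λ could be drawn and encoded for conditioning; the rejection-sampling loop and helper names are our own illustrative choices rather than the authors' code.

import numpy as np

MUS = [1.0, 2.0, 5.0, 10.0]

def sample_lambda(rng=np.random.default_rng()):
    # Pick one mixture component uniformly, then draw from N(mu, (mu/4)^2)
    # truncated to (0, 2*mu) by rejection sampling.
    mu = MUS[rng.integers(len(MUS))]
    while True:
        lam = rng.normal(mu, mu / 4.0)
        if 0.0 < lam < 2.0 * mu:
            return lam, mu

def one_hot_mu(mu):
    # The policy network is conditioned on the component mean as a one-hot vector.
    vec = np.zeros(len(MUS), dtype=np.float32)
    vec[MUS.index(mu)] = 1.0
    return vec

# lam, mu = sample_lambda(); conditioning = one_hot_mu(mu)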
The BR is trained under a standard distributed RL setup where many parallel workers generate data with cross-play between the training policy and the fixed IL policy. The generated data is added into a prioritized replay buffer (Schaul et al., 2015) and the training loop samples mini-batches of games to update the policy with TD errors. We use the same Public-LSTM architecture for the BR policy as it will also be used in piKL-LBS at test time. The BR policy explores with ϵ-greedy, with ϵ sampled from a distribution at the start of every game, while the IL policy does not explore but samples a new λ input from {1, 2, 5, 10} every game. The λ in Eq. 4 is set to 0.1. The cross-play performance between the converged piKL-BR and piKL-IL is shown in the bottom row of Table 1. As expected, piKL-BR is better at collaborating with piKL-IL than piKL-IL itself, and the gap shrinks as the regularization λ decreases. The reasons are that piKL-BR is trained with lower regularization and that RL can optimize for a multi-step best response while search only optimizes one step ahead.
We run piKL-LBS on the piKL-BR policy with high regularization λ = 2. The search assumes that our blueprint is πbp and our partner always follows πIL, the final output policy of piKL-IL. To avoid predicting the λ input for partner model πIL, we replace it with an imitation learning policy π′IL trained on the same dataset as πIL but without the λ input. The belief model is trained the same way as in piKL-IL but with π = πBR and ρ = π′IL in D(π, ρ). The number of Monte Carlo rollouts per step is reduced to 5K to make it faster and suitable for real world testing.
Finally, we train an unregularized (λ = 0) best response to the vanilla behavioral cloning policy πBC as our baseline. This agent achieves 23.19 ± 0.03 in cross-play with πBC at convergence. This score is quite high considering that its partner πBC is much worse than the piKL-IL policy, indicating that the unregularized BR may be overfitting to the imperfect human model.
6 RESULTS
We carry out two large-scale experiments with real humans to evaluate piKL3. The first experiment focuses on ad-hoc team play with a diverse group of players without any prior communication (zero-shot). In the second experiment, we invite a group of expert players to play multiple games with piKL3 and the BR-BC baseline in alternating order to further differentiate between the two.
The experiments are hosted on our customized version of the open-source Hanab.Live platform [2]. The modified platform disables the chat, observe and replay functionalities. Additionally, participants cannot create games themselves or invite others to form a team. All games are created automatically following the design of the experiments below. We send each player an instruction document for the platform in advance to make them familiar with the UI.
[2] https://github.com/Hanabi-Live/hanabi-live
6.1 AD-HOC TEAM PLAY
In the first experiment, we recruited players with different skill levels from diverse sources. We posted invitations on the board game Reddit, the forum of BoardGameArena, Twitter and Facebook Ads, as well as on two popular Discord channels where enthusiasts discuss conventions and organize tournaments. This group of players is referred to as testers. A $40 or $80 gift card was sent to the testers who successfully completed the required games to encourage participation. Meanwhile, we recruited a group of expert human players from a well-known Discord group to study how well humans can do when playing with unknown partners. The experts have all played Hanabi for more than 500 hours. The experts were paid in proportion to the time they spent waiting for and playing with the testers. Each tester signed up for a 45-60 minute time slot. During their session, they were automatically matched with the BR-BC baseline agent, the piKL3 agent and one of the available experts in random order. Usernames of all players, including the AI agents, were assigned randomly to maintain the anonymity of the participants and to conceal which players were the experts or the agents. Both AI agents sampled a sleep time t proportional to the entropy of softmax(Q) and waited for at least t seconds before sending the action. This further helped to hide the identity of the AI agents and to mitigate the potential side effect that piKL3 and BR-BC need different amounts of time to compute an action.
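As an illustration of this delay heuristic, the sketch below derives a sleep time from the entropy of softmax(Q). The scaling constants are our own illustrative assumptions, since the paper does not specify how entropy is mapped to seconds.

import numpy as np

def sleep_time(q_values, base=1.0, scale=3.0):
    q = np.asarray(q_values, dtype=float)
    p = np.exp(q - q.max())                   # softmax over the Q-values
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    # Higher-entropy (less clear-cut) positions get a longer, more human-like delay.
    return base + scale * entropy             # seconds to wait before sending the action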
The results are shown in Table 2. From the overall result in the first row, we see that both BR-BC and piKL3 outperformed human experts in this task, indicating that playing with a diverse range of players in the zero-shot ad-hoc setting is challenging for humans. Still, it is worth noting that the 2-player no-variant version of Hanabi is not a particularly popular variant within the advanced Hanabi community and many people mostly play with people they know so that they can discuss and perfect strategies after games.
We let testers self-identify as one of four skill levels spanning from newcomer who has just learned the rules to expert who has played extensively. The results for each skill level group together with the number of players from each group are shown in the table as well. Although the standard errors are large due to the small number of games within each group, we can see a trend that the AI agents generally worked better with non-expert players, while experts have a clear lead when collaborating with other experts. This is likely because experts’ behaviors are more predictable and the community has converged to a few well-known convention sets that are easy to identify for the experts who follow them closely in forums and discussion channels. It might also reflect the fact that our training data matches a more diverse pool of players with fewer experts in it, since those experts tend to play on sites other than BoardGameArena, which is our only source of training data.
The difference between BR-BC and piKL3 in this set of results is not significant given the standard errors. Unfortunately, it is challenging to collect more games in this setting due to both the difficulty of recruiting enough people with good intent and the logistical overhead of managing anonymous human-human matches.
We hypothesized that piKL3 might work better with experts thanks to its abilities to model stronger human players and to be more robust against OOD errors. We design a different experiment to verify this in the next section.
6.2 REPEATED GAMES WITH EXPERTS
In this experiment, we recruited a group of expert players to play with the two AI agents in alternating order for a maximum of 20 games. Each expert started randomly with one of the agents and switched to the other one after each game. They were aware of the matching rule and which AI was in their current game. As an incentive to do well, the players were compensated with $0.6 for every point they got. A terminated game with all life tokens lost counted as 0 points.
We collected 111 games for the BR-BC agent and 113 games for the piKL3 agent. The numbers are slightly different for the two agents because some games terminated unexpectedly due to platform-related technical reasons. The average score and the percentage of perfect games for each agent are shown in Table 3. A perfect game is one where the team achieves the full score. Although the improvement may seem small numerically, the mechanics of Hanabi make it increasingly difficult to improve as the score gets closer to a perfect 25. In RL training, for example, learning a 20-point policy from scratch takes roughly the same amount of time and data as improving that policy to a 21-point one.
piKL3 outperformed BR-BC in terms of both average score and percentage of perfect games. Although the statistical significance of both results is somewhat limited given the amount of data (p = 0.097 and p = 0.058, respectively), both are strongly suggestive and consistent with our initial hypothesis. It is also encouraging to see that piKL3 can achieve more perfect games with the experts. Experienced human players are particularly excited about perfect games as they are often significantly harder than getting 23 or 24 in a given deck.
Due to the limitation on budget and the overhead of managing experiments with humans, we were unable to collect enough games to perform ablations for each component of piKL3, nor to include more baselines. In this experiment, we focused on demonstrating the effectiveness of this combination compared to the popular BR-BC baseline. In practice, researchers may use any combination of the three components of piKL3 based on the properties and challenges of their specific domains.
Most existing works targeting human-AI coordination in Hanabi (Hu et al., 2021c; Cui et al., 2021a) do not directly use human data. They get around 16 points with a similar but slightly stronger πBC while piKL3 and BR-BC get around 23. Therefore, despite being interesting, it would not be a fair comparison to include them. Previously, Hu et al. (2020) also tested their agent with human players. However, those results are not directly comparable as they use a different scoring method that keeps all the points when losing all life tokens. Additionally, the population of the human testers has a profound impact on the numerical results.
7 CONCLUSION AND FUTURE WORK
We present piKL3, a three-step algorithm centered around the idea of regularizing search, imitation learning and reinforcement learning towards a policy learned from human data. We performed two large-scale experiments with human players in the Hanabi benchmark to demonstrate the effectiveness of this method and its superiority over the popular baseline of training an ordinary best response to a plain imitation policy. We also for the first time report human experts’ perhaps surprisingly low performance on zero-shot ad-hoc team play with a diverse population.
The main limitation of this method is that it requires large amounts of data to learn the initial human policy used as the anchor and blueprint in piKL search. Therefore, an interesting future direction is to extend this method to more domains, especially ones with less high-quality human data. Another direction would be to create personalized regularization that adapts to each individual player in repeated games in an attempt to better model them.
2. What are the strengths of the proposed approach, particularly in using piKL as the human policy?
3. Do you have any concerns or minor questions about the method, such as capturing expertise levels or conducting ablation studies?
4. How does the reviewer assess the generalizability of the proposed method to other domains, and what adaptations might be necessary?
5. Are there any related works that the reviewer thinks should be considered and compared to, such as Wang et al.'s Co-GAIL?
6. How would you rate the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a framework for training RL agents to collaborate with human players in 2-player Hanabi games. The key challenge of training such a policy is that human data is finite and fixed, which means that there needs to be a human-like behavior policy for the RL to learn to collaborate with. Prior work shows that such human-like policy trained with naive behavior cloning may make mistakes that can be exploited by the RL agent, resulting in the RL agents exhibiting strategies that don't mesh well with real human players. The paper proposes to use piKL, a prior search-based method that generates strong yet human-like plays, as the human policy to train the RL policies with. On the side of training the human policy, the method introduces (1) an iterative training scheme to improve piKL (the human policy) by feeding its successful play back to its dataset and (2) a way to learn human policies with different expertise levels by tuning the regularization strength. On the side of RL training with the human policy in-the-loop, to prevent RL overfitting to the human policy, the method proposes to further regularize the trained human policy by preferring “human-like” actions instead of greedily choosing the best move. Finally, the method also proposes to augment the RL policy with a search-based strategy to guard against catastrophic failure caused by out-of-distribution moves by real human players. The method is evaluated primarily via playing with real human players. The authors built a customized game server and solicited players of varying expertise level to play against the trained agent. The paper shows that the proposed method is able to outperform the RL policies trained with naive BC-based human policies.
Strengths And Weaknesses
Overall I quite like the spirit of this paper — for human-machine collaboration, human data is inherently scarce, especially for methods that rely on massive simulation and trial-and-error imitation learning. Training a human-like policy from a fixed set of human demonstration to generate reactive response for RL learning seems to be a good strategy. The proposed method is also conceptually simple — use a constrained BC method to plug “holes” in the human policy to prevent the RL learner from exploiting these errors to develop un-human behaviors. Finally, the scale of human study is laudable.
I don’t have major concerns. A few minor ones:
Does \lambda truly capture the level of expertise in eq.2? Intuitively it’s an interpolation between “optimal play” and “human-like” play instead of different expertise levels. Besides, how much does training the policy against these policies matter in practice? There doesn’t seem to be an ablation study on this.
The general recipe for learning human-like behavior policy to train RL collaborators seems to be general and can be applied to other domains. The authors also seem to agree with this point in the “future work” section. The authors should discuss which aspect of the proposed method needs to be adapted in order to accommodate new domains.
A piece of related work that’s missing from the paper is Wang et al. [1]. They explored a similar venue of idea, including elements such as learning a human-like behavior policy to train RL collaborators and modeling diverse collaboration strategies, albeit in a robotics domain. It’d be great if the authors could compare and contrast their work with [1].
[1] Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration, Wang et al., 2021
Clarity, Quality, Novelty And Reproducibility
Quality: great execution of a conceptually simple idea. Impressive empirical evaluation. Clarity: great writing. Originality: the problem setting is rarely studied and seems to be important for the future research of human-AI collaboration. |
ICLR | Title
Causal Knowledge Transfer from Task Affinity
Abstract
Recent developments in deep representation models through counterfactual balancing have led to a promising framework for estimating Individual Treatment Effects (ITEs). While Randomized Control Trials are vital to the understanding of causal effects, they are sometimes infeasible, costly, or unethical to conduct. Here, we focus on transferring the causal knowledge acquired in prior experiments to new scenarios where only limited data is available. We first provide regret bounds on the counterfactual loss and ITE error of the target task indicating the transferability of causal knowledge. We also observe that the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before. The aforementioned task distance is then used to transfer causal knowledge from the closest of all the available previously learned tasks to the target scenario. Empirical studies are provided for various datasets demonstrating that the proposed symmetrized task distance is strongly related to the estimation of the counterfactual loss. Our results indicate that transferring causal knowledge reduces the amount of required data by up to 95% when compared to training from scratch.
N/A
Recent developments in deep representation models through counterfactual balancing have led to a promising framework for estimating Individual Treatment Effects (ITEs). While Randomized Control Trials are vital to the understanding of causal effects, they are sometimes infeasible, costly, or unethical to conduct. Here, we focus on transferring the causal knowledge acquired in prior experiments to new scenarios where only limited data is available. We first provide regret bounds on the counterfactual loss and ITE error of the target task indicating the transferability of causal knowledge. We also observe that the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before. The aforementioned task distance is then used to transfer causal knowledge from the closest of all the available previously learned tasks to the target scenario. Empirical studies are provided for various datasets demonstrating that the proposed symmetrized task distance is strongly related to the estimation of the counterfactual loss. Our results indicate that transferring causal knowledge reduces the amount of required data by up to 95% when compared to training from scratch.
1 INTRODUCTION
One of the most remarkable characteristics of humans is their ability to transfer causal knowledge learned in one scenario to other similar situations. It is highly desirable for neural networks to have the same ability because of its numerous potential applications. For instance, mutations of old viruses often necessitate the development of new vaccines for treatment. To study the effect of new vaccine candidates, researchers need to collect data from randomized control trials, which is time-consuming and expensive (Kaur & Gupta, 2020). If the mutated viruses can be related to old ones by a measure of similarity, then the effects of vaccine candidates can be quickly calculated based on this similarity with a small amount of data collected for the new scenario. In other words, transfer learning methods can help research on the effects of various treatments (e.g., applications in medicine, personal training, social policy) progress much faster (Ebbehoj et al., 2022).
Recently, there has been significant progress in transfer learning, especially in computer vision and natural language processing applications (Wang & Deng, 2018; Alyafeai et al., 2020; Pan & Yang, 2010; Zhuang et al., 2021). While this is very promising, a challenge for transferring causal knowledge arises from statistical learning models' vulnerability to non-causal correlations. For example, camels and horses often appear in images with different background colors, and a classifier may learn to use these colors to classify these objects (Arjovsky et al., 2019; Geirhos et al., 2019; Beery et al., 2018). A more critical challenge for transferring causal knowledge is that, in practice, the performance of the trained model for estimating ITEs can never be computed. This is because counterfactual data can never be collected, as shown in Figure 1. This problem is known in the literature as the fundamental problem of causal inference (Rubin, 1974; Holland, 1986). For example, to compute the effect of vaccination on an individual at some given time, she/he must both be vaccinated and not be given the vaccine, which is obviously impossible. This contrasts with conventional supervised learning problems, where practitioners often use a separate validation set to estimate the true accuracy.
The aforementioned challenges imply that much attention must be paid to selecting the appropriate source model to transfer from in causal knowledge transfer. Additionally, the similarity of scenarios must be calculated using a distance related to variations of counterfactual loss between scenarios. This motivates our work in this paper, where we propose a task distance between causal inference scenarios. The task distance is then used for transferring causal knowledge, as shown in Figure 2. Our contributions can be summarized as follows:
1. For causal transfer learning scenarios, we establish new (to the best of our knowledge) regret bounds for the learning of counterfactual outcomes and ITEs for target tasks. These bounds prove the feasibility of transferring causal knowledge.
2. We observe a special property (symmetry) of causal inference tasks. Specifically, the absolute value of ITEs must be invariant to relabeling the treatment groups under the action of the symmetric group. Subsequently, we propose an intuitively appealing symmetrized Fisher task distance for which this property holds. While we construct the proposed task distance to satisfy this property mathematically, we also provide empirical evidence that it successfully lends itself to this symmetry in Section 5.3.
3. We provide both theoretical (e.g., Theorem 4) and empirical evidence (e.g., Figure 3) supporting the relevance of the symmetrized Fisher task distance to transferring causal knowledge. Through extensive experiments, we demonstrate that the proposed task affinity is highly correlated with the loss in estimating counterfactuals (not measurable in practice).
4. We present a representative set of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are derived from known causal relations in social sciences, physics, and mathematics.
5. We provide empirical evidence based on the above datasets that our methods can compute the ITEs for the target task with significantly fewer (up to 95% reduction) data points compared to the case where transfer learning is not performed.
2 MATHEMATICAL BACKGROUND
We first establish the notation and briefly review the required mathematical background.
2.1 CAUSAL INFERENCE
Let X ∈ X ⊂ R^d be the covariates (features), T ∈ {0, . . . , M} be the treatment, and Y ∈ Y ⊂ R be the factual (observed) outcome. For every j ∈ {0, . . . , M} we define Y_j to be the Potential Outcome that would have been observed if only treatment T = j, j ∈ {0, 1, · · · , M} was assigned. For example, in the medical context, X is the individual information (e.g. weight, heart rate, etc.), T is the treatment assignment (e.g., t = 0 when the individual did not receive a vaccine, and t = 1 when he/she did), and Y is the outcome (e.g. mortality data). A causal inference dataset is given by a set of factual observations D_F = {(x_i, t_i), y_i}_{i=1}^N, where N is the number of samples. We present our results for M = 1 (binary case) in the sequel. However, our approach immediately applies to any positive integer M < ∞. In the binary case, the individuals who received t = 0 (respectively t = 1) are referred to as the control group (respectively the treatment group). Definition 1 (ITE). The Individual Treatment Effect, also referred to as the Conditional Average Treatment Effect (CATE) (Imbens & Rubin, 2015), is defined as:
∀x ∈ X, τ(x) = E[Y_1 − Y_0 | X = x]    (1)
We assume that our data generation process respects overlap (i.e. 0 < p(t = 1|x) < 1 for all x ∈ X) and conditional unconfoundedness (i.e. (Y_1, Y_0) ⊥⊥ T | X) (Robins, 1987). These assumptions are sufficient conditions for the ITE to be identifiable (Imbens, 2004). We also assume that a true underlying function f(x, t) describes the causal relationship. By definition τ(x) = f(x, 1) − f(x, 0). Let f̂(x, t) denote a hypothesis that estimates the true function f(x, t). The ITE function can then be estimated as τ̂(x) = f̂(x, 1) − f̂(x, 0). We let l_f̂(x, t, y) denote a loss function that quantifies the performance of f̂(·, ·). A possible example is l_f̂(x, t, y) = (y − f̂(x, t))^2 (L2 loss).
Definition 2 (Factual Loss). For a hypothesis f̂ and a corresponding loss function l_f̂ we define the factual loss as
ϵ_F(f̂) = ∫_{X×{0,1}×Y} l_f̂(x, t, y) p(x, t, y) dx dt dy    (2)
We also define the factual loss for the treatment (t = 1) and control (t = 0) groups respectively as:
ϵ_F^{t=1}(f̂) = ∫_{X×Y} l_f̂(x, 1, y) p(x, y | t = 1) dx dy    (3)
and
ϵ_F^{t=0}(f̂) = ∫_{X×Y} l_f̂(x, 0, y) p(x, y | t = 0) dx dy    (4)
Definition 3 (Counterfactual Loss). The counterfactual loss is defined as (Shalit et al., 2016)
ϵ_CF(f̂) = ∫_{X×{0,1}×Y} l_f̂(x, t, y) p(x, 1 − t, y) dx dt dy    (5)
Intuitively, the counterfactual loss corresponds to the expected loss value in a parallel universe where the roles of the control and treatment groups are exchanged. Definition 4. We define the Expected Precision in Estimating Heterogeneous Treatment Effect (PEHE) (Hill, 2011) as
ε_PEHE(f̂) = ∫_X (τ̂(x) − τ(x))^2 p(x) dx.    (6)
The value ε_PEHE is often used as the performance metric for estimation of ITEs, as in (Shalit et al., 2016; Hill, 2011; Johansson et al., 2016). Small factual and counterfactual losses are sufficient conditions for causal models to have good performance (i.e., low ε_PEHE) (Shalit et al., 2016). Intuitively, this measures whether a model performs well in predicting the outcome both when the treatment is administered and when it is not. A lower ε_PEHE also implies that the model is good at predicting the ITEs. We note that the above measures of performance are not directly accessible in causal inference scenarios, because the calculation of the ground truth ITE values requires access to counterfactual values. In this light, we may resort to selecting a hypothesis that optimizes an upper bound instead, such as the one given in the following section (see Equation 8).
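For concreteness, the numpy sketch below computes the plug-in ITE estimate and an empirical version of Eq. 6 on data where the potential-outcome functions are known, as in semi-synthetic benchmarks; the toy outcome functions are our own illustrative choice.

import numpy as np

def ite_hat(model, x):
    # Plug-in ITE estimate: tau_hat(x) = f_hat(x, 1) - f_hat(x, 0).
    return model(x, 1) - model(x, 0)

def empirical_pehe(model, x, tau_true):
    # Empirical version of Eq. 6: average squared ITE error over samples.
    return float(np.mean((ite_hat(model, x) - tau_true) ** 2))

# Toy example with known potential outcomes f(x, t) = x.sum(axis=1) * (1 + t):
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 5))
tau_true = x.sum(axis=1)                                            # f(x, 1) - f(x, 0)
biased_model = lambda x, t: x.sum(axis=1) * (1 + t) + 0.1 * t       # slightly biased estimator
print(empirical_pehe(biased_model, x, tau_true))                    # approximately 0.01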
2.2 TARNET AND COUNTERFACTUAL REGRESSION
TARNet (Shalit et al., 2016) has proven to be a successful framework for counterfactual balancing to estimate ITEs. It is defined as a pair of functions (Φ, h), where Φ : R^d → R^l is a representation function of the features and h : R^l × {0, 1} → R is a function learning the two potential outcome functions in the representation space. The hypothesis learning the true causal function is f̂(x, t) = h(Φ(x), t). We denote the loss function l_f̂ by l_(Φ,h). TARNet uses the integral probability metric (IPM), defined as
IPM_G(p, q) := sup_{g ∈ G} | ∫_S g(s) (p(s) − q(s)) ds | ,    (7)
where the supremum is taken over a given class of functions G, to measure the distance between distributions. It is a consequence of the Kantorovich-Rubinstein duality (Villani, 2009) that the IPM reduces to the 1-Wasserstein distance when G is the set of 1-Lipschitz functions, as is the case in our numerical experiments.
TARNet (Shalit et al., 2016) estimates the counterfactual outcomes by minimizing:
L(Φ, h) = (1/N) Σ_{i=1}^{N} w_i · l_(Φ,h)(x_i, t_i, y_i) + α · IPM_G( {Φ(x_i)}_{i: t_i = 0} , {Φ(x_i)}_{i: t_i = 1} )    (8)
where w_i = t_i / (2u) + (1 − t_i) / (2(1 − u)) and u = (1/N) Σ_{i=1}^{N} t_i. The parameter α is referred to as the balancing weight, since it controls the trade-off between the similarity of the representations in the latent domain and the performance of the model on the factual data.
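To make Equation 8 concrete, the following Python sketch (a minimal illustration, not the implementation used in our experiments) evaluates the objective for user-supplied callables phi, h0, and h1, which stand in for the learned representation and the two outcome heads. Because computing an exact IPM is itself an optimization problem, the sketch substitutes the distance between the group means of the representations, i.e., it restricts G to linear functions; the experiments in this paper use the 1-Wasserstein distance instead.

import numpy as np

def tarnet_objective(phi, h0, h1, x, t, y, alpha=1.0):
    # Weighted factual loss (squared error) plus a balancing penalty, cf. Eq. (8).
    u = t.mean()
    w = t / (2 * u) + (1 - t) / (2 * (1 - u))         # re-weighting w_i
    r = phi(x)                                        # latent representations Phi(x_i)
    y_hat = np.where(t == 1, h1(r), h0(r))            # factual predictions h(Phi(x_i), t_i)
    factual = np.mean(w * (y - y_hat) ** 2)
    # Crude linear-IPM surrogate: distance between treated and control mean representations.
    ipm = np.linalg.norm(r[t == 1].mean(axis=0) - r[t == 0].mean(axis=0))
    return factual + alpha * ipm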
3 TRANSFERABILITY OF CAUSAL KNOWLEDGE
In this section, we use superscripts Ta and Sr to denote quantities related to the target and source tasks, respectively. Suppose that we have a model (Φ^Sr, h^Sr) trained on a source causal inference task. We apply the source model to a different target task. For notational simplicity, we denote P(Φ(X)|T = t) by P(Φ(X_t)) for t ∈ {0, 1}. We are interested in the performance of a well-trained source model when applied to a target task, i.e.
ϵ_PEHE^{Ta}(Φ^Sr, h^Sr) = ∫_{x ∈ X} ( τ^Ta(x) − [ h^Sr(Φ^Sr(x), 1) − h^Sr(Φ^Sr(x), 0) ] )^2 P(X^Ta = x) dx
where τTa is the individual treatment effect function of the target, Φ is the representation learning function, and h is the potential outcomes hypothesis. While it is difficult to estimate, this error can have an upper bound that only involves obtainable quantities if we make reasonable assumptions about the relationship between the source and target task (defined in the Assumption 4 below). We make the following assumptions throughout this section:
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (We borrow this assumption from (Shalit et al., 2016)),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B_Φ^{Ta} such that the function r ↦ (1 / B_Φ^{Ta}) · ℓ_{Φ,h}^{Ta}(Ψ(r), t, y) belongs to G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Note that the causal knowledge transferability assumption implies that the outcome distributions (causal effects) of treatment t in source and target tasks need to be similar in order for transfer learning to be beneficial.
Our main Theorem guarantees that causal knowledge can be transferred and is proved using two Lemmas that are stated below. These lemmas provide upper bounds on the factual and counterfactual losses for transferring causal knowledge and may be by themselves of independent interest. The proofs of these Lemmas and that of the Theorem are provided in the Appendix 8.5. Lemma 1. (Factual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The factual losses of any model (Φ, h) on source and target task satisfy:
∀t ∈ {0, 1}: ϵ_F^{Ta,t}(Φ, h) ≤ ϵ_F^{Sr,t}(Φ, h) + B_Φ^{Ta} · IPM_G( P(Φ(X_t^{Ta})), P(Φ(X_t^{Sr})) ) + δ
Lemma 2. (Counterfactual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The counterfactual losses of any model (Φ, h) on source and target task satisfy:
ϵ_CF^{Ta}(Φ, h) ≤ ϵ_F^{Sr,t=1}(Φ, h) + ϵ_F^{Sr,t=0}(Φ, h)
+ B_Φ^{Ta} · IPM_G( P(Φ(X_1^{Ta})), P(Φ(X_1^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ) + 2δ
The above lemmas quantify the relationship between causality and transfer learning. In particular Lemma 2 bounds the inherently non-observable counterfactual loss by tractable quantities. Theorem 1. (Transferability of Causal Knowledge) Suppose that Assumptions 1-4 hold. The performance of source model on target task, i.e. ϵTaPEHE(Φ Sr, hSr), is upper bounded by:
ϵ_PEHE^{Ta}(Φ^Sr, h^Sr) ≤ 2( ϵ_F^{Sr,t=1}(Φ^Sr, h^Sr) + ϵ_F^{Sr,t=0}(Φ^Sr, h^Sr)
+ B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_1^{Ta})), P(Φ^Sr(X_1^{Sr})) )
+ B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_0^{Ta})), P(Φ^Sr(X_0^{Sr})) )
+ B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_0^{Ta})), P(Φ^Sr(X_1^{Ta})) ) + 2δ )
Theorem 1 implies that good performance on the target task is guaranteed if (1) the source model has a small factual loss (the first and second terms in the upper bound) and (2) the distributions of the control and the treatment group features are similar in the latent domain (the remaining three terms in the upper bound). This upper bound provides us with a sufficient condition for transfer learning in causal inference scenarios, indicating the transferability of causal knowledge. Please note that these regret bounds can be applied to any transfer learning framework that involves a pair of tasks.
4 SYMMETRIZED TASK AFFINITY FOR CAUSAL INFERENCE TASKS
While these regret bounds indicate the transferability of causal knowledge between any pair of causal inference tasks, they do not provide a constructive way to choose the best source task to transfer from when multiple source tasks exist. The order of the performance of different models and that of their upper bounds are not necessarily the same. Hence, we propose a label-invariant task affinity that finds the closest source task. Moreover, this task affinity satisfies the symmetry property (see Section 4.3) of causal inference tasks. Our new task affinity is built on the Fisher task distance (FTD). We first give a brief introduction to the FTD, and then propose a symmetrized Fisher task distance for causal inference tasks.
4.1 TASK REPRESENTATION
The ordered pair of a causal task T and its dataset D = (X, T) will be denoted by (T, D), where the dataset D itself consists of pairs of covariates and their assigned treatments.
We will mathematically formalize a sufficiently well-trained deep network representing a causal task-dataset pair (T , D) in the Appendix 8.7. From now on, we assume that all the previously trained models are sufficiently well-trained networks.
4.2 FISHER TASK DISTANCE
Here, we recall the definition of the Fisher Information matrix for a neural network, and well-defined Fisher task distance (Achille et al., 2019; Le et al., 2021b; 2022b). Definition 5 (Fisher Information Matrix). For a neural network Nθa with weights θa trained on data Da, a given test dataset Db and the negative log-likelihood loss function L(θ,D), the Fisher Information matrix is defined as:
Fa,b = ED∼Db [ ∇θL(θa, D)∇θL(θa, D)T ] = −ED∼Db [ H ( L(θa, D) )] , (9)
where H is the Hessian matrix, i.e., H(L(θ, D)) = ∇_θ^2 L(θ, D), and the expectation is taken w.r.t. the data. It can be proved that the Fisher Information Matrix is asymptotically well-defined (Le et al., 2022b). In practice, we approximate the above with the empirical Fisher Information matrix:
F̂_{a,b} = (1 / |D_b|) Σ_{d ∈ D_b} ∇_θ L(θ_a, d) ∇_θ L(θ_a, d)^T .    (10)
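As an illustration of Equation 10, the following Python sketch (a simplified, hypothetical example) computes the diagonal of the empirical Fisher Information matrix for a linear model with squared loss, where the per-sample gradient is available in closed form; for a TARNet-type network this gradient would instead come from automatic differentiation.

import numpy as np

def empirical_fisher_diag(theta, X, Y):
    # Per-sample gradient of the squared loss (theta.x - y)^2 is 2*(theta.x - y)*x.
    grads = 2.0 * (X @ theta - Y)[:, None] * X       # one gradient row per sample
    return np.mean(grads ** 2, axis=0)               # diagonal of Eq. (10)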
Here, the empirical Fisher Information Matrix is positive semi-definite because it is the summation of positive semi-definite terms, regardless of the number of samples. For completeness, we next review the task affinity score (Le et al., 2021b). Definition 6 (Task Affinity Score (TAS)). Let (Ta, Da) and (Tb, Db) respectively denote the source and target task-dataset pairs. Let Da = Dtra ∪ Dtea (respectively Db = Dtrb ∪ Dteb ) with Dtra (respectively Dtrb ) and D te a (respectively D te b ) be the training and test sets of dataset Da (respectively Db), where the training for Ta is performed using the source representation network Nθa . Consider the Fisher information matrix H ( L(θ,Da) ) of Nθa with test data D te a . Let Fa,a be the diagonal
matrix of the absolute values of the elements on the main diagonal of H(L(θ, D_a)), normalized to have unit trace. Let F_{a,b} be constructed in an analogous manner but using the training data D_b^{tr} (instead of D_a^{te}). The TAS from the source task T_a to the target task T_b is defined as:
s[a, b] = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,b}^{1/2} ‖_F    (11)
It can be proved that 0 ≤ TAS ≤ 1, where TAS = 0 denotes extreme similarity and TAS = 1 indicates extreme dissimilarity. In Appendix 8.5, we prove under stringent assumptions that the order of the TAS between candidate source tasks and the target task is preserved when a parallel universe experiment is performed in which the roles of the control and treatment groups are exchanged.
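A minimal sketch of Equation 11 for diagonal Fisher estimates (for instance, those produced by a routine such as empirical_fisher_diag above), stored as 1-D arrays and normalized to unit trace as required by Definition 6; the unit-trace normalization is also what keeps the score in [0, 1].

import numpy as np

def task_affinity_score(f_aa, f_ab):
    # Normalize the absolute diagonal entries to unit trace, then apply Eq. (11).
    f_aa = np.abs(f_aa) / np.abs(f_aa).sum()
    f_ab = np.abs(f_ab) / np.abs(f_ab).sum()
    return np.linalg.norm(np.sqrt(f_aa) - np.sqrt(f_ab)) / np.sqrt(2)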
4.3 LABEL-INVARIANT TASK AFFINITY
Symmetry Property of Causal Inference Tasks Causal inference tasks can be viewed as consisting of multiple regression problems, one for each treatment group. Given a source task, if we swap the treatment labels (i.e., 0 to 1 and 1 to 0), the treatment effect (i.e., E[Y_1 − Y_0 | X]) will be negated. Consequently, the unsymmetrized task distance (Le et al., 2022b) between the original task and the permuted task can be very large. However, the original model does not need to be retrained for transfer learning, as we only need to permute the roles of the output layers of the model to predict the individual treatment effects correctly for each group. In other words, the causal distance between these two permuted tasks must be zero. The label-invariant task affinity proposed below lends itself to this property of causal inference tasks.
Our causal inference tasks are represented by TARNet-type networks. We also restrict to the case where all causal tasks under consideration have the same number of treatment labels M (e.g., M = 2). Let (T_a, D_a) (respectively (T_b, D_b)) with D_a = (X_a, T_a, Y_a) (respectively D_b = (X_b, T_b, Y_b)) be the source (respectively target) causal inference task. Clearly T_a, T_b ∈ {0, 1, . . . , M}. Consider the symmetric group S_{M+1} consisting of all permutations of the labels {0, 1, . . . , M}. For σ ∈ S_{M+1}, let T_{σ(b)} denote the permutation of the target treatment labels under the action of σ. Let d_σ = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,σ(b)}^{1/2} ‖_F ; then
s_sym[a, b] = min_{σ ∈ S_{M+1}} d_σ
is the label-invariant task affinity distance between the causal tasks T_a and T_b (the pseudocode is provided in Appendix 8.3). It follows from the above definition that the order of closeness of tasks under the label-invariant task affinity is robust to the architectural choice of the representation networks, since the task affinity distance has been shown to enjoy this property (Le et al., 2022b).
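The minimization over label permutations is straightforward to implement; the following Python sketch illustrates it for the binary case. Here fisher_fn is a hypothetical helper that evaluates the diagonal Fisher of the trained source model on the (possibly relabelled) target data, and task_affinity_score is the routine sketched in Section 4.2.

from itertools import permutations
import numpy as np

def symmetrized_tas(fisher_fn, source_model, Xb, Tb, f_aa, labels=(0, 1)):
    # min over relabelings sigma in S_{M+1} of the TAS in Eq. (11).
    best = np.inf
    for sigma in permutations(labels):
        Tb_perm = np.array([sigma[t] for t in Tb])    # relabelled target treatments
        f_ab = fisher_fn(source_model, Xb, Tb_perm)   # diagonal Fisher on relabelled data
        best = min(best, task_affinity_score(f_aa, f_ab))
    return best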
5 EXPERIMENTAL RESULTS
We first describe the datasets we have used for our empirical studies. Subsequently, we present empirical results about quantifying the gains of transfer learning for causal inference, demonstrating the strong correlation between the proposed task distance and the counterfactual loss, and showing that the proposed task distance identifies the symmetries within causal inference tasks.
5.1 CAUSAL INFERENCE DATASETS
We present a representative family of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are motivated by known causal structures in diverse areas such as social sciences, physics, health, and mathematics. Table 1 provides a brief description of the datasets used in our studies. A more detailed description is provided in Appendix 8.4.1. For each dataset, a number of corresponding causal inference tasks exist, which can be used to study transfer learning scenarios. Please note that we can only access the counterfactual data of the synthetic/semi-synthetic datasets (i.e., IHDP, RKHS, Movement, Heat). We are not in possession of the counterfactual data of the real-world datasets (i.e., Twins, Jobs).
5.2 COMPARISON OF PERFORMANCE WITH/WITHOUT TRANSFER LEARNING
Here we briefly discuss our experiments quantifying the impact of transferring causal knowledge on the size of the required training data. In this experiment, we use the Heat (Physics), Movement (Physics), IHDP, and RKHS datasets, for which the counterfactual outcomes are available. We first fix a target causal inference task. For a wide range of balancing weights (α), we record the values of εPEHE for the training of the model from scratch while increasing the size of the training datasets (measured at the end of the training process). In this process, the training datasets are slowly expanded such that smaller training sets are subsets of larger ones. We then report the minimum εPEHE achieved for each dataset size. For the target task, we identify the closest source task and repeat the above process with a small amount of target task data. We then compare the performance with and without transfer learning to quantify the amount of data needed by transfer learning models to achieve the best performance attainable without transferring causal knowledge. The results are summarized in Table 2, which demonstrates that transferring causal knowledge decreases the required amount of training data in this setting by between 75% and 95%.
5.3 TASK DISTANCE AND COUNTERFACTUAL LOSS
Here, we show empirically the strong correlation between the task distance (which only uses available data) and the counterfactual loss (which is impossible to measure exactly except for synthetic datasets). Figure 3 shows, for different balancing weights α (see Equation 8), the correlation between task distance and counterfactual error on the IHDP, RKHS, Movement (Physics), and Heat (Physics) datasets, for which counterfactuals are known. It is intuitively appealing and empirically observed that the task distance and the counterfactual loss have a strong correlation: the model of a source task has a smaller counterfactual loss on the target data if the target task is closer (in terms of the proposed task distance). Note that the points in Figure 3 for different values of α (i.e., the balancing weight) are extremely close. This shows that the proposed task affinity not only indicates the counterfactual loss but is also robust to changes of hyper-parameters. This is a highly desirable property, especially in causal inference scenarios where no validation data can be accessed to cross-validate the hyper-parameters. Our numerical results for the Jobs and Twins datasets verify that the proposed task distance can capture the symmetries within causal inference problems. We flip treatment labels (0 and 1) with probability p (without any changes to the features and the outcomes) independently for each control and treatment data point. In Figure 4, we depict the trend of the symmetrized task distance between the original and the altered dataset as p varies over [0, 1]. The symmetry of the task distance is evident (with some deviation due to limited training data for calculating the task distance). The altered dataset with p = 1 is the closest to the original dataset (as it should be), since we have completely flipped the treatment assignments. The altered dataset with p = 0.5 is the furthest (as it should be), since we have randomly shuffled the control and the treatment groups. For all datasets, it can also be observed that the task distance trends are robust to variations in the balancing weight.
6 RELATED WORK
In the setting of transfer learning (Pan & Yang, 2010; Zhuang et al., 2021), prior learned models are used to increase the learning efficiency and decrease the required data. For instance, the parameters from a trained model may be used as initialization values for the target task. Many approaches in transfer learning (Thrun & Pratt, 1998; Blum & Mitchell, 1998; Silver & Bennett, 2008; Razavian et al., 2014; Finn et al., 2016; Fernando et al., 2017; Rusu et al., 2016) have been proposed, analyzed and applied in various machine learning applications. Transfer learning techniques inherently assume that prior knowledge in the selected source model helps learn a target task. In other words, these methods often do not consider the selection of the base task to perform knowledge transfer. Consequently, in some rare cases, transfer learning may even degrade the performance of
the model Standley et al. (2020). In order to avoid potential performance loss during knowledge transfer to a target task, task affinity (or task similarity) is considered as a selection method that identifies a group of closest base candidates from the set of the prior learned tasks. Task affinity has been recently investigated and applied to various domains, such as transfer learning (Zamir et al., 2018; Dwivedi & Roig, 2019; Achille et al., 2019; Wang et al., 2019), neural architecture search (Le et al., 2021a; 2022a; Le et al., 2021), few-shot learning (Pal & Balasubramanian, 2019; Le et al., 2022b), multi-task learning (Standley et al., 2020), and continual learning (Kirkpatrick et al., 2017; Chen et al., 2018). The related prior learned tasks are identified with similarity measures and then employed for knowledge transfer. Task affinity is inherently a non-commutative measure as it may be straightforward to transfer the knowledge from a more comprehensive task to a simpler task than the other way around (Le et al., 2021b).
While transfer learning and task affinity have been investigated in numerous application areas, their applications to causal inference have not been fully developed. The Neyman-Rubin Causal Model and Pearl’s Do-calculus are two popular frameworks for causal studies based on different perspectives. A central question in these frameworks is determining conditions for identifiability of causal quantities such as Average and Individual Treatment Effects. Past work considered estimators for the Average Treatment Effect based on various methods such as Covariate Adjustment (a.k.a. back-door adjustment) (Pearl, 2009; Rubin, 1978), weighting methods such as those utilizing propensity scores (Rosenbaum & Rubin, 1983), and Doubly Robust estimators (Funk et al., 2011). With the emergence of Machine Learning (ML) techniques, more recent approaches to causal inference include the applications of decision trees (Wager & Athey, 2015), Gaussian Processes (Alaa & van der Schaar, 2017), and Generative Modeling (Yoon et al., 2018) to ITE estimation. In particular, deep neural networks have successfully learned ITEs and estimated counterfactual outcomes by data balancing in the latent domain (Johansson et al., 2016; Shalit et al., 2016). It is important to note that the transportability of causal graphs is another closely related field that has been well-studied in the causality literature (Bareinboim & Pearl, 2021). It studies transferring knowledge of causal relationships in Pearl’s do-calculus framework. In contrast, in this paper we are interested in transferring knowledge of Individual Treatment Effects from a source task to a target task in the Neyman-Rubin framework using representation learning.
7 CONCLUSION
In this paper, we provided theoretical analysis proving the transferability of causal knowledge and proposed a method for causal transfer learning based on a task affinity framework. To this end, we constructed a new task distance suitable for measuring the similarity of causal inference tasks. Given a new causal inference task, we transferred the causal knowledge from the closest available trained task. Extensive simulations on a representative family of datasets provide empirical evidence demonstrating the gains of our method and the efficacy of the proposed symmetrized task distance. Reductions of as much as 95% in the amount of training data required for new scenarios were observed.
8 APPENDIX
Here, we provide a simple example to help understand causal inference datasets, the pseudocode, the dataset descriptions, the theorems and their proofs, and other supplementary materials.
8.1 REPRODUCIBILITY STATEMENT
In the supplementary material, we have included our codes that implement TARNet and the proposed task distance.
8.2 CAUSAL INFERENCE: AN EXAMPLE
Let X ∈ X be the features (e.g., age, sex, etc.), the treatment variable T ∈ {A, B} be the indicator representing whether the subject received vaccine A or B, and Y ∈ Y indicate the mortality outcome. The main challenge of causal inference arises from the absence of counterfactual observations. We do not observe the outcomes of individuals upon receiving treatment A if they have received treatment B (and vice versa). The subjects who received vaccine A may be significantly different from those who received vaccine B. This is commonly called selection bias (e.g., elderly people are more likely to receive treatment A than young people). Thus, estimating the counterfactual effects is challenging due to this imbalance between the treatment groups. Let f̂(x, t) be a hypothesis modeling the outcome for an individual x if he/she received treatment t. The factual loss is defined as
ϵF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, t, y)dxdtdy (12)
By Bayes' rule, we can write the factual loss as
ϵF (f̂) = ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)p(t = A)dxdy
+ ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)p(t = B)dxdy
= p(t = A) ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)dxdy
+ (1− p(t = A)) ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)dxdy
= p(t = A)ϵt=AF (f̂) + (1− p(t = A)) ϵt=BF (f̂)
where we define the factual loss for the group who received vaccine A to be
ϵt=AF (f̂) = ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)dxdy (13)
Respectively, the factual loss for the group who received vaccine B is
ϵt=BF (f̂) = ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)dxdy (14)
Let us now consider a parallel universe where the treatment assignments are flipped (those who received vaccine A receive vaccine B and vice versa). The performance of our hypothesis f̂ in this parallel universe is the counterfactual loss, defined as:
ϵCF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, 1− t, y)dxdtdy (15)
8.3 PSEUDOCODE FOR SYMMETRIZED TASK DISTANCE
The pseudocode for our proposed task affinity is given in Algorithm 1.
Algorithm 1: Label-Invariant Task Affinity Score for Causal Inference
Data: Source tasks: S = {(X_1, T_1, Y_1), . . . , (X_m, T_m, Y_m)}, Target task: (X_t, T_t, Y_t)
Input: TARNet models N_θ1, N_θ2, . . . , N_θm
Output: TARNet model for the target task t

Function TAS(X_a, T_a, X_b, T_b, N_θa):
    Compute F_{a,a} using N_θa with (X_a, T_a)
    Compute F_{a,b} using N_θa with (X_b, T_b)
    return s[a, b] = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,b}^{1/2} ‖_F

Function Main:
    ▷ Find the closest task in S
    for i = 1, 2, . . . , m do
        Train N_θi for source task i using (X_i, T_i, Y_i)
        Compute the distance from source task i to the target task t:
            s+_i = TAS(X_i, T_i, X_t, T_t, N_θi)
        Compute the distance from source task i to the target task t′, whose treatments are the inverted treatments of t:
            s−_i = TAS(X_i, T_i, X_t, 1 − T_t, N_θi)
        Symmetrized distance: s_sym,i = min(s+_i, s−_i)
    Closest source task: i* = argmin_i s_sym,i
    ▷ Causal Knowledge Transfer
    Finetune N_θi* with the target task’s data (X_t, T_t, Y_t)
    return N_θi*
8.4 DATASETS AND EXPERIMENTS DESCRIPTIONS
8.4.1 DATASETS
IHDP The IHDP dataset was first introduced by Hill (2011) based on real covariates available from the Infant Health and Development Program (IHDP), studying the effect of development programs on children. The features in this dataset come from a Randomized Control Trial, and the potential outcomes were simulated using Setting "B" in (Hill, 2011), hence the word semi-synthetic. The dataset consists of 747 individuals (139 in the treatment group and 608 in the control group), each with 25 features. Hill generated the potential outcomes with Y_0 ∼ N(exp(β^T (X + W)), 1), where W has the same dimension as X with all entries equal to 0.5, and Y_1 ∼ N(β^T (X + W) − ω, 1) with ω = 4. Here β is a 25-element vector of regression coefficients randomly sampled from a categorical distribution with support (0, 0.1, 0.2, 0.3, 0.4) and respective probabilities µ = (0.6, 0.1, 0.1, 0.1, 0.1). We refer to the dataset generated according to these parameters as the base dataset.
We retain the base dataset and introduce 9 new settings according to Table 3 by varying µ and ω. We also generate 10 new datasets for each setting, each consisting of 747 individuals (139 in the treatment group and 608 in the control group) by running the same process but with different random samples of the aforementioned Gaussian distribution.
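A minimal numpy sketch of this simulation (an illustrative reconstruction of the description above, not the exact generation script used for the released datasets):

import numpy as np

def simulate_ihdp_outcomes(X, mu=(0.6, 0.1, 0.1, 0.1, 0.1), omega=4.0, seed=0):
    # Response surface "B" of Hill (2011): beta entries drawn from {0, 0.1, 0.2, 0.3, 0.4}
    # with probabilities mu; W is the all-0.5 offset of the same shape as X.
    rng = np.random.default_rng(seed)
    beta = rng.choice([0.0, 0.1, 0.2, 0.3, 0.4], size=X.shape[1], p=list(mu))
    W = 0.5 * np.ones_like(X)
    y0 = rng.normal(np.exp((X + W) @ beta), 1.0)      # control potential outcome
    y1 = rng.normal((X + W) @ beta - omega, 1.0)      # treated potential outcome
    return y0, y1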
Jobs The original Jobs dataset (LaLonde, 1986) has 619 observations. The causal inference task is to learn the effect of participation or non-participation in a specific professional training program (participation corresponding to receiving treatment t = 1) on the success in landing a job in the following three years. We generate a family of related datasets by randomly reverting the treatment assignments of the original dataset with various probabilities p ∈ [0, 1]. Specifically, to generate a dataset, we first choose a probability value p ∈ [0, 1], and then alter each individual's original treatment assignment (i.e., 0 ↔ 1) with probability p. We choose the values
p ∈ {0 = 0/9, 1/9, 2/9, 3/9, 4/9, 5/9, · · · , 9/9 = 1}. Clearly, p = 0 corresponds to the original dataset, and p = 1 corresponds to all reverted treatment assignments. We choose the original Jobs dataset (LaLonde, 1986) as the base dataset for our experiments, as discussed in Section 8.4.2.
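The treatment-flipping step is simple; a possible numpy sketch (a hypothetical helper, with an arbitrary seed):

import numpy as np

def flip_treatments(T, p, seed=0):
    # Revert each binary treatment assignment independently with probability p.
    rng = np.random.default_rng(seed)
    flip = rng.random(len(T)) < p
    return np.where(flip, 1 - T, T)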
Twins The Twins dataset was first introduced by Louizos et al. (2017) based on the collected data about twins’ births in the United States from 1989 to 1991. It is assumed that twins share significant parts of their features. We consider whether one of the twins was born heavier than the other as the treatment assignment and if he/she died in infancy (mortality) as the outcome. We divide the twins into two groups: In the treatment (respectively control) group, we consider the outcome for the heavier (respectively lighter) twin as factual. In both groups, the outcome for the remaining twin is assumed to be counterfactual.
We first construct a base dataset by selecting a set of 2000 pairs of twins from the original dataset (Louizos et al., 2017). Then, each element is assigned to the treatment group according to a Bernoulli experiment with the probability of success q = 0.75.
Next, the base dataset is used to generate more datasets. In a manner analogous to that of the Jobs dataset, we generate a family of related datasets by randomly reverting the treatment assignments of the base dataset (0 ↔ 1) with corresponding probabilities p ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, · · · , 1}. For instance, to generate dataset i = 1, 2, · · · , 11, we let p_i = (i − 1)/10 and revert the individual treatment assignments in the base dataset according to a Bernoulli experiment with probability of success p_i. Clearly, p = 0 corresponds to the original dataset, while p = 1 corresponds to all treatment assignments being reverted.
RKHS We generate 100 Reproducing Kernel Hilbert Space (RKHS) datasets, each having 2000 data points. For each dataset, we start by generating the treatment and the control populations X1, X0 ∈ R4 respectively from Gaussian distributions N (µ1, I4) and N (µ0, I4). We sample µ1 ∈ R4 and µ0 ∈ R4 respectively according to Gaussian distributions N (e, I4) and N (−e, I4) where e = [1, 1, 1, 1]T is the all ones vector.
Subsequently, we generate the potential outcome functions f_0 and f_1 with a Radial Basis Function (RBF) kernel K(·, ·) as described next. Let γ_0, γ_1 ∈ R^4 be two vectors sampled respectively from N(7e, I_4) and N(9e, I_4), and let λ ∈ N be sampled uniformly from {10, 11, . . . , 99, 100}. For j ∈ {0, 1}:
1. We sample mj ∈ N according to Pois(λ) (e.g., the Poisson distribution with parameter λ),
2. For every i ∈ {1, . . . , m_j}, we sample x_i^j according to N(γ_j, I_4), and
3. The potential outcome functions f_j, j = 0, 1, are constructed as f_j(·) = Σ_{i=1}^{m_j} K(x_i^j, ·).
Given the potential outcome functions fj , j ∈ {0, 1}, the corresponding potential outcomes Y0 and Y1 are generated by:
Y0(x) = f0(x), for every x ∈ R4,
and Y1(x) = f1(x), for every x ∈ R4.
We will refer to the first constructed dataset in the above as the base dataset.
Note that in the above, all the generated potential outcome functions are in the same RKHS.
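A minimal numpy sketch of this construction (illustrative only; the kernel bandwidth is an assumption, since the text only specifies an RBF kernel):

import numpy as np

def rbf(a, b):
    # RBF kernel K(a, b) = exp(-||a - b||^2), unit bandwidth assumed.
    return np.exp(-np.sum((a - b) ** 2, axis=-1))

def sample_rkhs_outcome(gamma_mean, rng):
    # One potential-outcome function f_j(.) = sum_i K(x_i, .), with gamma_j ~ N(gamma_mean*e, I4),
    # lambda ~ U{10,...,100}, m_j ~ Pois(lambda), and kernel centres x_i ~ N(gamma_j, I4).
    gamma = rng.normal(gamma_mean, 1.0, size=4)
    lam = rng.integers(10, 101)
    m = rng.poisson(lam)
    centers = rng.normal(gamma, 1.0, size=(m, 4))
    return lambda x: rbf(centers, x).sum()

rng = np.random.default_rng(0)
f0, f1 = sample_rkhs_outcome(7.0, rng), sample_rkhs_outcome(9.0, rng)   # gamma_0 ~ N(7e, I4), gamma_1 ~ N(9e, I4)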
Heat (Physics) Consider a hot object left to cool off over time in a room with temperature T0. A person is likely to suffer a burn if he/she touches the object at time u.
The causal inference task of interest is the effect of room temperature T0 on the probability of suffering a burn. This family consists of 20 datasets; each includes 4000 observations with 2000 in each control and treatment group. The treatment in our setting is t = 1 when T0 = 5, and t = 0 when T0 = 25.
The treatment and control groups touching times are respectively sampled from two Chi-squared distributions χ2(5) and χ2(2) (intentionally in order to create artificial bias).
From the solution to Newton’s Heat Equation (Winterton, 1999) the underlying causal structure is governed by the equation
T (u) = C · exp(−ku) + T0 where T (u) is the temperature at time u and C, k > 0 are constants.
We let T0 = 25 and C = 75 for all the control groups in the datasets. Similarly, we let T0 = 5 and C = 95 for all the treatment groups in the datasets. We choose 20 values of k = {0.5, · · · , 2} uniformly spaced in [0.5, 2]. For each value of k, we generate a new dataset. The dataset corresponding to k = 0.5 is referred to as the base dataset.
Let T^0(u) and T^1(u) respectively denote the temperature at time u for the control and treatment groups. The potential outcomes Y_0(u) and Y_1(u), corresponding to the probability of suffering a burn at time u for respectively the control and treatment groups, are given by
Y_j(u) = max( (1/75) (T^j(u) − 25), 0 )    for j ∈ {0, 1}.
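A minimal numpy sketch of the Heat data-generating process for a single cooling constant k (an illustrative reconstruction of the description above):

import numpy as np

def heat_outcomes(u, k):
    # Cooling curves T^j(u) = C*exp(-k*u) + T0 and outcomes Y_j(u) = max((T^j(u) - 25)/75, 0).
    temp_control = 75.0 * np.exp(-k * u) + 25.0     # control room: T0 = 25, C = 75
    temp_treated = 95.0 * np.exp(-k * u) + 5.0      # treated room: T0 = 5,  C = 95
    y0 = np.maximum((temp_control - 25.0) / 75.0, 0.0)
    y1 = np.maximum((temp_treated - 25.0) / 75.0, 0.0)
    return y0, y1

rng = np.random.default_rng(0)
u_treated = rng.chisquare(5, 2000)                  # biased touching times, chi^2(5)
u_control = rng.chisquare(2, 2000)                  # biased touching times, chi^2(2)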
Movement (Physics) Consider a person falling through the air and encountering air resistance. Opening his/her parachute changes the air resistance and thereby controls the descent velocity. The causal inference task of interest is the effect of the air resistance (i.e., with a parachute, t = 1, or without, t = 0) on the person's velocity at different times.
This family consists of 12 datasets. Each includes 4000 observations with 2000 in each treatment and control group. Here, the covariate is the time u, and the outcome is the velocity at time u. The treatment and control groups’ times are respectively sampled from two Chi-squared distributions χ2(2) and χ2(5) (intentionally in order to create artificial bias).
The underlying causal structure is governed by an ordinary differential equation (ODE) with the following analytical solution describing the velocity of a person at time u:
v(u) = g/C + (v_0 − g/C) e^{−Cu}    (16)
where C = k/m, m and k are respectively the mass and the air resistance constant, and g = 10 is the gravitational constant of the earth. In the above, v_0 = v(0) is the initial velocity at time u = 0. We assume v_0 = 0, corresponding to a free fall without initial velocity.
For the control group, we assume m = k = C = 1, and the potential outcome is calculated as Y_0(u) = v(u) = 10 − 10e^{−u} using Equation 16. For the treatment groups, we vary (m, k) across the datasets over the values (5, 1), (5, 5), (5, 10), (5, 20), (10, 5), (10, 10), (10, 20), (20, 5), (20, 10), (20, 20), (50, 10), (50, 20). The potential outcome Y_1(u) is calculated from Equation 16. We have chosen the dataset corresponding to (m, k) = (5, 1) as the base dataset.
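A minimal numpy sketch of Equation 16 and the two potential-outcome curves (illustrative only):

import numpy as np

def velocity(u, m, k, g=10.0, v0=0.0):
    # Eq. (16): v(u) = g/C + (v0 - g/C) * exp(-C*u) with C = k/m.
    C = k / m
    return g / C + (v0 - g / C) * np.exp(-C * u)

u = np.linspace(0.0, 5.0, 6)
y0 = velocity(u, m=1, k=1)     # control group: m = k = 1
y1 = velocity(u, m=5, k=1)     # one treatment setting, (m, k) = (5, 1) (the base dataset)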
8.4.2 DETAILS OF EXPERIMENTS
In this paper, we first create various causal inference tasks from the above families of datasets. For each family of datasets (e.g. IHDP, Jobs, Twins), the base task is created from its base dataset. Similarly, we construct the other tasks from the remaining datasets in that family. In order to study the effects of transfer learning on causal inference, we define the source tasks and the target tasks as follows:
• In the first experiment in Section 5.3, we choose the base task to be the source task and the other tasks to be the target tasks.
• In the second experiment in Section 5.2, we choose the base task to be the target task and the other tasks to be the source tasks.
8.5 PROOF OF LEMMAS AND THEOREMS
We will use the following known results for causal inference; their proofs are given in (Shalit et al., 2016).
For x ∈ X and t ∈ {0, 1}, for notational simplicity, we define
LTaΦ,h(x, t) = ∫ Y lh,Φ(x, t, y)P (Y Ta t = y|x)dy.
Theorem 2 (Bounding The Counterfactual Loss). Let Φ be an invertible representation with inverse Ψ. Let p_Φ^{t=i} = p_Φ(r | t = i), i ∈ {0, 1}. Let h : R × {0, 1} → Y be a hypothesis. Assume that there exists a constant B_Φ > 0 such that, for t = 0, 1, the function g_{Φ,h}(r, t) := (1/B_Φ) · L_{Φ,h}(Ψ(r), t) belongs to G. Then we have
ϵ_CF(h, Φ) ≤ (1 − u) ϵ_F^{t=1}(h, Φ) + u ϵ_F^{t=0}(h, Φ) + B_Φ · IPM_G( p_Φ^{t=1}, p_Φ^{t=0} ).    (17)
Theorem 3 (Bounding the ϵ_PEHE). The Expected Precision in Estimating Heterogeneous Treatment Effect ϵ_PEHE satisfies
ϵ_PEHE(h, Φ) ≤ 2 ( ϵ_CF(h, Φ) + ϵ_F(h, Φ) − 2σ_Y^2 ) ≤ 2 ( ϵ_F^{t=0}(h, Φ) + ϵ_F^{t=1}(h, Φ) + B_Φ · IPM_G( p_Φ^{t=1}, p_Φ^{t=0} ) − 2σ_Y^2 ).    (18)
Next, we relate the performance on the target task, ϵ_F^{Ta,t=0}(h, Φ), to that on a source task, ϵ_F^{Sr,t=0}(h, Φ). Without loss of generality, we present the proof for the case t = 0.
We make the following assumptions throughout the sequel.
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (Shalit et al., 2016),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B_Φ^{Ta} such that the function r ↦ (1 / B_Φ^{Ta}) · ℓ_{Φ,h}^{Ta}(Ψ(r), t, y) belongs to G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Proof of Lemma 1
ϵ_F^{Ta,t=0}(Φ, h) − ϵ_F^{Sr,t=0}(Φ, h)
= ∫_X [ L_{Φ,h}^{Ta}(x, 0) P(X_0^{Ta} = x) − L_{Φ,h}^{Sr}(x, 0) P(X_0^{Sr} = x) ] dx
= ∫_X [ L_{Φ,h}^{Ta}(x, 0) P(X_0^{Ta} = x) − L_{Φ,h}^{Ta}(x, 0) P(X_0^{Sr} = x) + L_{Φ,h}^{Ta}(x, 0) P(X_0^{Sr} = x) − L_{Φ,h}^{Sr}(x, 0) P(X_0^{Sr} = x) ] dx
= ∫_X [ L_{Φ,h}^{Ta}(x, 0) P(X_0^{Ta} = x) − L_{Φ,h}^{Ta}(x, 0) P(X_0^{Sr} = x) ] dx   (=: Γ)
  + ∫_X [ ( L_{Φ,h}^{Ta}(x, 0) − L_{Φ,h}^{Sr}(x, 0) ) P(X_0^{Sr} = x) ] dx   (=: Θ)
We next upper bound Θ and Γ. To bound Θ, we use the following inequality:
L_{Φ,h}^{Ta}(x, t) − L_{Φ,h}^{Sr}(x, t) = ∫_Y ℓ_{Φ,h}(x, t, y) ( P(Y_t^{Ta} = y | x) − P(Y_t^{Sr} = y | x) ) dy
≤ sup_{f ∈ G′} | ∫_Y f(y) ( P(Y_t^{Ta} = y | x) − P(Y_t^{Sr} = y | x) ) dy | = IPM_{G′}( P(Y_t^{Ta} | x), P(Y_t^{Sr} | x) ) ≤ δ
With the above inequality:
Θ = ∫ X ( LTaΦ,h(x, 0)− LSrΦ,h(x, 0) ) P (XSr0 = x)dx
≤ ∫ X δP (XSr0 = x)dx = δ ∫ X P (XSr0 = x)dx = δ
To bound Γ, we use the change of variable formula
Γ = ∫_X [ L_{Φ,h}^{Ta}(x, 0) P(X_0^{Ta} = x) − L_{Φ,h}^{Ta}(x, 0) P(X_0^{Sr} = x) ] dx
= ∫_R [ L_{Φ,h}^{Ta}(Ψ(r), 0) P(Φ(X_0^{Ta}) = r) − L_{Φ,h}^{Ta}(Ψ(r), 0) P(Φ(X_0^{Sr}) = r) ] dr
≤ B_Φ^{Ta} · sup_{g ∈ G} | ∫_R g(r) ( P(Φ(X_0^{Ta}) = r) − P(Φ(X_0^{Sr}) = r) ) dr | = B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ).
Combining the above upper bounds for Γ and Θ, we have
ϵTa,t=0F (Φ, h)− ϵ Sr,t=0 F (Φ, h) ≤ B Ta Φ · IPMG ( P ( Φ(XTa0 ) ) , P ( Φ(XSr0 ) )) + δ.
We conclude that
ϵTa,t=0F (Φ, h) ≤ ϵ Sr,t=0 F (Φ, h) +B Ta Φ · IPMG ( P ( Φ(XTa0 ) ) , P ( Φ(XSr0 ) )) + δ.
This concludes the proof.
Proof of Lemma 2 We apply Theorem 2 to establish an upper bound for the counterfactual loss of the target task and subsequently apply Lemma 1 .
ϵ_CF^{Ta}(h, Φ) ≤ ϵ_F^{Ta,t=1}(h, Φ) + ϵ_F^{Ta,t=0}(h, Φ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ).
Therefore,
ϵ_CF^{Ta}(h, Φ) ≤ ϵ_F^{Sr,t=1}(Φ, h) + ϵ_F^{Sr,t=0}(Φ, h) + 2δ
+ B_Φ^{Ta} · IPM_G( P(Φ(X_1^{Ta})), P(Φ(X_1^{Sr})) )
+ B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) )
+ B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ).
This concludes the proof.
Proof of Theorem 1 By applying Theorem 3, we get
ϵ_PEHE^{Ta}(h, Φ) ≤ 2 ( ϵ_F^{Ta,t=0}(h, Φ) + ϵ_F^{Ta,t=1}(h, Φ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ) ).
Applying Lemma 1 to the first and second terms above gives
ϵ_PEHE^{Ta}(Φ^Sr, h^Sr) ≤ 2 ( ϵ_F^{Sr,t=0}(Φ, h) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + δ + ϵ_F^{Sr,t=1}(Φ, h) + B_Φ^{Ta} · IPM_G( P(Φ(X_1^{Ta})), P(Φ(X_1^{Sr})) ) + δ + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ) ).
Hence,
ϵ_PEHE^{Ta}(Φ^Sr, h^Sr) ≤ 2 ( ϵ_F^{Sr,t=1}(Φ^Sr, h^Sr) + ϵ_F^{Sr,t=0}(Φ^Sr, h^Sr) + B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_1^{Ta})), P(Φ^Sr(X_1^{Sr})) ) + B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_0^{Ta})), P(Φ^Sr(X_0^{Sr})) ) + B_{Φ^Sr}^{Ta} · IPM_G( P(Φ^Sr(X_0^{Ta})), P(Φ^Sr(X_1^{Ta})) ) + 2δ ).
This concludes the proof.
8.6 BASELINE: DATA BUNDLING
8.6.1 TRANSFER LEARNING SCENARIO: DIFFERENT POTENTIAL OUTCOMES
In many causal inference scenarios, we only have access to the trained model, and the corresponding data is not available. For instance, in medical applications, this could be the case due to privacy reasons. Consequently, bundling the datasets of source tasks is not feasible. In contrast, for some specific applications, the data may be available. In this case, we create another baseline referred to as data bundling.
In data bundling, we create the bundled dataset by combining the datasets of the source tasks and the dataset of the target task. Below, we compare our approach with data bundling for the IHDP and the Movement (Physics) datasets. For data bundling, we report the model's best performance, i.e., the εPEHE achieved by hyperparameter search. For our approach, we only report the performance of the model with the lowest training error. This gives an additional advantage to the data bundling baseline. The results are summarized in Figure 5. Even with this advantage, the data bundling method has poor performance. This may be due to data imbalance, lack of precision in determining similarity from the propensity score, and differences in outcome functions.
8.6.2 SAME POTENTIAL OUTCOMES, DIFFERENT PROPENSITY SCORES
We compare the effectiveness of data bundling with that of transfer learning in scenarios where only the propensity scores are changing across tasks (i.e., same potential outcome functions). Figure 6 provides a summary of our findings. In this experiment, we generate synthetic source and target datasets, each having 1000 data points. For the source dataset, we generate the treatment XSr1 and the control XSr0 populations respectively from two Gaussian distributions N ((0, 0), I2) and N ((5, 5), 2·I2). Subsequently, we build the target dataset by adding noise to the source task samples. Specifically, we add standard Gaussian noise to the ith sample of the source task in order to generate the ith sample of the target task, i.e., xTai = x Sr i + ϵi where x Ta i and x Sr i are respectively the i th samples of the target task and the source task, and ϵi ∼ N ((0, 0), I2) is the additive noise. For every sample i ∈ {1, .., 1000}, we assign the treatment labels as follows:
• If the ith sample of the source task is in the treatment group, then the corresponding ith sample of the target task is also in the treatment group.
• If the ith sample of the source task is in the control group, then the corresponding ith sample of the target task is in the treatment group with probability p = 0.1, and in the control group with p = 0.9.
Subsequently, the output is defined based on the potential outcome functions f0 and f1 as follows:
f_0(x) = 0.5 · e^{−0.005 B_0^T x},
and f_1(x) = 0.5 · e^{−0.03 B_1^T x} + 5,
where B0, B1 ∈ R2 and their components are respectively sampled from N (0, 1) and N (4, 1). Please note that these are only sampled once, and these parameters are shared between the source and the target tasks. Our experiments (see Figure 6) suggest that even when only the propensity scores are different, transfer learning has better performance than bundling the source and the target datasets together.
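A minimal numpy sketch of this data-generating process (an illustrative reconstruction; the treated/control proportions in the source task and the random seed are assumptions, as they are not specified above):

import numpy as np

rng = np.random.default_rng(0)
n = 1000
t_src = rng.integers(0, 2, n)                                   # source treatment labels (assumed balanced)
x_src = np.where(t_src[:, None] == 1,
                 rng.normal((0.0, 0.0), 1.0, (n, 2)),           # treated: N((0,0), I2)
                 rng.normal((5.0, 5.0), np.sqrt(2.0), (n, 2)))  # control: N((5,5), 2*I2)
x_tar = x_src + rng.normal(0.0, 1.0, (n, 2))                    # target sample = source sample + noise

# Target labels: treated stays treated; a control unit becomes treated with probability 0.1.
t_tar = np.where(t_src == 1, 1, (rng.random(n) < 0.1).astype(int))

B0, B1 = rng.normal(0, 1, 2), rng.normal(4, 1, 2)               # outcome parameters, shared across tasks
f0 = lambda x: 0.5 * np.exp(-0.005 * x @ B0)
f1 = lambda x: 0.5 * np.exp(-0.03 * x @ B1) + 5.0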
8.7 TASK DISTANCE
Let PNθ (T , Dte) ∈ [0, 1] be a function that measures the performance of a given model Nθ parameterized by θ ∈ Rd on the test set Dte of the causal task T .
Definition 7 (ε-approximation Network). A model Nθ is called an ε-approximation network for a task-dataset pair (T , D) if it is trained using the training data Dtr such that PNθ (T , Dte) ≥ 1− ε, for a given 0 < ε < 1.
8.7.1 COMPARISON BETWEEN UNSYMMETRIZED AND SYMMETRIZED TASK DISTANCE
We compare the unsymmetrized and symmetrized task distances on the Jobs and the Twins datasets. Figure 7 shows that the proposed symmetrized task distance has successfully captured the symmetries within causal inference tasks. Here p (on the x-axis) denotes the probability of flipping treatment assignments of the original dataset. The altered datasets with p = 1 (i.e., the flipped dataset) and p = 0 (i.e., the original dataset) are the closest tasks to the original task (as expected). The altered dataset with p = 0.5 is the furthest (as expected). Thus, the trend of the points is expected to resemble an inverted ’U’. We observe that the symmetrized task distance exhibits this trend. In contrast, the unsymmetrized task distance (in the right figures) fails to demonstrate this trend.
8.7.2 TASK DISTANCE BETWEEN COUNTERFACTUAL TASKS
In the following section, we denote the pair a = (Ta, Da) by aF = (TaF , DaF ) (respectively aCF = (TaCF , DaCF )) whenever Da is sampled from the factual (respectively counterfactual) distribution. We refer to (TaF , DaF ) and (TaCF , DaCF ) as the corresponding factual and counterfactual tasks. The following theorem proves that the order of proximity of tasks is preserved even if we go to a parallel universe where we observe the counterfactual tasks instead. In other words, a task, which is more similar to the target task when measured using factual data, remains more similar to the target task even when measured using counterfactual data. Theorem 4. Let T be the set of tasks and let aF = (TaF , DaF ), bF = (TbF , DbF ), and cF = (TcF , DcF ) be three factual tasks and aCF = (TaCF , DaCF ), bCF = (TbCF , DbCF ), and cCF = (TcCF , DcCF ) their corresponding counterfactual tasks. Suppose that there exists a class of neural networks N = {Nθ}θ∈Θ for which:
∀a, b, c ∈ T, s[a, b] ≤ s[a, c] + s[c, b] (19)
and the TAS between the factual and the counterfactual can be arbitrarily small
∀ϵ > 0,∃Nθ ∈ N , s[aF , aCF ] < ϵ (20)
Then we have the following result,
s[aF , bF ] ≤ s[aF , cF ] =⇒ s[aCF , bCF ] ≤ s[aCF , cCF ] (21)
Proof of Theorem 4 Suppose that s[aF , bF ] ≤ s[aF , cF ]. Then for every ϵ > 0 we have,
s[aCF , bCF ] ≤ s[aCF , aF ] + s[aF , bF ] + s[bF , bCF ] ≤ ϵ+ s[aF , cF ] + ϵ ≤ s[aF , aCF ] + s[aCF , cCF ] + s[cF , cCF ] + 2ϵ ≤ s[aCF , cCF ] + 4ϵ
This is true for every ϵ > 0, therefore s[aCF , bCF ] ≤ s[aCF , cCF ]. This concludes the proof. | 1. What is the focus and contribution of the paper regarding conditional average treatment effects?
2. What are the strengths of the proposed approach, particularly in combining representation learning and transfer learning algorithms?
3. What are the weaknesses of the paper, especially regarding the experimental design and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the theoretical analysis, such as the label-invariant task affinity construction and the use of the Fisher Information Matrix? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper combines ideas from representation learning for the estimation of conditional average treatment effects (CATEs) with tools for analysing and designing transfer learning algorithms. Theory that builds on previous results is presented, along with numerical experiments.
Strengths And Weaknesses
Knowing how to leverage data from different distributions is always relevant when sample sizes are limited, and given the challenges of causal inference, it is important to tap into existing ideas such as transfer learning.
The combination of ideas of task-affinity measures and representation learning for CATE is a fruitful direction.
However, it doesn't seem that much work was necessary to provide a direct combination. Also, there are so many other off-the-shelf ideas that could be exploited. For instance, when only the propensity score is changing among tasks, we could literally just bundle all the data together in one big regression. Maybe it's not the optimal thing to do, but it is surely a baseline to consider.
Clarity, Quality, Novelty And Reproducibility
I had some difficulties appreciating the label-invariant task affinity construction. In fact, it wasn't clear to me what the different tasks amount to. From Appendix 7.1, it looks like a task amounts to a different propensity score or outcome model in the experiments? As a matter of fact, I can't understand some of the datasets. What is the (binary?) treatment in "Movement (Physics)", and what are m and k? Why are the "potential outcome functions" in "Twins" changing?
I'm not sure why the permutation used in distance metric in 4.1 is particularly relevant. In any case, I have issues with (4) and (5) themselves.
It is mentioned in Definition 4 that the Fisher Information Matrix is asymptotically well-defined (under some conditions?), which I take to mean being positive definite (which is a standard result). I'm not sure whether (4) is p.d. for finite data unless the sample size is at least as large as the number of parameters. Also, given that neural nets are not identifiable (at the very least due to permutations, but in general by having the ability to achieve zero training error if large enough), I'm not sure how meaningful (5) is.
ICLR | Title
Causal Knowledge Transfer from Task Affinity
Abstract
Recent developments in deep representation models through counterfactual balancing have led to a promising framework for estimating Individual Treatment Effects (ITEs). While Randomized Control Trials are vital to the understanding of causal effects, they are sometimes infeasible, costly, or unethical to conduct. Here, we focus on transferring the causal knowledge acquired in prior experiments to new scenarios where only limited data is available. We first provide regret bounds on the counterfactual loss and ITE error of the target task indicating the transferability of causal knowledge. We also observe that the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before. The aforementioned task distance is then used to transfer causal knowledge from the closest of all the available previously learned tasks to the target scenario. Empirical studies are provided for various datasets demonstrating that the proposed symmetrized task distance is strongly related to the estimation of the counterfactual loss. Our results indicate that transferring causal knowledge reduces the amount of required data by up to 95% when compared to training from scratch.
1 INTRODUCTION
One of the most remarkable characteristics of humans is their ability to transfer causal knowledge learned in a scenario to other similar situations. It is highly desirable for neural networks to have the same ability because of their numerous potential applications. For instance, mutations of old viruses often necessitate the development of new vaccines for treatment. To study the effect of new vaccine candidates, researchers need to collect data from randomized control trials, which is time-consuming and expensive (Kaur & Gupta, 2020). If the mutated viruses can be related to old ones by a measure of similarity, then the effects of vaccine candidates can be quickly calculated based on this similarity with a small amount of data collected for the new scenario. In other words, transfer learning methods can help the research on the effects of various treatments (e.g., applications in medicines, personal training, social policy) progress much faster (Ebbehoj et al., 2022).
Recently, there has been significant progress in transfer learning, especially in computer vision and natural language processing applications (Wang & Deng, 2018; Alyafeai et al., 2020; Pan & Yang, 2010; Zhuang et al., 2021). While this is very promising, a challenge for transferring causal knowledge arises from statistical learning models’ vulnerability to non-causal correlations. For example, camels and horses often exist in images with different background colors, and a classifier may learn to use these colors to classify these objects (Arjovsky et al., 2019; Geirhos et al., 2019; Beery et al., 2018). A more critical challenge for transferring causal knowledge is that, in practice, the performance of the trained model for estimating ITEs can never be computed. This is because counterfactual data can never be collected as shown in Figure 1. This problem is known in the literature as the fundamental problem of causal inference (Rubin, 1974) and (Holland, 1986). For example, to compute the effect of vaccination on an individual at some given time, she/he must be both vaccinated and not be given the vaccine, which is obviously impossible. This contrasts with conventional supervised learning problems, where practitioners often use a separate validation set to estimate the true accuracy.
The aforementioned challenges imply that much attention must be paid to selecting the appropriate source model to transfer from in causal knowledge transfer. Additionally, the similarity of scenarios must be calculated using a distance related to variations of counterfactual loss between scenarios. This motivates our work in this paper, where we propose a task distance between causal inference scenarios. The task distance is then used for transferring causal knowledge, as shown in Figure 2. Our contributions can be summarized as follows:
1. For causal transfer learning scenarios, we establish new (to the best of our knowledge) regret bounds for the learning of counterfactual outcomes and ITEs for target tasks. These bounds prove the feasibility of transferring causal knowledge.
2. We observe a special property (symmetry) of causal inference tasks. Specifically, the absolute value of ITEs must be invariant to relabeling the treatment groups under the action of the symmetric group. Subsequently, we propose an intuitively appealing symmetrized Fisher task distance for which this property holds. While we construct the proposed task distance to satisfy this property mathematically, we also provide empirical evidence that it successfully lends itself to this symmetry in Section 5.3.
3. We provide both theoretical (e.g., Theorem 4) and empirical evidence (e.g., Figure 3) supporting the relevance of the symmetrized Fisher task distance to transferring causal knowledge. Through extensive experiments, we demonstrate that the proposed task affinity is highly correlated with the loss in estimating counterfactuals (not measurable in practice).
4. We present a representative set of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are derived from known causal relations in social sciences, physics, and mathematics.
5. We provide empirical evidence based on the above datasets that our methods can compute the ITEs for the target task with significantly fewer (up to 95% reduction) data points compared to the case where transfer learning is not performed.
2 MATHEMATICAL BACKGROUND
We first establish the notation and briefly review the required mathematical background.
2.1 CAUSAL INFERENCE
Let X ∈ X ⊂ R^d be the covariates (features), T ∈ {0, . . . , M} be the treatment, and Y ∈ Y ⊂ R be the factual (observed) outcome. For every j ∈ {0, . . . , M} we define Y_j to be the potential outcome that would have been observed if only treatment T = j had been assigned. For example, in the medical context, X is the individual information (e.g., weight, heart rate), T is the treatment assignment (e.g., t = 0 when the individual did not receive a vaccine, and t = 1 when he/she did), and Y is the outcome (e.g., mortality). A causal inference dataset is given by a set of factual observations D_F = {(x_i, t_i, y_i)}_{i=1}^N, where N is the number of samples. We present our results for M = 1 (the binary case) in the sequel; however, our approach immediately applies to any positive integer M < ∞. In the binary case, the individuals who received t = 0 (respectively t = 1) are referred to as the control group (respectively the treatment group).
Definition 1 (ITE). The Individual Treatment Effect, also referred to as the Conditional Average Treatment Effect (CATE) (Imbens & Rubin, 2015), is defined as:
∀x ∈ X , τ(x) = E[Y1 − Y0|X = x] (1)
We assume that our data generation process respects overlap (i.e., 0 < p(t = 1|x) < 1 for all x ∈ X) and conditional unconfoundedness (i.e., (Y_1, Y_0) ⊥⊥ T | X) (Robins, 1987). These assumptions are sufficient conditions for the ITE to be identifiable (Imbens, 2004). We also assume that a true underlying function f(x, t) describes the causal relationship, so that by definition τ(x) = f(x, 1) − f(x, 0). Let f̂(x, t) denote a hypothesis that estimates the true function f(x, t). The ITE function can then be estimated as τ̂(x) = f̂(x, 1) − f̂(x, 0). We let l_f̂(x, t, y) denote a loss function that quantifies the performance of f̂(·, ·); a possible example is the squared loss l_f̂(x, t, y) = (y − f̂(x, t))^2.
Definition 2 (Factual Loss). For a hypothesis f̂ and a corresponding loss function l_f̂, we define the factual loss as
ϵF (f̂) = ∫ X×{0,1}×Y lf̂ (x, t, y) p(x, t, y)dxdtdy (2)
We also define the factual loss for the treatment (t = 1) and control (t = 0) groups respectively as:
ϵt=1F (f̂) = ∫ X×Y lf̂ (x, 1, y) p(x, y|t = 1)dxdy (3)
and ϵt=0F (f̂) = ∫ X×Y lf̂ (x, 0, y) p(x, y|t = 0)dxdy (4)
Definition 3 (Counterfactual Loss). The counterfactual loss is defined as (Shalit et al., 2016)
ϵCF (f̂) = ∫ X×{0,1}×Y lf̂ (x, t, y) p(x, 1− t, y)dxdtdy (5)
Intuitively, the counterfactual loss corresponds to the expected loss value in a parallel universe where the roles of the control and treatment groups are exchanged. Definition 4. We define the Expected Precision in Estimating Heterogeneous Treatment Effect (PEHE) (Hill, 2011) as
εPEHE(f̂) = ∫ X (τ̂(x)− τ(x))2 p(x)dx. (6)
The value εPEHE is often used as the performance metric for estimation of ITEs as in (Shalit et al., 2016),(Hill, 2011), and (Johansson et al., 2016). Small factual and counterfactual losses are sufficient conditions for causal models to have good performance (i.e., low εPEHE) (Shalit et al.,
2016). Intuitively, this measures if a model has a good performance in predicting the effect both when the treatment is administered or not. Lower εPEHE also implies that the model is good for predicting the ITEs. We note that the above measures of performance are not directly accessible in causal inference scenarios, because the calculation of the ground truth ITE values requires access to counterfactual values. In this light, we may resort to selecting a hypothesis that optimizes an upper bound instead, such as the one given in the following section (see Equation 8).
2.2 TARNET AND COUNTERFACTUAL REGRESSION
TARNet (Shalit et al., 2016) has proven to be a successful framework for counterfactual balancing to estimate ITEs. It is defined as a pair of functions (Φ, h) where Φ : Rd → Rl is a representation function of the features and h : Rl × {0, 1} → R is a function learning the two potential outcomes functions in the representation space. The hypothesis learning the true causal function is: f̂(x, t) = h(Φ(x), t). We denote the loss function lf̂ by l(Φ,h). TARNet uses integral probability metric (IPM) defined as
IPM_G(p, q) := sup_{g∈G} | ∫_S g(s) (p(s) − q(s)) ds |, (7)
where the supremum is taken over a given class of functions G, so that the IPM measures the distance between two distributions. It is a consequence of Kantorovich–Rubinstein duality (Villani, 2009) that the IPM reduces to the 1-Wasserstein distance when G is the set of 1-Lipschitz functions, as is the case in our numerical experiments.
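As an aside, one tractable member of the IPM family is the kernel maximum mean discrepancy; the sketch below estimates a (biased) RBF-kernel MMD between two empirical samples. It is an illustrative stand-in only; the paper's experiments use the 1-Wasserstein distance.

import numpy as np

def rbf_mmd2(a, b, sigma=1.0):
    """Biased MMD^2 estimate between samples a and b with an RBF kernel."""
    def k(u, v):
        d2 = ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

rng = np.random.default_rng(0)
phi_control = rng.normal(0.0, 1.0, size=(200, 8))    # Phi(x) for t = 0
phi_treated = rng.normal(0.5, 1.0, size=(200, 8))    # Phi(x) for t = 1
print(rbf_mmd2(phi_control, phi_treated))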
TARNet (Shalit et al., 2016) estimates the counterfactual outcomes by minimizing:
L(Φ, h) = 1 N N∑ i=1 wi · l(Φ,h)(xi, ti, yi) + α · IPMG ( {Φ (xi)}i:ti=0 , {Φ (xi)}i:ti=1 ) (8)
where w_i = t_i/(2u) + (1 − t_i)/(2(1 − u)) and u = (1/N) ∑_{i=1}^N t_i. The parameter α is referred to as the balancing weight, since it controls the trade-off between the similarity of the representations in the latent domain and the performance of the model on the factual data.
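A minimal PyTorch sketch of a TARNet-style model and of the objective in Equation 8 is shown below. It is an illustration under our own simplifications (not the authors' released implementation): in particular, the IPM term is replaced by a simple squared distance between the latent group means as a placeholder balancing penalty.

import torch
import torch.nn as nn

class TARNet(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        # Shared representation Phi and two outcome heads h(., 0) and h(., 1).
        self.phi = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.h0 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.h1 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, t):
        r = self.phi(x)
        y0, y1 = self.h0(r).squeeze(-1), self.h1(r).squeeze(-1)
        return torch.where(t.bool(), y1, y0), r

def tarnet_loss(model, x, t, y, alpha=1.0):
    y_hat, r = model(x, t)
    u = t.mean()
    w = t / (2 * u) + (1 - t) / (2 * (1 - u))            # reweighting w_i
    factual = (w * (y - y_hat) ** 2).mean()
    # Placeholder IPM: squared distance between latent group means.
    ipm = ((r[t == 1].mean(0) - r[t == 0].mean(0)) ** 2).sum()
    return factual + alpha * ipm

x = torch.randn(256, 10)
t = torch.randint(0, 2, (256,)).float()
y = torch.randn(256)
model = TARNet(d=10)
loss = tarnet_loss(model, x, t, y)
loss.backward()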
3 TRANSFERABILITY OF CAUSAL KNOWLEDGE
In this section, we use superscripts Ta and Sr to denote quantities related to the target and source tasks, respectively. Suppose that we have a model (Φ^{Sr}, h^{Sr}) trained on a source causal inference task and apply it to a different target task. For notational simplicity, we denote P(Φ(X)|T = t) by P(Φ(X_t)) for t ∈ {0, 1}. We are interested in the performance of a well-trained source model when applied to a target task, i.e.,
ϵ_PEHE^{Ta}(Φ^{Sr}, h^{Sr}) = ∫_{x∈X} ( τ^{Ta}(x) − [ h^{Sr}(Φ^{Sr}(x), 1) − h^{Sr}(Φ^{Sr}(x), 0) ] )^2 P(X^{Ta} = x) dx
where τTa is the individual treatment effect function of the target, Φ is the representation learning function, and h is the potential outcomes hypothesis. While it is difficult to estimate, this error can have an upper bound that only involves obtainable quantities if we make reasonable assumptions about the relationship between the source and target task (defined in the Assumption 4 below). We make the following assumptions throughout this section:
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (We borrow this assumption from (Shalit et al., 2016)),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B_Φ^{Ta} such that the function r ↦ (1/B_Φ^{Ta}) · ℓ_{Φ,h}^{Ta}(Ψ(r), t, y) belongs to G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Note that the causal knowledge transferability assumption implies that the outcome distributions (causal effects) of treatment t in source and target tasks need to be similar in order for transfer learning to be beneficial.
Our main Theorem guarantees that causal knowledge can be transferred and is proved using two Lemmas that are stated below. These lemmas provide upper bounds on the factual and counterfactual losses for transferring causal knowledge and may be by themselves of independent interest. The proofs of these Lemmas and that of the Theorem are provided in the Appendix 8.5. Lemma 1. (Factual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The factual losses of any model (Φ, h) on source and target task satisfy:
∀t ∈ {0, 1}: ϵ_F^{Ta,t}(Φ, h) ≤ ϵ_F^{Sr,t}(Φ, h) + B_Φ^{Ta} · IPM_G( P(Φ(X_t^{Ta})), P(Φ(X_t^{Sr})) ) + δ
Lemma 2. (Counterfactual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The counterfactual losses of any model (Φ, h) on source and target task satisfy:
ϵ_CF^{Ta}(Φ, h) ≤ ϵ_F^{Sr,t=1}(Φ, h) + ϵ_F^{Sr,t=0}(Φ, h)
+ B_Φ^{Ta} · IPM_G( P(Φ(X_1^{Ta})), P(Φ(X_1^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ) + 2δ
The above lemmas quantify the relationship between causality and transfer learning. In particular Lemma 2 bounds the inherently non-observable counterfactual loss by tractable quantities. Theorem 1. (Transferability of Causal Knowledge) Suppose that Assumptions 1-4 hold. The performance of source model on target task, i.e. ϵTaPEHE(Φ Sr, hSr), is upper bounded by:
ϵ_PEHE^{Ta}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ_F^{Sr,t=1}(Φ^{Sr}, h^{Sr}) + ϵ_F^{Sr,t=0}(Φ^{Sr}, h^{Sr})
+ B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_1^{Ta})), P(Φ^{Sr}(X_1^{Sr})) )
+ B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_0^{Sr})) )
+ B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_1^{Ta})) ) + 2δ )
Theorem 1 implies that good performance on the target task is guaranteed if (1) the source model has a small factual loss (the first and second terms in the upper bound) and (2) the source and target feature distributions for each treatment group, as well as the control- and treatment-group features, are similar in the latent domain (the remaining three terms in the upper bound). This upper bound provides a sufficient condition for transfer learning in causal inference scenarios, indicating the transferability of causal knowledge. Note that these regret bounds apply to any transfer learning framework that involves a pair of tasks.
4 SYMMETRIZED TASK AFFINITY FOR CAUSAL INFERENCE TASKS
While these regret bounds indicate the transferability of causal knowledge between any pair of causal inference tasks, they don’t provide a constructive way to choose the best source task to transfer from, when multiple source tasks exist. The order of performance of different models and that of their upper bounds are not necessarily the same. Hence, we propose a label-invariant task affinity that finds the closest source task. Moreover, this task affinity satisfies the symmetry property (see section 4.3) of causal inference tasks. Our new task affinity is built on the Fisher task distance (FTD). We first give a brief introduction to FTD, then we propose a symmetrized Fisher task distance for causal inference tasks.
4.1 TASK REPRESENTATION
The ordered pair of a causal task T and its dataset D = (X,T ) will be denoted by (T , D), where dataset D itself consists of pair of covariates and their assigned treatments.
We will mathematically formalize a sufficiently well-trained deep network representing a causal task-dataset pair (T , D) in the Appendix 8.7. From now on, we assume that all the previously trained models are sufficiently well-trained networks.
4.2 FISHER TASK DISTANCE
Here, we recall the definition of the Fisher Information matrix for a neural network, and well-defined Fisher task distance (Achille et al., 2019; Le et al., 2021b; 2022b). Definition 5 (Fisher Information Matrix). For a neural network Nθa with weights θa trained on data Da, a given test dataset Db and the negative log-likelihood loss function L(θ,D), the Fisher Information matrix is defined as:
F_{a,b} = E_{D∼D_b}[ ∇_θ L(θ_a, D) ∇_θ L(θ_a, D)^T ] = −E_{D∼D_b}[ H(L(θ_a, D)) ], (9)
where H is the Hessian matrix, i.e., H(L(θ, D)) = ∇_θ^2 L(θ, D), and the expectation is taken with respect to the data. It can be proved that the Fisher Information Matrix is asymptotically well-defined (Le et al., 2022b). In practice, we approximate the above with the empirical Fisher Information matrix:
F̂_{a,b} = (1/|D_b|) ∑_{d∈D_b} ∇_θ L(θ_a, d) ∇_θ L(θ_a, d)^T. (10)
Here, the empirical Fisher Information Matrix is positive semi-definite regardless of the number of samples, because it is a sum of positive semi-definite terms. For completeness, we next review the task affinity score (Le et al., 2021b).
Definition 6 (Task Affinity Score (TAS)). Let (T_a, D_a) and (T_b, D_b) respectively denote the source and target task-dataset pairs. Let D_a = D_a^{tr} ∪ D_a^{te} (respectively D_b = D_b^{tr} ∪ D_b^{te}), where D_a^{tr} (respectively D_b^{tr}) and D_a^{te} (respectively D_b^{te}) are the training and test sets of dataset D_a (respectively D_b), and where the training for T_a is performed using the source representation network N_{θ_a}. Consider the Fisher information matrix H(L(θ_a, D_a)) of N_{θ_a} with test data D_a^{te}. Let F_{a,a} be the diagonal matrix of the absolute values of the elements on the main diagonal of H(L(θ_a, D_a)), normalized to have unit trace. Let F_{a,b} be constructed in an analogous manner, but using the training data D_b^{tr} (instead of D_a^{te}). The TAS from the source task T_a to the target task T_b is defined as:
s[a, b] = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,b}^{1/2} ‖_F    (11)
It can be proved that 0 ≤ TAS ≤ 1, where TAS = 0 denotes extreme similarity and TAS = 1 indicates extreme dissimilarity. In Appendix 8.5, we prove, under stringent assumptions, that the ordering of the TAS between candidate source tasks and the target task is preserved when a parallel-universe experiment is performed in which the roles of the control and treatment groups are exchanged.
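The following PyTorch sketch (again an illustration under our own simplifications, not the paper's code) computes the unit-trace diagonal of the empirical Fisher Information matrix for a trained network and the resulting TAS of Equation 11 between two datasets.

import torch
import torch.nn as nn

def fisher_diagonal(model, data, loss_fn):
    """Empirical Fisher diagonal (squared per-parameter gradients), normalized to unit trace."""
    diag = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in data:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for d, p in zip(diag, model.parameters()):
            d += p.grad.detach() ** 2
    flat = torch.cat([d.flatten() for d in diag]).abs()
    return flat / flat.sum()

def task_affinity_score(f_aa, f_ab):
    """TAS = (1/sqrt(2)) * || F_aa^{1/2} - F_ab^{1/2} ||_F for diagonal Fisher matrices."""
    return torch.norm(f_aa.sqrt() - f_ab.sqrt()) / (2 ** 0.5)

# Toy example with a shared source network theta_a evaluated on two datasets.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
mse = nn.MSELoss()
data_a = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(8)]
data_b = [(torch.randn(32, 4) + 1.0, torch.randn(32, 1)) for _ in range(8)]

f_aa = fisher_diagonal(net, data_a, mse)
f_ab = fisher_diagonal(net, data_b, mse)
print(task_affinity_score(f_aa, f_ab))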
4.3 LABEL-INVARIANT TASK AFFINITY
Symmetry Property of Causal Inference Tasks. Causal inference tasks can be viewed as a collection of regression problems, one for each treatment group. Given a source task, if we swap the treatment labels (i.e., 0 to 1 and 1 to 0), the treatment effect (i.e., E[Y_1 − Y_0 | X]) is negated. Consequently, the unsymmetrized task distance (Le et al., 2022b) between the original task and the permuted task can be very large. However, the original model does not need to be retrained for transfer learning, since we only need to permute the roles of the model's output heads to predict the individual treatment effects correctly for each group. In other words, the causal distance between these two permuted tasks should be zero. The label-invariant task affinity proposed below respects this property of causal inference tasks.
Our causal inference tasks are represented by TARNet-type networks. We also restrict to the case where all causal tasks under consideration have the same set of treatment labels {0, 1, . . . ,M} (e.g., M = 1 in the binary case). Let (T_a, D_a) (respectively (T_b, D_b)) with D_a = (X_a, T_a, Y_a) (respectively D_b = (X_b, T_b, Y_b)) be the source (respectively target) causal inference task, where the treatment variables take values in {0, 1, . . . ,M}. Consider the symmetric group S_{M+1} consisting of all permutations of the labels {0, 1, . . . ,M}. For σ ∈ S_{M+1}, let σ(b) denote the target task with its treatment labels permuted under the action of σ, and let
d_σ = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,σ(b)}^{1/2} ‖_F . Then
s_sym[a, b] = min_{σ∈S_{M+1}} d_σ
is the label-invariant task affinity distance between the causal tasks T_a and T_b (the pseudocode is provided in Appendix 8.3). It follows from the above definition that the ordering of task closeness under the label-invariant task affinity is robust to the architectural choice of the representation networks, since the underlying task affinity distance has been shown to enjoy this property (Le et al., 2022b).
5 EXPERIMENTAL RESULTS
We first describe the datasets we have used for our empirical studies. Subsequently, we present empirical results about quantifying the gains of transfer learning for causal inference, demonstrating the strong correlation between the proposed task distance and the counterfactual loss, and showing that the proposed task distance identifies the symmetries within causal inference tasks.
5.1 CAUSAL INFERENCE DATASETS
We present a representative family of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are motivated by known causal structures in diverse areas such as social sciences, physics, health, and mathematics. Table 1 provides a brief description of the datasets used in our studies. A more detailed description is provided in Appendix 8.4.1. For each dataset, a number of corresponding causal inference tasks exist, which can be used to study transfer learning scenarios. Please note that we can only access the counterfactual data of the synthetic/semi-synthetic datasets (i.e., IHDP, RKHS, Movement, Heat). We are not in possession of the counterfactual data of the real-world datasets (i.e., Twins, Jobs).
5.2 COMPARISON OF PERFORMANCE WITH/WITHOUT TRANSFER LEARNING
Here we briefly discuss our experiments quantifying the impact of transferring causal knowledge on the amount of required training data. In this experiment, we use the Heat (Physics), Movement (Physics), IHDP, and RKHS datasets, for which the counterfactual outcomes are available. We first fix a target causal inference task. For a wide range of balancing weights α, we record the value of εPEHE at the end of the training process when training the model from scratch, while increasing the size of the training dataset. In this process, the training datasets are slowly expanded such that smaller training sets are subsets of larger ones. We then report the minimum εPEHE achieved for each dataset size. For the target task, we identify the closest source task and repeat the above process with a small amount of target task data. We then compare the performance with and without transfer learning to quantify the amount of data needed by the transfer learning models to reach the best performance attainable without transferring causal knowledge. The results are summarized in Table 2, which demonstrates that transferring causal knowledge decreases the required amount of training data in this setting by between 75% and 95%.
5.3 TASK DISTANCE AND COUNTERFACTUAL LOSS
Here, we show empirically the strong correlation between the task distance (which only uses available data) and the counterfactual loss (which cannot be measured except on synthetic datasets). Figure 3 shows, for different balancing weights α (see Equation 8), the correlation between task distance and counterfactual error on the IHDP, RKHS, Movement (Physics), and Heat (Physics) datasets, for which counterfactuals are known. It is intuitively appealing and empirically observed that the task distance and the counterfactual loss are strongly correlated: the model of a source task has a smaller counterfactual loss on the target data when the target task is closer in terms of the proposed task distance. Note that the points in Figure 3 for different values of α (i.e., the balancing weight) are extremely close. This shows that the proposed task affinity not only indicates the counterfactual loss, but is also robust to changes in the hyper-parameters. This is a highly desirable property, especially in causal inference scenarios where no validation data is available to cross-validate the hyper-parameters. Our numerical results for the Jobs and Twins datasets verify that the proposed task distance captures the symmetries within causal inference problems. We flip treatment labels (0 and 1) with probability p (without any changes to the features and the outcomes) independently for each control and treatment data point. In Figure 4, we depict the trend of the symmetrized task distance between the original and the altered dataset as p varies over [0, 1]. The symmetry of the task distance is evident (with some deviation due to the limited training data used for calculating the task distance). The altered dataset with p = 1 is the closest to the original dataset (as it should be), since we have completely flipped the treatment assignments. The altered dataset with p = 0.5 is the furthest (as it should be), since we have randomly shuffled the control and the treatment groups. For all datasets, it can also be observed that the task distance trends are robust to variations in the balancing weight.
6 RELATED WORK
In the setting of transfer learning (Pan & Yang, 2010; Zhuang et al., 2021), prior learned models are used to increase the learning efficiency and decrease the required data. For instance, the parameters from a trained model may be used as initialization values for the target task. Many approaches in transfer learning (Thrun & Pratt, 1998; Blum & Mitchell, 1998; Silver & Bennett, 2008; Razavian et al., 2014; Finn et al., 2016; Fernando et al., 2017; Rusu et al., 2016) have been proposed, analyzed and applied in various machine learning applications. Transfer learning techniques inherently assume that prior knowledge in the selected source model helps learn a target task. In other words, these methods often do not consider the selection of the base task to perform knowledge transfer. Consequently, in some rare cases, transfer learning may even degrade the performance of
the model Standley et al. (2020). In order to avoid potential performance loss during knowledge transfer to a target task, task affinity (or task similarity) is considered as a selection method that identifies a group of closest base candidates from the set of the prior learned tasks. Task affinity has been recently investigated and applied to various domains, such as transfer learning (Zamir et al., 2018; Dwivedi & Roig, 2019; Achille et al., 2019; Wang et al., 2019), neural architecture search (Le et al., 2021a; 2022a; Le et al., 2021), few-shot learning (Pal & Balasubramanian, 2019; Le et al., 2022b), multi-task learning (Standley et al., 2020), and continual learning (Kirkpatrick et al., 2017; Chen et al., 2018). The related prior learned tasks are identified with similarity measures and then employed for knowledge transfer. Task affinity is inherently a non-commutative measure as it may be straightforward to transfer the knowledge from a more comprehensive task to a simpler task than the other way around (Le et al., 2021b).
While transfer learning and task affinity have been investigated in numerous application areas, their applications to causal inference have not been fully developed. Neyman-Rubin Causal Model and Pearl’s Do-calculus are two popular frameworks for causal studies based on different perspectives. A central question in these frameworks is determining conditions for identifiability of causal quantities such as Average and Individual Treatment Effects. Past work considered estimators for Average Treatment Effect based on various methods such as Covariate Adjustment (a.k.a back-door adjustment) (Pearl, 2009; Rubin, 1978), weighting methods such as those utilizing propensity scores (Rosenbaum & Rubin, 1983), and Doubly Robust estimators (Funk et al., 2011). With the emergence of Machine Learning (ML) techniques, more recent approaches to causal inference include the applications of decision trees(Wager & Athey, 2015), Gaussian Processes (Alaa & van der Schaar, 2017) and Generative Modeling (Yoon et al., 2018) to ITE estimation. In particular, deep neural networks have successfully learned ITEs and estimated counterfactual outcomes by data balancing in the latent domain (Johansson et al., 2016; Shalit et al., 2016). It is important to note that the transportability of causal graphs is another closely related field that has been well-studied in the causality literature (Bareinboim & Pearl, 2021). It studies transferring knowledge of causal relationships in Pearl’s do-calculus framework. In contrast, in this paper we are interested in transferring knowledge of Individual Treatment Effects from a source task to a target task in the Neyman-Rubin framework using representation learning.
7 CONCLUSION
In this paper, we provided theoretical analysis proving the transferability of causal knowledge and proposed a method for causal transfer learning based on a task affinity framework. To this end, we constructed a new task distance suitable for measuring the similarity of causal inference tasks. Given a new causal inference task, we transferred the causal knowledge from the closest available trained task. Extensive Simulations on a representative family of datasets provide empirical evidence demonstrating the gains of our method and the efficacy of the proposed symmetrized task distance. Reductions as much as 95% in the amount of required training data for new scenarios were observed.
8 APPENDIX
Here, we provide a simple example to help understand causal inference dataset, the pseudocode, the datasets description, the theorems, the proofs for those theorems, and other supplementary materials.
8.1 REPRODUCIBILITY STATEMENT
In the supplementary material, we have included our codes that implement TARNet and the proposed task distance.
8.2 CAUSAL INFERENCE: AN EXAMPLE
Let X ∈ X be the features (e.g., age, sex, etc.), let the treatment variable T ∈ {A, B} indicate whether the subject received vaccine A or B, and let Y ∈ Y denote the mortality outcome. The main challenge of causal inference arises from the absence of counterfactual observations: we do not observe the outcomes of individuals under treatment A if they received treatment B (and vice versa). Moreover, the subjects who received vaccine A may be significantly different from those who received vaccine B. This is commonly called selection bias (e.g., elderly people may be more likely to receive vaccine A than young people). Thus, estimating the counterfactual effects is challenging due to this imbalance between the treatment groups. Let f̂(x, t) be a hypothesis modeling the outcome for an individual x who received treatment t. The factual loss is defined as
ϵF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, t, y)dxdtdy (12)
By Bayes rule, we can write the factual loss as
ϵF (f̂) = ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)p(t = A)dxdy
+ ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)p(t = B)dxdy
= p(t = A) ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)dxdy
+ (1− p(t = A)) ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)dxdy
= p(t = A)ϵt=AF (f̂) + (1− p(t = A)) ϵt=BF (f̂)
Where we define the factual loss for the group who received vaccine A to be
ϵt=AF (f̂) = ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)dxdy (13)
Respectively, the factual loss for the group who received vaccine B is
ϵt=BF (f̂) = ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)dxdy (14)
Let us now consider a parallel universe where the treatment assignments are flipped (those who received vaccine A receive vaccine B and vice versa). The performance of our hypothesis f̂ in this parallel universe is the counterfactual loss, defined as:
ϵCF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, 1− t, y)dxdtdy (15)
8.3 PSEUDOCODE FOR SYMMETRIZED TASK DISTANCE
The pseudocode for our proposed task affinity is given in Algorithm 1.
Algorithm 1: Label-Invariant Task Affinity Score for Causal Inference
Data: Source tasks S = {(X_1, T_1, Y_1), . . . , (X_m, T_m, Y_m)}; target task (X_t, T_t, Y_t)
Input: TARNet models N_{θ_1}, N_{θ_2}, . . . , N_{θ_m}
Output: TARNet model for the target task t

Function TAS(X_a, T_a, X_b, T_b, N_{θ_a}):
  Compute F_{a,a} using N_{θ_a} with (X_a, T_a)
  Compute F_{a,b} using N_{θ_a} with (X_b, T_b)
  return s[a, b] = (1/√2) ‖ F_{a,a}^{1/2} − F_{a,b}^{1/2} ‖_F

Function Main:
  // Find the closest task in S
  for i = 1, 2, . . . , m do
    Train N_{θ_i} for source task i using (X_i, T_i, Y_i)
    Compute the distance from source task i to the target task t: s_i^+ = TAS(X_i, T_i, X_t, T_t, N_{θ_i})
    Compute the distance from source task i to the target task t′ whose treatments are the inverted treatments of t: s_i^− = TAS(X_i, T_i, X_t, 1 − T_t, N_{θ_i})
    Symmetrized distance: s_i^{sym} = min(s_i^+, s_i^−)
  Closest task: i* = argmin_i s_i^{sym}
  // Causal knowledge transfer
  Fine-tune N_{θ_{i*}} with the target task's data (X_t, T_t, Y_t)
  return N_{θ_{i*}}
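As a complement to the pseudocode, a small runnable Python sketch of the symmetrized selection loop is given below. The distance function is passed in as an argument (for instance, the Fisher-based TAS from the earlier sketch); the toy distance used in the usage example is only a placeholder, not the method itself.

from itertools import permutations

def symmetrized_distance(distance_fn, source, target, labels=(0, 1)):
    """Minimum task distance over permutations of the target's treatment labels.

    distance_fn(source, target) is any plain (unsymmetrized) task distance.
    source and target are (covariates, treatment-label list) pairs.
    """
    x_t, t_t = target
    return min(distance_fn(source, (x_t, [perm[t] for t in t_t]))
               for perm in permutations(labels))

def pick_closest_source(distance_fn, sources, target):
    """Return the index of the source task with the smallest symmetrized distance."""
    dists = [symmetrized_distance(distance_fn, s, target) for s in sources]
    return min(range(len(dists)), key=dists.__getitem__)

# Toy usage with a stand-in distance based only on treatment proportions.
def toy_distance(source, target):
    (_, t_s), (_, t_t) = source, target
    return abs(sum(t_s) / len(t_s) - sum(t_t) / len(t_t))

sources = [([None] * 4, [0, 0, 1, 1]), ([None] * 4, [1, 1, 1, 0])]
target = ([None] * 4, [0, 1, 1, 1])
print(pick_closest_source(toy_distance, sources, target))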
8.4 DATASETS AND EXPERIMENTS DESCRIPTIONS
8.4.1 DATASETS
IHDP The IHDP dataset was first introduced by Hill (2011) based on real covariates available from the Infant Health and Development Program (IHDP), studying the effect of development programs on children. The features in this dataset come from a Randomized Control Trial, and the potential outcomes were simulated using Setting ”B” in (Hill, 2011), hence the word semi-synthetic. The dataset consists of 747 individuals (139 in the treatment group and 608 in the control group), each with 25 features. Hill generated the potential outcomes with Y0 ∼ N (exp(βT ·(X+W )), 1), where W has the same dimension as X with all values = 0.5 and Y1 ∼ N (βT (X+W )−ω, 1) with ω = 4. β is 25-element vector of regression coefficients randomly sampled from a categorical distribution with support (0, 0.1, 0.2, 0.3, 0.4) and respective probabilities µ = (0.6, 0.1, 0.1, 0.1, 0.1). We refer to the dataset generated according to these parameters as the base dataset.
We retain the base dataset and introduce 9 new settings according to Table 3 by varying µ and ω. We also generate 10 new datasets for each setting, each consisting of 747 individuals (139 in the treatment group and 608 in the control group) by running the same process but with different random samples of the aforementioned Gaussian distribution.
Jobs The original Jobs dataset (LaLonde, 1986) has 619 observations. The causal inference task is to learn the effect of participation/lack of participation in a specific professional training program (corresponding to receiving a treatment t = 1) at a time on the success in landing a job in the following three years. We generate a family of related datasets by randomly reverting the treatment assignments of the original dataset with various probabilities p ∈ [0, 1]. Specifically, to generate a dataset, we first choose a probability value p ∈ [0, 1], and then alter individuals (original) treatment assignment (i.e., 0 ↔ 1) with probability p. We choose values
p ∈ {0 = 0/9, 1/9, 2/9, 3/9, 4/9, 5/9, · · · , 9/9 = 1}. Clearly, p = 0 corresponds to the original dataset, and p = 1 corresponds to all reverted treatment assignments. We choose the original Jobs dataset (LaLonde, 1986) as the base dataset for our experiments, as discussed in Section 8.4.2.
Twins The Twins dataset was first introduced by Louizos et al. (2017) based on the collected data about twins’ births in the United States from 1989 to 1991. It is assumed that twins share significant parts of their features. We consider whether one of the twins was born heavier than the other as the treatment assignment and if he/she died in infancy (mortality) as the outcome. We divide the twins into two groups: In the treatment (respectively control) group, we consider the outcome for the heavier (respectively lighter) twin as factual. In both groups, the outcome for the remaining twin is assumed to be counterfactual.
We first construct a base dataset by selecting a set of 2000 pairs of twins from the original dataset (Louizos et al., 2017). Then, each element is assigned to the treatment group according to a Bernoulli experiment with the probability of success q = 0.75.
Next, the base dataset is used to generate more datasets. In a manner analogous to that of the Jobs dataset, we generate a family of related datasets by randomly reverting the treatment assignments of the base dataset (0 ↔ 1) with probabilities p ∈ {0, 0.1, 0.2, . . . , 1}. For instance, to generate dataset i = 1, 2, · · · , 11, we set p_i = (i − 1)/10 and revert each individual treatment assignment in the base dataset according to a Bernoulli experiment with probability of success p_i. Clearly, p = 0 corresponds to the original dataset, while p = 1 corresponds to all treatment assignments being reverted.
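The treatment-flipping procedure used to generate these related datasets can be written as the following short sketch (an illustration of the described process, not the exact generation script).

import numpy as np

def flip_treatments(t, p, seed=0):
    """Independently flip each treatment assignment (0 <-> 1) with probability p."""
    rng = np.random.default_rng(seed)
    flips = rng.random(len(t)) < p
    return np.where(flips, 1 - t, t)

t_original = np.random.default_rng(1).binomial(1, 0.75, size=2000)  # q = 0.75 as in Twins
for p in np.linspace(0.0, 1.0, 11):
    t_altered = flip_treatments(t_original, p)
    print(p, (t_altered != t_original).mean())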
RKHS We generate 100 Reproducing Kernel Hilbert Space (RKHS) datasets, each having 2000 data points. For each dataset, we start by generating the treatment and the control populations X1, X0 ∈ R4 respectively from Gaussian distributions N (µ1, I4) and N (µ0, I4). We sample µ1 ∈ R4 and µ0 ∈ R4 respectively according to Gaussian distributions N (e, I4) and N (−e, I4) where e = [1, 1, 1, 1]T is the all ones vector.
Subsequently, we generate the potential outcome functions f_0 and f_1 with a Radial Basis Function (RBF) kernel K(·, ·) as described next. Let γ_0, γ_1 ∈ R^4 be two vectors sampled respectively from N(7e, I_4) and N(9e, I_4), and let λ ∈ N be sampled uniformly from {10, 11, . . . , 100}. For j ∈ {0, 1}:
1. We sample mj ∈ N according to Pois(λ) (e.g., the Poisson distribution with parameter λ),
2. For every i ∈ {1, . . . , m_j}, we sample x_j^i according to N(γ_j, I_4), and
3. The potential outcome functions f_j, j = 0, 1, are constructed as f_j(·) = ∑_{i=1}^{m_j} K(x_j^i, ·).
Given the potential outcome functions fj , j ∈ {0, 1}, the corresponding potential outcomes Y0 and Y1 are generated by:
Y0(x) = f0(x), for every x ∈ R4,
and Y1(x) = f1(x), for every x ∈ R4.
We will refer to the first constructed dataset in the above as the base dataset.
Note that in the above, all the generated potential outcome functions are in the same RKHS.
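The construction above can be summarized by the numpy sketch below (a simplification consistent with the description; the split of the 2000 points between groups is our own assumption, and the exact generation script may differ).

import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def make_rkhs_dataset(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    e = np.ones(4)
    mu1, mu0 = rng.normal(e, 1.0), rng.normal(-e, 1.0)
    x1 = rng.normal(mu1, 1.0, size=(n // 2, 4))            # treated covariates
    x0 = rng.normal(mu0, 1.0, size=(n // 2, 4))            # control covariates
    gamma1, gamma0 = rng.normal(9 * e, 1.0), rng.normal(7 * e, 1.0)
    lam = rng.integers(10, 101)
    centers = {j: rng.normal(g, 1.0, size=(rng.poisson(lam), 4))
               for j, g in ((0, gamma0), (1, gamma1))}
    def f(j, x):                                            # f_j(.) = sum_i K(x_j^i, .)
        return sum(rbf(c, x) for c in centers[j])
    x = np.vstack([x0, x1])
    t = np.concatenate([np.zeros(n // 2), np.ones(n // 2)]).astype(int)
    y0, y1 = f(0, x), f(1, x)
    y = np.where(t == 1, y1, y0)
    return x, t, y, y0, y1

x, t, y, y0, y1 = make_rkhs_dataset()
print(x.shape, y.shape)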
Heat (Physics) Consider a hot object left to cool off over time in a room with temperature T0. A person is likely to suffer a burn if he/she touches the object at time u.
The causal inference task of interest is the effect of room temperature T0 on the probability of suffering a burn. This family consists of 20 datasets; each includes 4000 observations with 2000 in each control and treatment group. The treatment in our setting is t = 1 when T0 = 5, and t = 0 when T0 = 25.
The treatment and control groups touching times are respectively sampled from two Chi-squared distributions χ2(5) and χ2(2) (intentionally in order to create artificial bias).
From the solution to Newton’s Heat Equation (Winterton, 1999) the underlying causal structure is governed by the equation
T (u) = C · exp(−ku) + T0 where T (u) is the temperature at time u and C, k > 0 are constants.
We let T0 = 25 and C = 75 for all the control groups in the datasets. Similarly, we let T0 = 5 and C = 95 for all the treatment groups in the datasets. We choose 20 values of k = {0.5, · · · , 2} uniformly spaced in [0.5, 2]. For each value of k, we generate a new dataset. The dataset corresponding to k = 0.5 is referred to as the base dataset.
Let T^0(u) and T^1(u) respectively denote the temperature at time u for the control and treatment groups. The potential outcomes Y_0(u) and Y_1(u), corresponding to the probability of suffering a burn at time u for the control and treatment groups respectively, are given by
Y_j(u) = max( (1/75)(T^j(u) − 25), 0 ) for j ∈ {0, 1}.
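A numpy sketch of this generation process, using the constants stated above, is given below (illustrative only).

import numpy as np

def make_heat_dataset(k, n_per_group=2000, seed=0):
    rng = np.random.default_rng(seed)
    u_treated = rng.chisquare(5, n_per_group)       # touching times, t = 1 (T0 = 5)
    u_control = rng.chisquare(2, n_per_group)       # touching times, t = 0 (T0 = 25)

    def temperature(u, c, t0):
        return c * np.exp(-k * u) + t0               # T(u) = C * exp(-k u) + T0

    def burn_probability(temp):
        return np.maximum((temp - 25.0) / 75.0, 0.0)

    u = np.concatenate([u_control, u_treated])
    t = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)]).astype(int)
    y0 = burn_probability(temperature(u, c=75.0, t0=25.0))   # control outcome
    y1 = burn_probability(temperature(u, c=95.0, t0=5.0))    # treated outcome
    y = np.where(t == 1, y1, y0)
    return u, t, y, y0, y1

datasets = [make_heat_dataset(k) for k in np.linspace(0.5, 2.0, 20)]  # k = 0.5 is the base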
Movement (Physics) Consider a person falling through the air and encountering air resistance. Opening a parachute changes the air resistance and controls the descent velocity. The causal inference task of interest is the effect of the air resistance (with a parachute, t = 1, or without one, t = 0) on the person's velocity at different times.
This family consists of 12 datasets. Each includes 4000 observations with 2000 in each treatment and control group. Here, the covariate is the time u, and the outcome is the velocity at time u. The treatment and control groups’ times are respectively sampled from two Chi-squared distributions χ2(2) and χ2(5) (intentionally in order to create artificial bias).
The underlying causal structure is governed by an ordinary differential equation (ODE) with the following analytical solution describing the velocity of a person at time u:
v(u) = g/C + (v_0 − g/C) e^{−Cu}    (16)
where C = k/m, m and k are respectively the mass and the air resistance constant, and g = 10 is the gravitational acceleration. In the above, v_0 = v(0) is the initial velocity at time u = 0. We assume v_0 = 0, corresponding to a free fall without initial velocity.
For the control group, we assume m = k = C = 1, and the potential outcome is calculated from Equation 16 as Y_0(u) = v(u) = 10(1 − e^{−u}). For the treatment groups, we vary (m, k) across datasets over (5, 1), (5, 5), (5, 10), (5, 20), (10, 5), (10, 10), (10, 20), (20, 5), (20, 10), (20, 20), (50, 10), (50, 20). The potential outcome Y_1(u) is likewise calculated from Equation 16. We have chosen the dataset corresponding to (m, k) = (5, 1) as the base dataset.
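The corresponding sketch for the Movement family is shown below (illustrative; the constants match the description above).

import numpy as np

def velocity_at(u, m, k, g=10.0, v0=0.0):
    """Velocity at time u from Equation 16, with C = k / m."""
    c = k / m
    return g / c + (v0 - g / c) * np.exp(-c * u)

def make_movement_dataset(m, k, n_per_group=2000, seed=0):
    rng = np.random.default_rng(seed)
    u_treated = rng.chisquare(2, n_per_group)        # times for the treated group
    u_control = rng.chisquare(5, n_per_group)        # times for the control group
    u = np.concatenate([u_control, u_treated])
    t = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)]).astype(int)
    y0 = velocity_at(u, m=1.0, k=1.0)                # control: m = k = C = 1
    y1 = velocity_at(u, m=m, k=k)                    # treated: varies per dataset
    y = np.where(t == 1, y1, y0)
    return u, t, y, y0, y1

configs = [(5, 1), (5, 5), (5, 10), (5, 20), (10, 5), (10, 10),
           (10, 20), (20, 5), (20, 10), (20, 20), (50, 10), (50, 20)]
datasets = [make_movement_dataset(m, k) for (m, k) in configs]   # (5, 1) is the base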
8.4.2 DETAILS OF EXPERIMENTS
In this paper, we first create various causal inference tasks from the above families of datasets. For each family of datasets (e.g. IHDP, Jobs, Twins), the base task is created from its base dataset. Similarly, we construct the other tasks from the remaining datasets in that family. In order to study the effects of transfer learning on causal inference, we define the source tasks and the target tasks as follows:
• In the first experiment in Section 5.3, we choose the base task to be the source task and the other tasks to be the target tasks.
• In the second experiment in Section 5.2, we choose the base task to be the target task and the other tasks to be the source tasks.
8.5 PROOF OF LEMMAS AND THEOREMS
We will use the following known results (Shalit et al., 2016) for causal inference. The proofs for these results are given in (Shalit et al., 2016) .
For x ∈ X and t ∈ {0, 1}, for notational simplicity we define
L_{Φ,h}^{Ta}(x, t) = ∫_Y l_{Φ,h}(x, t, y) P(Y_t^{Ta} = y | x) dy.
Theorem 2 (Bounding the Counterfactual Loss). Let Φ be an invertible representation with inverse Ψ. Let p_Φ^{t=i} = p_Φ(r | t = i), i ∈ {0, 1}, and let h : R × {0, 1} → Y be a hypothesis. Assume that there exists a constant B_Φ > 0 such that, for t = 0, 1, the function g_{Φ,h}(r, t) := (1/B_Φ) · L_{Φ,h}(Ψ(r), t) belongs to G. Then
ϵ_CF(h, Φ) ≤ (1 − u) ϵ_F^{t=1}(h, Φ) + u ϵ_F^{t=0}(h, Φ) + B_Φ · IPM_G( p_Φ^{t=1}, p_Φ^{t=0} ). (17)
Theorem 3 (Bounding the ϵPEHE). The Expected Precision in Estimating Heterogeneous Treatment Effect ϵPEHE satisfies
ϵ_PEHE(h, Φ) ≤ 2 ( ϵ_CF(h, Φ) + ϵ_F(h, Φ) − 2σ_Y^2 ) ≤ 2 ( ϵ_F^{t=0}(h, Φ) + ϵ_F^{t=1}(h, Φ) + B_Φ · IPM_G( p_Φ^{t=1}, p_Φ^{t=0} ) − 2σ_Y^2 ). (18)
Next, we relate the performance on the target task, ϵ_F^{Ta,t=0}(h, Φ), to that on a source task, ϵ_F^{Sr,t=0}(h, Φ). Without loss of generality, we present the proof for the case t = 0.
We make the following assumptions throughout the sequel.
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (Shalit et al., 2016),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B_Φ^{Ta} such that the function r ↦ (1/B_Φ^{Ta}) · ℓ_{Φ,h}^{Ta}(Ψ(r), t, y) belongs to G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Proof of Lemma 1
ϵ_F^{Ta,t=0}(Φ, h) − ϵ_F^{Sr,t=0}(Φ, h)
= ∫_X [ L_{Φ,h}^{Ta}(x, 0) P(X_0^{Ta} = x) − L_{Φ,h}^{Sr}(x, 0) P(X_0^{Sr} = x) ] dx
= ∫_X L_{Φ,h}^{Ta}(x, 0) [ P(X_0^{Ta} = x) − P(X_0^{Sr} = x) ] dx + ∫_X [ L_{Φ,h}^{Ta}(x, 0) − L_{Φ,h}^{Sr}(x, 0) ] P(X_0^{Sr} = x) dx
= Γ + Θ,
where Γ denotes the first integral above and Θ the second.
We next upper bound Θ and Γ. To bound Θ, we use the following inequality:
L_{Φ,h}^{Ta}(x, t) − L_{Φ,h}^{Sr}(x, t) = ∫_Y ℓ_{Φ,h}(x, t, y) ( P(Y_t^{Ta} = y|x) − P(Y_t^{Sr} = y|x) ) dy
≤ sup_{f∈G′} | ∫_Y f(y) ( P(Y_t^{Ta} = y|x) − P(Y_t^{Sr} = y|x) ) dy | = IPM_{G′}( P(Y_t^{Ta} | x), P(Y_t^{Sr} | x) ) ≤ δ
With the above inequality:
Θ = ∫_X ( L_{Φ,h}^{Ta}(x, 0) − L_{Φ,h}^{Sr}(x, 0) ) P(X_0^{Sr} = x) dx ≤ ∫_X δ P(X_0^{Sr} = x) dx = δ.
To bound Γ, we use the change of variable formula
Γ = ∫_X L_{Φ,h}^{Ta}(x, 0) ( P(X_0^{Ta} = x) − P(X_0^{Sr} = x) ) dx
= ∫_R L_{Φ,h}^{Ta}(Ψ(r), 0) ( P(Φ(X_0^{Ta}) = r) − P(Φ(X_0^{Sr}) = r) ) dr
≤ B_Φ^{Ta} · sup_{g∈G} | ∫_R g(r) ( P(Φ(X_0^{Ta}) = r) − P(Φ(X_0^{Sr}) = r) ) dr | = B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ).
Combining the above upper bounds for Γ and Θ, we have
ϵ_F^{Ta,t=0}(Φ, h) − ϵ_F^{Sr,t=0}(Φ, h) ≤ B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + δ.
We conclude that
ϵ_F^{Ta,t=0}(Φ, h) ≤ ϵ_F^{Sr,t=0}(Φ, h) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + δ.
This concludes the proof.
Proof of Lemma 2. We apply Theorem 2 to establish an upper bound on the counterfactual loss of the target task and subsequently apply Lemma 1. Since the losses are non-negative and 0 ≤ u ≤ 1,
ϵ_CF^{Ta}(h, Φ) ≤ ϵ_F^{Ta,t=1}(h, Φ) + ϵ_F^{Ta,t=0}(h, Φ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ).
Therefore, applying Lemma 1 to each of the two factual terms,
ϵ_CF^{Ta}(h, Φ) ≤ ϵ_F^{Sr,t=1}(Φ, h) + ϵ_F^{Sr,t=0}(Φ, h) + 2δ + B_Φ^{Ta} · IPM_G( P(Φ(X_1^{Ta})), P(Φ(X_1^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_0^{Sr})) ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ).
This concludes the proof.
Proof of Theorem 1. By applying Theorem 3 (and dropping the −2σ_Y^2 term), we get
ϵ_PEHE^{Ta}(h, Φ) ≤ 2 ( ϵ_F^{Ta,t=0}(h, Φ) + ϵ_F^{Ta,t=1}(h, Φ) + B_Φ^{Ta} · IPM_G( P(Φ(X_0^{Ta})), P(Φ(X_1^{Ta})) ) ).
Applying Lemma 1 to the first and second terms above,
ϵ_PEHE^{Ta}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ_F^{Sr,t=0}(Φ^{Sr}, h^{Sr}) + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_0^{Sr})) ) + δ + ϵ_F^{Sr,t=1}(Φ^{Sr}, h^{Sr}) + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_1^{Ta})), P(Φ^{Sr}(X_1^{Sr})) ) + δ + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_1^{Ta})) ) ).
Hence,
ϵ_PEHE^{Ta}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ_F^{Sr,t=1}(Φ^{Sr}, h^{Sr}) + ϵ_F^{Sr,t=0}(Φ^{Sr}, h^{Sr}) + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_1^{Ta})), P(Φ^{Sr}(X_1^{Sr})) ) + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_0^{Sr})) ) + B_{Φ^{Sr}}^{Ta} · IPM_G( P(Φ^{Sr}(X_0^{Ta})), P(Φ^{Sr}(X_1^{Ta})) ) + 2δ ).
This concludes the proof.
8.6 BASELINE: DATA BUNDLING
8.6.1 TRANSFER LEARNING SCENARIO: DIFFERENT POTENTIAL OUTCOMES
In many causal inference scenarios, we only have access to the trained model, and the corresponding data is not available; for instance, in medical applications this may be the case due to privacy reasons. Consequently, bundling the datasets of source tasks is not feasible. In contrast, for some specific applications the data may be available. For this case, we create another baseline, referred to as data bundling.
In data bundling, we create the bundled dataset by combining the datasets of the source tasks and the dataset of the target task. Below, we compare our approach with data bundling on the IHDP and Movement (Physics) datasets. For data bundling, we report the model's best performance, i.e., the best εPEHE achieved by a hyperparameter search. For our approach, we only report the performance of the model with the lowest training error, which gives an additional advantage to the data bundling baseline. The results are summarized in Figure 5. Even with this advantage, the data bundling method performs poorly. This may be due to data imbalance, the lack of precision in determining similarity from propensity scores, and differences in the outcome functions.
8.6.2 SAME POTENTIAL OUTCOMES, DIFFERENT PROPENSITY SCORES
We compare the effectiveness of data bundling with that of transfer learning in scenarios where only the propensity scores change across tasks (i.e., the potential outcome functions are identical). Figure 6 provides a summary of our findings. In this experiment, we generate synthetic source and target datasets, each with 1000 data points. For the source dataset, we generate the treatment population X_1^{Sr} and the control population X_0^{Sr} respectively from two Gaussian distributions N((0, 0), I_2) and N((5, 5), 2·I_2). Subsequently, we build the target dataset by adding noise to the source task samples. Specifically, we add standard Gaussian noise to the i-th sample of the source task in order to generate the i-th sample of the target task, i.e., x_i^{Ta} = x_i^{Sr} + ϵ_i, where x_i^{Ta} and x_i^{Sr} are respectively the i-th samples of the target and source tasks, and ϵ_i ∼ N((0, 0), I_2) is the additive noise. For every sample i ∈ {1, . . . , 1000}, we assign the treatment labels as follows:
• If the ith sample of the source task is in the treatment group, then the corresponding ith sample of the target task is also in the treatment group.
• If the ith sample of the source task is in the control group, then the corresponding ith sample of the target task is in the treatment group with probability p = 0.1, and in the control group with p = 0.9.
Subsequently, the output is defined based on the potential outcome functions f0 and f1 as follows:
f_0(x) = 0.5 · exp(−0.005 · B_0^T x),
and f_1(x) = 0.5 · exp(−0.03 · B_1^T x) + 5,
where B_0, B_1 ∈ R^2, with components sampled respectively from N(0, 1) and N(4, 1). Note that these are sampled only once, and the parameters are shared between the source and the target tasks. Our experiments (see Figure 6) suggest that even when only the propensity scores differ, transfer learning performs better than bundling the source and the target datasets together.
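A compact sketch of this generation procedure follows (an illustration of the described setup; the equal split between treatment and control in the source data is our own assumption).

import numpy as np

def make_propensity_shift_pair(n=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_t, n_c = n // 2, n - n // 2
    x_src = np.vstack([rng.normal(0.0, 1.0, size=(n_t, 2)),            # treated
                       rng.normal(5.0, np.sqrt(2.0), size=(n_c, 2))])  # control
    t_src = np.concatenate([np.ones(n_t), np.zeros(n_c)]).astype(int)

    # Target covariates: source samples plus standard Gaussian noise.
    x_tgt = x_src + rng.normal(0.0, 1.0, size=x_src.shape)
    # Target treatments: keep treated, move controls to treatment with prob 0.1.
    switch = (t_src == 0) & (rng.random(n) < 0.1)
    t_tgt = np.where(switch, 1, t_src)

    b0, b1 = rng.normal(0, 1, 2), rng.normal(4, 1, 2)   # shared outcome parameters
    f0 = lambda x: 0.5 * np.exp(-0.005 * (x @ b0))
    f1 = lambda x: 0.5 * np.exp(-0.03 * (x @ b1)) + 5.0
    y_src = np.where(t_src == 1, f1(x_src), f0(x_src))
    y_tgt = np.where(t_tgt == 1, f1(x_tgt), f0(x_tgt))
    return (x_src, t_src, y_src), (x_tgt, t_tgt, y_tgt)

source, target = make_propensity_shift_pair()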
8.7 TASK DISTANCE
Let PNθ (T , Dte) ∈ [0, 1] be a function that measures the performance of a given model Nθ parameterized by θ ∈ Rd on the test set Dte of the causal task T .
Definition 7 (ε-approximation Network). A model Nθ is called an ε-approximation network for a task-dataset pair (T , D) if it is trained using the training data Dtr such that PNθ (T , Dte) ≥ 1− ε, for a given 0 < ε < 1.
8.7.1 COMPARISON BETWEEN UNSYMMETRIZED AND SYMMETRIZED TASK DISTANCE
We compare the unsymmetrized and symmetrized task distances on the Jobs and the Twins dataset. Figure 7 shows that the proposed symmetrized task distance has successfully captured the symmetries within causal inference tasks. p (on the x-axis) denotes the probability of flipping treatment assignments of the original dataset. The altered datasets with p = 1 (i.e., the flipped dataset) and p = 0 (i.e., the original dataset) are the closest task to the original task (as expected). The altered dataset with p = 0.5 is the furthest dataset (as expected). Thus, the trend of the points is expected to resemble an inverted ’U’. We observe that the symmetrized task distance exhibits this trend. In contrast, the unsymmetrized task distance (in the right figures) fails to demonstrate this trend.
8.7.2 TASK DISTANCE BETWEEN COUNTERFACTUAL TASKS
In the following section, we denote the pair a = (Ta, Da) by aF = (TaF , DaF ) (respectively aCF = (TaCF , DaCF )) whenever Da is sampled from the factual (respectively counterfactual) distribution. We refer to (TaF , DaF ) and (TaCF , DaCF ) as the corresponding factual and counterfactual tasks. The following theorem proves that the order of proximity of tasks is preserved even if we go to a parallel universe where we observe the counterfactual tasks instead. In other words, a task, which is more similar to the target task when measured using factual data, remains more similar to the target task even when measured using counterfactual data. Theorem 4. Let T be the set of tasks and let aF = (TaF , DaF ), bF = (TbF , DbF ), and cF = (TcF , DcF ) be three factual tasks and aCF = (TaCF , DaCF ), bCF = (TbCF , DbCF ), and cCF = (TcCF , DcCF ) their corresponding counterfactual tasks. Suppose that there exists a class of neural networks N = {Nθ}θ∈Θ for which:
∀a, b, c ∈ T, s[a, b] ≤ s[a, c] + s[c, b] (19)
and the TAS between the factual and the counterfactual can be arbitrarily small
∀ϵ > 0,∃Nθ ∈ N , s[aF , aCF ] < ϵ (20)
Then we have the following result,
s[aF , bF ] ≤ s[aF , cF ] =⇒ s[aCF , bCF ] ≤ s[aCF , cCF ] (21)
Proof of Theorem 4 Suppose that s[aF , bF ] ≤ s[aF , cF ]. Then for every ϵ > 0 we have,
s[aCF , bCF ] ≤ s[aCF , aF ] + s[aF , bF ] + s[bF , bCF ] ≤ ϵ+ s[aF , cF ] + ϵ ≤ s[aF , aCF ] + s[aCF , cCF ] + s[cF , cCF ] + 2ϵ ≤ s[aCF , cCF ] + 4ϵ
This is true for every ϵ > 0, therefore s[aCF , bCF ] ≤ s[aCF , cCF ]. This concludes the proof. | 1. What is the focus of the paper regarding individual treatment effect estimation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical results and empirical demonstrations?
3. Do you have any concerns regarding the task affinity measure and its connection to the symmetric group on the labels of treatments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor issues or puzzling statements in the paper that the reviewer would like to bring attention to? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to leverage transfer learning in the estimation of individual treatment effect (ITE) through representation learning, building on the TARNet approach developed by Shalit et al. (2016). The main contributions include a proposed measure of task affinity, some results on upper bounds of errors in the proposed approach, and empirical demonstrations of the reasonableness of the proposed task affinity measure and of the gain in sample efficiency.
Strengths And Weaknesses
Strengths:
The topic is interesting and important.
The paper presents both theoretical and empirical results (though they are not very tightly linked, see below.)
The gain in sample efficiency as shown in experiments is significant.
Weaknesses:
The theoretical results have quite limited originality and significance. The originality is limited, because as far as I can see, they are very simple adaptations of Shalit et al.'s results. The significance is limited because they involve the unknown parameter \delta (introduced in Assumption 4), and it is unclear how we can empirically estimate or get a handle on this parameter. (Is it supposed to be somehow related to the task affinity measure?)
The explanation of the rationale for symmetrizing the task affinity score is not sufficient. I am a little puzzled by the following remark in the abstract: "... the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before." I am not sure why this invariance justifies or motivates the symmetrization, and I fail to find a discussion of this point in the main text. In any case, I think it would be helpful to compare the symmetrized and the unsymmetrized score, at least empirically. In the current paper, I don't find much insight or evidence regarding why the original, unsymmetrized task affinity score is less attractive than the measure proposed in this paper.
The connection between the theoretical results and the empirical studies seems rather loose. The theoretical upper bounds do not seem to be featured in any way in the experiments. I thought the proposed task affinity measure might be used to test Assumption 4 or indicate the value of \delta in Assumption 4, but I don't see such experimental results. On the other hand, there is no theoretical analysis of the expected gain in sample efficiency, as shown in the empirical results.
Clarity, Quality, Novelty And Reproducibility
The originality of the work is acceptable but not great. The symmetrizing idea seems to be the most novel bit, but its justification is a little too sketchy. The clarity could be improved. In addition to the lack of explanation of the motivation or rationale for the proposed measure of task distance, the methods actually used in the paper to estimate the task distance and to estimate the ITEs in the target task could be more clearly described. As it stands now, I am not sure how easy it is to reproduce the results. There are also some minor puzzling statements. For example, the statement of conditional unconfoundedness in the middle of p. 4 seems to me to be a mistake. At the bottom of p. 4, the direct identification of "small factual and counterfactual losses" with "low \epsilon_{PEHE}" is also confusing. Moreover, on p. 6, in the definition of the label-invariant task affinity distance, should F_{a, b} be F_{a, a} instead, as in Definition 5? |
ICLR | Title
Causal Knowledge Transfer from Task Affinity
Abstract
Recent developments in deep representation models through counterfactual balancing have led to a promising framework for estimating Individual Treatment Effects (ITEs). While Randomized Control Trials are vital to the understanding of causal effects, they are sometimes infeasible, costly, or unethical to conduct. Here, we focus on transferring the causal knowledge acquired in prior experiments to new scenarios where only limited data is available. We first provide regret bounds on the counterfactual loss and ITE error of the target task indicating the transferability of causal knowledge. We also observe that the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before. The aforementioned task distance is then used to transfer causal knowledge from the closest of all the available previously learned tasks to the target scenario. Empirical studies are provided for various datasets demonstrating that the proposed symmetrized task distance is strongly related to the estimation of the counterfactual loss. Our results indicate that transferring causal knowledge reduces the amount of required data by up to 95% when compared to training from scratch.
N/A
Recent developments in deep representation models through counterfactual balancing have led to a promising framework for estimating Individual Treatment Effects (ITEs). While Randomized Control Trials are vital to the understanding of causal effects, they are sometimes infeasible, costly, or unethical to conduct. Here, we focus on transferring the causal knowledge acquired in prior experiments to new scenarios where only limited data is available. We first provide regret bounds on the counterfactual loss and ITE error of the target task indicating the transferability of causal knowledge. We also observe that the absolute values of ITEs are invariant under the action of the symmetric group on the labels of treatments. Given this invariance, we propose a symmetrized task distance for calculating the similarity of a target scenario with those encountered before. The aforementioned task distance is then used to transfer causal knowledge from the closest of all the available previously learned tasks to the target scenario. Empirical studies are provided for various datasets demonstrating that the proposed symmetrized task distance is strongly related to the estimation of the counterfactual loss. Our results indicate that transferring causal knowledge reduces the amount of required data by up to 95% when compared to training from scratch.
1 INTRODUCTION
One of the most remarkable characteristics of humans is their ability to transfer causal knowledge learned in a scenario to other similar situations. It is highly desirable for neural networks to have the same ability because of their numerous potential applications. For instance, mutations of old viruses often necessitate the development of new vaccines for treatment. To study the effect of new vaccine candidates, researchers need to collect data from randomized control trials, which is timeconsuming and expensive (Kaur & Gupta, 2020). If the mutated viruses can be related to old ones by a measure of similarity, then the effects of vaccine candidates can be quickly calculated based on this similarity with a small amount of data collected for the new scenario. In other words, transfer learning methods can help the research on the effects of various treatments (e.g., applications in medicines, personal training, social policy) progress much faster (Ebbehoj et al., 2022).
Recently, there has been significant progress in transfer learning, especially in computer vision and natural language processing applications (Wang & Deng, 2018; Alyafeai et al., 2020; Pan & Yang, 2010; Zhuang et al., 2021). While this is very promising, a challenge for transferring causal knowledge arises from statistical learning models’ vulnerability to non-causal correlations. For example, camels and horses often exist in images with different background colors, and a classifier may learn to use these colors to classify these objects (Arjovsky et al., 2019; Geirhos et al., 2019; Beery et al., 2018). A more critical challenge for transferring causal knowledge is that, in practice, the performance of the trained model for estimating ITEs can never be computed. This is because counterfactual data can never be collected as shown in Figure 1. This problem is known in the literature as the fundamental problem of causal inference (Rubin, 1974) and (Holland, 1986). For example, to compute the effect of vaccination on an individual at some given time, she/he must be both vaccinated and not be given the vaccine, which is obviously impossible. This contrasts with conventional supervised learning problems, where practitioners often use a separate validation set to estimate the true accuracy.
The aforementioned challenges imply that much attention must be paid to selecting the appropriate source model to transfer from in causal knowledge transfer. Additionally, the similarity of scenarios must be calculated using a distance related to variations of counterfactual loss between scenarios. This motivates our work in this paper, where we propose a task distance between causal inference scenarios. The task distance is then used for transferring causal knowledge, as shown in Figure 2. Our contributions can be summarized as follows:
1. For causal transfer learning scenarios, we establish new (to the best of our knowledge) regret bounds for the learning of counterfactual outcomes and ITEs for target tasks. These bounds prove the feasibility of transferring causal knowledge.
2. We observe a special property (symmetry) of causal inference tasks. Specifically, the absolute value of ITEs must be invariant to relabeling the treatment groups under the action of the symmetric group. Subsequently, we propose an intuitively appealing symmetrized Fisher task distance for which this property holds. While we construct the proposed task distance to satisfy this property mathematically, we also provide empirical evidence that it successfully lends itself to this symmetry in Section 5.3.
3. We provide both theoretical (e.g., Theorem 4) and empirical evidence (e.g., Figure 3) supporting the relevance of the symmetrized Fisher task distance to transferring causal knowledge. Through extensive experiments, we demonstrate that the proposed task affinity is highly correlated with the loss in estimating counterfactuals (not measurable in practice).
4. We present a representative set of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are derived from known causal relations in social sciences, physics, and mathematics.
5. We provide empirical evidence based on the above datasets that our methods can compute the ITEs for the target task with significantly fewer (up to 95% reduction) data points compared to the case where transfer learning is not performed.
2 MATHEMATICAL BACKGROUND
We first establish the notation and briefly review the required mathematical background.
2.1 CAUSAL INFERENCE
Let X ∈ X ⊂ R^d be the covariates (features), T ∈ {0, . . . ,M} be the treatment, and Y ∈ Y ⊂ R be the factual (observed) outcome. For every j ∈ {0, . . . ,M} we define Y_j to be the potential outcome that would have been observed if treatment T = j had been assigned. For example, in the medical context, X is the individual information (e.g., weight, heart rate), T is the treatment assignment (e.g., t = 0 when the individual did not receive a vaccine, and t = 1 when he/she did), and Y is the outcome (e.g., mortality). A causal inference dataset is a set of factual observations D_F = {(x_i, t_i, y_i)}_{i=1}^N, where N is the number of samples. We present our results for M = 1 (the binary case) in the sequel; however, our approach immediately applies to any positive integer M < ∞. In the binary case, the individuals who received t = 0 (respectively t = 1) form the control group (respectively the treatment group). Definition 1 (ITE). The Individual Treatment Effect, also referred to as the Conditional Average Treatment Effect (CATE) (Imbens & Rubin, 2015), is defined as:
∀x ∈ X , τ(x) = E[Y1 − Y0|X = x] (1)
We assume that our data generation process respects overlap (i.e., 0 < p(t = 1|x) < 1 for all x ∈ X) and conditional unconfoundedness (i.e., (Y_1, Y_0) ⊥⊥ T | X) (Robins, 1987). These assumptions are sufficient conditions for the ITE to be identifiable (Imbens, 2004). We also assume that a true underlying function f(x, t) describes the causal relationship; by definition, τ(x) = f(x, 1) − f(x, 0). Let f̂(x, t) denote a hypothesis that estimates the true function f(x, t). The ITE function can then be estimated as τ̂(x) = f̂(x, 1) − f̂(x, 0). We let l_f̂(x, t, y) denote a loss function that quantifies the performance of f̂(·, ·); a standard example is the squared loss l_f̂(x, t, y) = (y − f̂(x, t))^2.
Definition 2 (Factual Loss). For a hypothesis f̂ and a corresponding loss function lf̂ we define the factual and counterfactual losses respectively as
ϵF (f̂) = ∫ X×{0,1}×Y lf̂ (x, t, y) p(x, t, y)dxdtdy (2)
We also define the factual loss for the treatment (t = 1) and control (t = 0) groups respectively as:
ϵt=1F (f̂) = ∫ X×Y lf̂ (x, 1, y) p(x, y|t = 1)dxdy (3)
and ϵt=0F (f̂) = ∫ X×Y lf̂ (x, 0, y) p(x, y|t = 0)dxdy (4)
Definition 3 (Counterfactual Loss). The counterfactual loss is defined as (Shalit et al., 2016)
ϵCF (f̂) = ∫ X×{0,1}×Y lf̂ (x, t, y) p(x, 1− t, y)dxdtdy (5)
Intuitively, the counterfactual loss corresponds to the expected loss value in a parallel universe where the roles of the control and treatment groups are exchanged. Definition 4. We define the Expected Precision in Estimating Heterogeneous Treatment Effect (PEHE) (Hill, 2011) as
εPEHE(f̂) = ∫ X (τ̂(x)− τ(x))2 p(x)dx. (6)
The value ε_PEHE is often used as the performance metric for the estimation of ITEs, as in (Shalit et al., 2016), (Hill, 2011), and (Johansson et al., 2016). Small factual and counterfactual losses are sufficient for a causal model to have good performance (i.e., low ε_PEHE) (Shalit et al., 2016). Intuitively, small factual and counterfactual losses mean that the model predicts the outcome well both when the treatment is administered and when it is not, while a low ε_PEHE means that the model predicts the ITEs accurately. We note that these performance measures are not directly accessible in causal inference scenarios, because computing the ground-truth ITE values requires access to counterfactual outcomes. In this light, we may resort to selecting a hypothesis that optimizes an upper bound instead, such as the one given in the following section (see Equation 8).
2.2 TARNET AND COUNTERFACTUAL REGRESSION
TARNet (Shalit et al., 2016) has proven to be a successful framework for counterfactual balancing to estimate ITEs. It is defined as a pair of functions (Φ, h), where Φ : Rd → Rl is a representation function of the features and h : Rl × {0, 1} → R is a function learning the two potential outcome functions in the representation space. The hypothesis learning the true causal function is f̂(x, t) = h(Φ(x), t). We denote the loss function lf̂ by l(Φ,h). TARNet uses the integral probability metric (IPM), defined as
IPM_G(p, q) := sup_{g∈G} | ∫_S g(s)(p(s) − q(s)) ds | , (7)
where the supremum is taken over a given class of functions G, to measure the distance between distributions. It is a consequence of the Kantorovich-Rubinstein duality (Villani, 2009) that the IPM reduces to the 1-Wasserstein distance when G is the set of 1-Lipschitz functions, as is the case in our numerical experiments.
TARNet (Shalit et al., 2016) estimates the counterfactual outcomes by minimizing:
L(Φ, h) = 1 N N∑ i=1 wi · l(Φ,h)(xi, ti, yi) + α · IPMG ( {Φ (xi)}i:ti=0 , {Φ (xi)}i:ti=1 ) (8)
where w_i = t_i/(2u) + (1 − t_i)/(2(1 − u)), and u = (1/N) ∑_{i=1}^{N} t_i. The parameter α is referred to as the balancing weight since it controls the trade-off between the similarity of the representations in the latent domain, and the performance of the model on the factual data.
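For concreteness, a minimal PyTorch-style sketch of a TARNet hypothesis and the objective in Equation 8 is given below. The layer widths, the ELU activations, and the linear-MMD surrogate used in place of the 1-Wasserstein IPM term are illustrative assumptions and do not correspond to the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class TARNet(nn.Module):
    """Shared representation Phi followed by two outcome heads h(., 0) and h(., 1)."""
    def __init__(self, d_in, d_rep=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_rep), nn.ELU(),
                                 nn.Linear(d_rep, d_rep), nn.ELU())
        self.h0 = nn.Sequential(nn.Linear(d_rep, d_rep), nn.ELU(), nn.Linear(d_rep, 1))
        self.h1 = nn.Sequential(nn.Linear(d_rep, d_rep), nn.ELU(), nn.Linear(d_rep, 1))

    def forward(self, x, t):
        r = self.phi(x)
        y_hat = torch.where(t.bool().unsqueeze(-1), self.h1(r), self.h0(r))
        return y_hat.squeeze(-1), r

def linear_mmd(r0, r1):
    # Cheap stand-in for the IPM term: squared distance between group means of Phi(x).
    return (r0.mean(0) - r1.mean(0)).pow(2).sum()

def tarnet_loss(model, x, t, y, alpha=1.0):
    y_hat, r = model(x, t)
    u = t.float().mean()
    w = t / (2 * u + 1e-8) + (1 - t) / (2 * (1 - u) + 1e-8)  # weights w_i of Eq. 8
    factual = (w * (y - y_hat) ** 2).mean()                  # weighted L2 factual loss
    balance = linear_mmd(r[t == 0], r[t == 1])               # representation balancing
    return factual + alpha * balance
```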
3 TRANSFERABILITY OF CAUSAL KNOWLEDGE
In this section, we use superscripts Ta and Sr to denote quantities related to the target and source task, respectively. Suppose that we have a model (ΦSr, hSr) trained on a source causal inference task. We apply the source model to a different target task. For notational simplicity, we denote P(Φ(X)|T = t) by P(Φ(Xt)) for t ∈ {0, 1}. We are interested in the performance of a well-trained source model when applied to a target task, i.e.
ϵTaPEHE(Φ Sr, hSr) = ∫ x∈X ( τTa(x)− [hSr(ΦSr(x), 1)− hSr(ΦSr(x), 0)] )2 P (XTa = x)dx
where τTa is the individual treatment effect function of the target, Φ is the representation learning function, and h is the potential outcomes hypothesis. While it is difficult to estimate, this error can have an upper bound that only involves obtainable quantities if we make reasonable assumptions about the relationship between the source and target tasks (defined in Assumption 4 below). We make the following assumptions throughout this section:
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (We borrow this assumption from (Shalit et al., 2016)),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B^{Ta}_Φ such that the function r ↦ (1/B^{Ta}_Φ) · ℓ^{Ta}_{Φ,h}(Ψ(r), t, y) ∈ G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Note that the causal knowledge transferability assumption implies that the outcome distributions (causal effects) of treatment t in source and target tasks need to be similar in order for transfer learning to be beneficial.
Our main Theorem guarantees that causal knowledge can be transferred and is proved using two Lemmas that are stated below. These lemmas provide upper bounds on the factual and counterfactual losses for transferring causal knowledge and may be by themselves of independent interest. The proofs of these Lemmas and that of the Theorem are provided in the Appendix 8.5. Lemma 1. (Factual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The factual losses of any model (Φ, h) on source and target task satisfy:
∀t ∈ {0, 1}, ϵTa,tF (Φ, h) ≤ ϵ Sr,t F (Φ, h) +B Ta Φ · IPMG(P (Φ(XTat )), P (Φ(XSrt ))) + δ
Lemma 2. (Counterfactual Loss of Source Model on Target Task) Suppose that Assumptions 1-4 hold. The counterfactual losses of any model (Φ, h) on source and target task satisfy:
ϵ^{Ta}_{CF}(Φ, h) ≤ ϵ^{Sr,t=1}_F(Φ, h) + ϵ^{Sr,t=0}_F(Φ, h)
+ B^{Ta}_Φ · IPM_G(P(Φ(X^{Ta}_1)), P(Φ(X^{Sr}_1))) + B^{Ta}_Φ · IPM_G(P(Φ(X^{Ta}_0)), P(Φ(X^{Sr}_0))) + B^{Ta}_Φ · IPM_G(P(Φ(X^{Ta}_0)), P(Φ(X^{Ta}_1))) + 2δ
The above lemmas quantify the relationship between causality and transfer learning. In particular Lemma 2 bounds the inherently non-observable counterfactual loss by tractable quantities. Theorem 1. (Transferability of Causal Knowledge) Suppose that Assumptions 1-4 hold. The performance of source model on target task, i.e. ϵTaPEHE(Φ Sr, hSr), is upper bounded by:
ϵ^{Ta}_{PEHE}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ^{Sr,t=1}_F(Φ^{Sr}, h^{Sr}) + ϵ^{Sr,t=0}_F(Φ^{Sr}, h^{Sr})
+ B^{Ta}_{Φ^{Sr}} · IPM_G(P(Φ^{Sr}(X^{Ta}_1)), P(Φ^{Sr}(X^{Sr}_1)))
+ B^{Ta}_{Φ^{Sr}} · IPM_G(P(Φ^{Sr}(X^{Ta}_0)), P(Φ^{Sr}(X^{Sr}_0)))
+ B^{Ta}_{Φ^{Sr}} · IPM_G(P(Φ^{Sr}(X^{Ta}_0)), P(Φ^{Sr}(X^{Ta}_1))) + 2δ )
Theorem 1 implies that good performance on the target task is guaranteed if (1) the source model has a small factual loss (the first and second terms in the upper bound) and (2) the distributions of the control and the treatment group features are similar in the latent domain (the remaining three terms in the upper bound). This upper bound provides us with a sufficient condition for transfer learning in causal inference scenarios, indicating the transferability of causal knowledge. Please note that these regret bounds can be applied to any transfer learning framework that involves a pair of tasks.
4 SYMMETRIZED TASK AFFINITY FOR CAUSAL INFERENCE TASKS
While these regret bounds indicate the transferability of causal knowledge between any pair of causal inference tasks, they don’t provide a constructive way to choose the best source task to transfer from, when multiple source tasks exist. The order of performance of different models and that of their upper bounds are not necessarily the same. Hence, we propose a label-invariant task affinity that finds the closest source task. Moreover, this task affinity satisfies the symmetry property (see section 4.3) of causal inference tasks. Our new task affinity is built on the Fisher task distance (FTD). We first give a brief introduction to FTD, then we propose a symmetrized Fisher task distance for causal inference tasks.
4.1 TASK REPRESENTATION
The ordered pair of a causal task T and its dataset D = (X,T) will be denoted by (T , D), where the dataset D itself consists of pairs of covariates and their assigned treatments.
We will mathematically formalize a sufficiently well-trained deep network representing a causal task-dataset pair (T , D) in the Appendix 8.7. From now on, we assume that all the previously trained models are sufficiently well-trained networks.
4.2 FISHER TASK DISTANCE
Here, we recall the definition of the Fisher Information matrix for a neural network and the Fisher task distance (Achille et al., 2019; Le et al., 2021b; 2022b). Definition 5 (Fisher Information Matrix). For a neural network Nθa with weights θa trained on data Da, a given test dataset Db and the negative log-likelihood loss function L(θ,D), the Fisher Information matrix is defined as:
Fa,b = ED∼Db [ ∇θL(θa, D)∇θL(θa, D)T ] = −ED∼Db [ H ( L(θa, D) )] , (9)
where H is the Hessian matrix, i.e., H(L(θ,D)) = ∇²_θ L(θ,D), and the expectation is taken w.r.t. the data. It can be proved that the Fisher Information Matrix is asymptotically well-defined (Le et al., 2022b). In practice, we approximate the above with the empirical Fisher Information matrix:
F̂a,b = 1 |Db| ∑ d∈Db ∇θL(θa, d)∇θL(θa, d)T . (10)
Here, the empirical Fisher Information Matrix is positive semi-definite because it is the summation of positive semi-definite terms, regardless of the number of samples. For completeness, we next review the task affinity score (Le et al., 2021b). Definition 6 (Task Affinity Score (TAS)). Let (Ta, Da) and (Tb, Db) respectively denote the source and target task-dataset pairs. Let Da = D^{tr}_a ∪ D^{te}_a (respectively Db = D^{tr}_b ∪ D^{te}_b), with D^{tr}_a (respectively D^{tr}_b) and D^{te}_a (respectively D^{te}_b) being the training and test sets of dataset Da (respectively Db), where the training for Ta is performed using the source representation network Nθa. Consider the Fisher information matrix H(L(θ,Da)) of Nθa with test data D^{te}_a. Let Fa,a be the diagonal matrix of absolute values of the elements of the major diagonal of H(L(θ,Da)), normalized to have unit trace. Let Fa,b be constructed in an analogous manner but using the training data D^{tr}_b (instead of D^{te}_a). The TAS from the source task Ta to the target task Tb is defined as:
s[a, b] = (1/√2) ‖F_{a,a}^{1/2} − F_{a,b}^{1/2}‖_F (11)
It can be proved that 0 ≤ TAS ≤ 1, where TAS = 0 denotes extreme similarity and TAS = 1 indicates extreme dissimilarity. In Appendix 8.5, we prove under stringent assumptions that the order of TAS between candidate source tasks and the target task is preserved when a parallel universe experiment is performed in which the roles of the control and treatment groups are exchanged.
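A minimal sketch of how the empirical diagonal Fisher matrix of Equation 10 and the TAS of Equation 11 could be computed is shown below; the per-example gradient loop and the `loss_fn(model, x, t, y)` interface are assumptions made for illustration, not the exact implementation accompanying this paper.

```python
import torch

def fisher_diagonal(model, loss_fn, examples):
    # Empirical diagonal Fisher (Eq. 10): average of squared per-example gradients
    # of the negative log-likelihood, normalized to unit trace.
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for x, t, y in examples:                      # one example at a time
        model.zero_grad()
        loss_fn(model, x, t, y).backward()
        for f, p in zip(fisher, model.parameters()):
            if p.grad is not None:
                f.add_(p.grad.detach() ** 2)
    fisher = [f / max(len(examples), 1) for f in fisher]
    trace = sum(f.sum() for f in fisher)
    return [f / trace for f in fisher]

def task_affinity_score(fisher_aa, fisher_ab):
    # TAS (Eq. 11): (1/sqrt(2)) * Frobenius norm of the difference of the
    # element-wise square roots of two unit-trace diagonal Fisher matrices.
    total = sum(((faa.sqrt() - fab.sqrt()) ** 2).sum()
                for faa, fab in zip(fisher_aa, fisher_ab))
    return torch.sqrt(0.5 * total)
```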
4.3 LABEL-INVARIANT TASK AFFINITY
Symmetry Property of Causal Inference Tasks Causal inference tasks can be considered as having multiple regression problems, one for each treatment group. Given a source task, if we alternate the treatment labels (i.e., 0 to 1 and 1 to 0), the treatment effect (i.e., E[Y1−Y0|X]) will be negated. Consequently, the unsymmetrized task distance (Le et al., 2022b) between the original task and the permuted task can be very large. However, the original model does not need to be retrained for transfer learning as we only need to permute the roles of output layers of the model to predict the individual treatment effects correctly for each group. In other words, the causal distance between these two permuted tasks must be zero. The following proposed label-invariant task affinity lends itself to this property of causal inference tasks.
Our causal inference tasks are represented by TARNet-type networks. We also restrict to the case where all causal tasks under consideration have the same number of treatment labels M (e.g., M = 2). Let (Ta, Da) (respectively (Tb, Db)), with Da = (Xa, Ta, Ya) (respectively Db = (Xb, Tb, Yb)), be the source (respectively target) causal inference task. Clearly, the treatments Ta and Tb take values in {0, 1, . . . ,M}. Consider the symmetric group S_{M+1} consisting of all permutations of the labels {0, 1, . . . ,M}. For σ ∈ S_{M+1}, let T_{σ(b)} denote the permutation of the target treatment labels under the action of σ. Let d_σ = (1/√2) ‖F_{a,a}^{1/2} − F_{a,σ(b)}^{1/2}‖_F. Then
s_{sym}[a, b] = min_{σ∈S_{M+1}} d_σ
is the label-invariant task affinity distance between the causal tasks Ta and Tb (the pseudocode is provided in Appendix 8.3). It follows from the above definition that the order of closeness of tasks under the label-invariant task affinity is robust to the architectural choice of the representation networks, since the task affinity distance has been shown to enjoy this property (Le et al., 2022b).
5 EXPERIMENTAL RESULTS
We first describe the datasets we have used for our empirical studies. Subsequently, we present empirical results about quantifying the gains of transfer learning for causal inference, demonstrating the strong correlation between the proposed task distance and the counterfactual loss, and showing that the proposed task distance identifies the symmetries within causal inference tasks.
5.1 CAUSAL INFERENCE DATASETS
We present a representative family of causal inference datasets suitable for studying causal knowledge transfer. Some of these are well-established datasets in the literature, while others are motivated by known causal structures in diverse areas such as social sciences, physics, health, and mathematics. Table 1 provides a brief description of the datasets used in our studies. A more detailed description is provided in Appendix 8.4.1. For each dataset, a number of corresponding causal inference tasks exist, which can be used to study transfer learning scenarios. Please note that we can only access the counterfactual data of the synthetic/semi-synthetic datasets (i.e., IHDP, RKHS, Movement, Heat). We are not in possession of the counterfactual data of the real-world datasets (i.e., Twins, Jobs).
5.2 COMPARISON OF PERFORMANCE WITH/WITHOUT TRANSFER LEARNING
Here we briefly discuss our experiments quantifying the impact of transferring causal knowledge on the size of the required training data. In this experiment, we use the Heat (Physics), Movement (Physics), IHDP, and RKHS datasets, for which the counterfactual outcomes are available. We first fix a target causal inference task. For a wide range of balancing weights (α), we record the values of εPEHE obtained when training the model from scratch while increasing the size of the training dataset (measured at the end of the training process). In this process, the training datasets are slowly expanded such that smaller training sets are subsets of larger ones. We then report the minimum εPEHE achieved for each dataset size. For the target task, we identify the closest source task and repeat the above process with a small amount of target task data. We then compare the performance with and without transfer learning to quantify the amount of data needed by transfer learning models to achieve the best possible performance without transferring causal knowledge. The results are summarized in Table 2, which demonstrates that transferring causal knowledge decreases the required amount of training data in this setting by a percentage between 75% and 95%.
5.3 TASK DISTANCE AND COUNTERFACTUAL LOSS
Here, we show empirically the strong correlation between the task distance (which only uses available data) and the counterfactual loss (which is impossible to measure perfectly except for synthetic datasets). Figure 3 shows, for different balancing weights α (see Equation 8), the correlation between the task distance and the counterfactual error on the IHDP, RKHS, Movement (Physics), and Heat (Physics) datasets, for which the counterfactuals are known. It is intuitively appealing and empirically observed that the task distance and the counterfactual loss have a strong correlation: the model of a source task has a smaller counterfactual loss on the target data if the target task is closer (in terms of the proposed task distance). Note that the points in Figure 3 for different values of α (i.e., the balancing weight) are extremely close. This shows that the proposed task affinity not only indicates the counterfactual loss, but is also robust to changes of hyper-parameters. This is a highly desirable property, especially in causal inference scenarios where no validation data can be accessed to cross-validate the hyper-parameters. Our numerical results for the Jobs and Twins datasets verify that the proposed task distance can capture the symmetries within causal inference problems. We flip treatment labels (0 and 1) with probability p (without any changes to the features and the outcomes) independently for each control and treatment data point. In Figure 4, we depict the trend of the symmetrized task distance between the original and the altered dataset by varying p ∈ [0, 1]. The symmetry of the task distance is evident (with some deviation due to the limited training data for calculating the task distance). The altered dataset with p = 1 is the closest to the original dataset (as it should be), since we have completely flipped the treatment assignments. The altered dataset with p = 0.5 is the furthest (as it should be), since we have randomly shuffled the control and the treatment groups. For all datasets, it can also be observed that the task distance trends are robust to variations in the balancing weight.
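For clarity, the altered datasets used in Figure 4 can be generated with a few lines of NumPy; the sketch below illustrates the flipping procedure and is not the exact script used for our experiments.

```python
import numpy as np

def flip_treatments(T, p, rng=None):
    # Independently flip each treatment assignment (0 <-> 1) with probability p,
    # leaving features and outcomes untouched.
    rng = rng or np.random.default_rng()
    T = np.asarray(T)
    flip = rng.random(T.shape[0]) < p
    return np.where(flip, 1 - T, T)
```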
6 RELATED WORK
In the setting of transfer learning (Pan & Yang, 2010; Zhuang et al., 2021), prior learned models are used to increase the learning efficiency and decrease the required data. For instance, the parameters from a trained model may be used as initialization values for the target task. Many approaches in transfer learning (Thrun & Pratt, 1998; Blum & Mitchell, 1998; Silver & Bennett, 2008; Razavian et al., 2014; Finn et al., 2016; Fernando et al., 2017; Rusu et al., 2016) have been proposed, analyzed and applied in various machine learning applications. Transfer learning techniques inherently assume that prior knowledge in the selected source model helps learn a target task. In other words, these methods often do not consider the selection of the base task to perform knowledge transfer. Consequently, in some rare cases, transfer learning may even degrade the performance of
the model (Standley et al., 2020). In order to avoid potential performance loss during knowledge transfer to a target task, task affinity (or task similarity) is considered as a selection method that identifies a group of the closest base candidates from the set of the prior learned tasks. Task affinity has been recently investigated and applied to various domains, such as transfer learning (Zamir et al., 2018; Dwivedi & Roig, 2019; Achille et al., 2019; Wang et al., 2019), neural architecture search (Le et al., 2021a; 2022a; Le et al., 2021), few-shot learning (Pal & Balasubramanian, 2019; Le et al., 2022b), multi-task learning (Standley et al., 2020), and continual learning (Kirkpatrick et al., 2017; Chen et al., 2018). The related prior learned tasks are identified with similarity measures and then employed for knowledge transfer. Task affinity is inherently a non-commutative measure, as it may be more straightforward to transfer knowledge from a more comprehensive task to a simpler task than the other way around (Le et al., 2021b).
While transfer learning and task affinity have been investigated in numerous application areas, their applications to causal inference have not been fully developed. Neyman-Rubin Causal Model and Pearl’s Do-calculus are two popular frameworks for causal studies based on different perspectives. A central question in these frameworks is determining conditions for identifiability of causal quantities such as Average and Individual Treatment Effects. Past work considered estimators for Average Treatment Effect based on various methods such as Covariate Adjustment (a.k.a back-door adjustment) (Pearl, 2009; Rubin, 1978), weighting methods such as those utilizing propensity scores (Rosenbaum & Rubin, 1983), and Doubly Robust estimators (Funk et al., 2011). With the emergence of Machine Learning (ML) techniques, more recent approaches to causal inference include the applications of decision trees(Wager & Athey, 2015), Gaussian Processes (Alaa & van der Schaar, 2017) and Generative Modeling (Yoon et al., 2018) to ITE estimation. In particular, deep neural networks have successfully learned ITEs and estimated counterfactual outcomes by data balancing in the latent domain (Johansson et al., 2016; Shalit et al., 2016). It is important to note that the transportability of causal graphs is another closely related field that has been well-studied in the causality literature (Bareinboim & Pearl, 2021). It studies transferring knowledge of causal relationships in Pearl’s do-calculus framework. In contrast, in this paper we are interested in transferring knowledge of Individual Treatment Effects from a source task to a target task in the Neyman-Rubin framework using representation learning.
7 CONCLUSION
In this paper, we provided a theoretical analysis proving the transferability of causal knowledge and proposed a method for causal transfer learning based on a task affinity framework. To this end, we constructed a new task distance suitable for measuring the similarity of causal inference tasks. Given a new causal inference task, we transferred the causal knowledge from the closest available trained task. Extensive simulations on a representative family of datasets provide empirical evidence demonstrating the gains of our method and the efficacy of the proposed symmetrized task distance. Reductions of as much as 95% in the amount of required training data for new scenarios were observed.
8 APPENDIX
Here, we provide a simple example to help understand causal inference dataset, the pseudocode, the datasets description, the theorems, the proofs for those theorems, and other supplementary materials.
8.1 REPRODUCIBILITY STATEMENT
In the supplementary material, we have included our code implementing TARNet and the proposed task distance.
8.2 CAUSAL INFERENCE: AN EXAMPLE
Let X ∈ X be the features (e.g., age, sex, etc.), the treatment variable T ∈ {A,B} be the indicator representing whether the subject received vaccine A or B, and Y ∈ Y indicate the mortality outcome. The main challenge of causal inference arises from the absence of counterfactual observations: we do not observe the outcomes of individuals upon receiving treatment A if they have received treatment B (and vice versa). The subjects who received vaccine A may be significantly different from those who received vaccine B. This is commonly called selection bias (e.g., elderly people are more likely to receive treatment A than young people). Thus, estimating the counterfactual effects is challenging due to this imbalance between the treatment groups. Let f̂(x, t) be a hypothesis modeling the outcome for an individual x if they received treatment t. The factual loss is defined as
ϵF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, t, y)dxdtdy (12)
By Bayes rule, we can write the factual loss as
ϵ_F(f̂) = ∫_{X×Y} l_f̂(x, A, y) p(x, y|t = A) p(t = A) dx dy + ∫_{X×Y} l_f̂(x, B, y) p(x, y|t = B) p(t = B) dx dy
= p(t = A) ∫_{X×Y} l_f̂(x, A, y) p(x, y|t = A) dx dy + (1 − p(t = A)) ∫_{X×Y} l_f̂(x, B, y) p(x, y|t = B) dx dy
= p(t = A) ϵ^{t=A}_F(f̂) + (1 − p(t = A)) ϵ^{t=B}_F(f̂)
Where we define the factual loss for the group who received vaccine A to be
ϵt=AF (f̂) = ∫ X×Y lf̂ (x,A, y) p(x, y|t = A)dxdy (13)
Respectively, the factual loss for the group who received vaccine B is
ϵt=BF (f̂) = ∫ X×Y lf̂ (x,B, y) p(x, y|t = B)dxdy (14)
Let us now consider a parallel universe where the treatment assignments are flipped (those who received vaccine A receive vaccine B and vice versa). The performance of our hypothesis f̂ in this parallel universe is the counterfactual loss, defined as:
ϵCF (f̂) = ∫ X×{A,B}×Y lf̂ (x, t, y) p(x, 1− t, y)dxdtdy (15)
8.3 PSEUDOCODE FOR SYMMETRIZED TASK DISTANCE
The pseudocode for our proposed task affinity is given in Algorithm 1.
Algorithm 1: Label-Invariant Task Affinity Score for Causal Inference
Data: Source tasks S = {(X1, T1, Y1), . . . , (Xm, Tm, Ym)}; Target task (Xt, Tt, Yt)
Input: TARNet models Nθ1, Nθ2, . . . , Nθm
Output: TARNet model for the target task t
1 Function TAS(Xa, Ta, Xb, Tb, Nθa):
2   Compute Fa,a using Nθa with Xa, Ta
3   Compute Fa,b using Nθa with Xb, Tb
4   return s[a, b] = (1/√2) ‖F_{a,a}^{1/2} − F_{a,b}^{1/2}‖_F
5 Function Main:
6   ▷ Find the closest task in S
7   for i = 1, 2, . . . ,m do
8     Train Nθi for source task i using (Xi, Ti, Yi)
9     Compute the distance from source task i to the target task t: s+_i = TAS(Xi, Ti, Xt, Tt, Nθi)
10    Compute the distance from source task i to the target task t′, whose treatments are the inverted treatments of t: s−_i = TAS(Xi, Ti, Xt, 1 − Tt, Nθi)
11    Symmetrized distance: s^{sym}_i = min(s+_i, s−_i)
12  Closest task: i∗ = argmin_i s^{sym}_i
13  ▷ Causal knowledge transfer
14  Finetune Nθ_{i∗} with the target task's data (Xt, Tt, Yt)
15  return Nθ_{i∗}
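A Python sketch of the symmetrization step (lines 9-12 of Algorithm 1) is given below; `tas` is assumed to be a callable returning the scalar score of lines 1-4, e.g., built from the Fisher-based functions sketched in Section 4.2.

```python
import itertools

def symmetrized_tas(tas, source_model, Xa, Ta, Xt, Tt, n_labels=2):
    # Label-invariant distance: the minimum TAS over all permutations of the
    # target task's treatment labels (binary case: identity and the 0<->1 swap).
    best = float("inf")
    for perm in itertools.permutations(range(n_labels)):
        Tt_perm = [perm[int(t)] for t in Tt]
        best = min(best, float(tas(Xa, Ta, Xt, Tt_perm, source_model)))
    return best
```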
8.4 DATASETS AND EXPERIMENTS DESCRIPTIONS
8.4.1 DATASETS
IHDP The IHDP dataset was first introduced by Hill (2011) based on real covariates available from the Infant Health and Development Program (IHDP), studying the effect of development programs on children. The features in this dataset come from a Randomized Control Trial, and the potential outcomes were simulated using Setting "B" in (Hill, 2011), hence the term semi-synthetic. The dataset consists of 747 individuals (139 in the treatment group and 608 in the control group), each with 25 features. Hill generated the potential outcomes with Y0 ∼ N(exp(β^T (X + W)), 1), where W has the same dimension as X with all entries equal to 0.5, and Y1 ∼ N(β^T (X + W) − ω, 1) with ω = 4. Here β is a 25-element vector of regression coefficients randomly sampled from a categorical distribution with support (0, 0.1, 0.2, 0.3, 0.4) and respective probabilities µ = (0.6, 0.1, 0.1, 0.1, 0.1). We refer to the dataset generated according to these parameters as the base dataset.
We retain the base dataset and introduce 9 new settings according to Table 3 by varying µ and ω. We also generate 10 new datasets for each setting, each consisting of 747 individuals (139 in the treatment group and 608 in the control group) by running the same process but with different random samples of the aforementioned Gaussian distribution.
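A sketch of this outcome simulation is shown below; the random-number generator and the broadcasting details are illustrative choices rather than the exact script of Hill (2011), and X is assumed to be a float array of shape (n, 25).

```python
import numpy as np

def simulate_ihdp_outcomes(X, mu=(0.6, 0.1, 0.1, 0.1, 0.1), omega=4.0, rng=None):
    # Setting "B"-style simulation: beta is drawn from a categorical distribution
    # over {0, 0.1, 0.2, 0.3, 0.4} with probabilities mu; W is an all-0.5 offset.
    rng = rng or np.random.default_rng()
    beta = rng.choice([0.0, 0.1, 0.2, 0.3, 0.4], size=X.shape[1], p=mu)
    W = np.full_like(X, 0.5)
    y0 = rng.normal(np.exp((X + W) @ beta), 1.0)
    y1 = rng.normal((X + W) @ beta - omega, 1.0)
    return y0, y1
```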
Jobs The original Jobs dataset (LaLonde, 1986) has 619 observations. The causal inference task is to learn the effect of participation/lack of participation in a specific professional training program (corresponding to receiving a treatment t = 1) at a time on the success in landing a job in the following three years. We generate a family of related datasets by randomly reverting the treatment assignments of the original dataset with various probabilities p ∈ [0, 1]. Specifically, to generate a dataset, we first choose a probability value p ∈ [0, 1], and then alter individuals (original) treatment assignment (i.e., 0 ↔ 1) with probability p. We choose values
p ∈ {0 = 0/9, 1/9, 2/9, 3/9, 4/9, 5/9, · · · , 9/9 = 1}. Clearly, p = 0 corresponds to the original dataset, and p = 1 corresponds to all reverted treatment assignments. We choose the original Jobs dataset (LaLonde, 1986) as the base dataset for our experiments, as discussed in Section 8.4.2.
Twins The Twins dataset was first introduced by Louizos et al. (2017) based on the collected data about twins’ births in the United States from 1989 to 1991. It is assumed that twins share significant parts of their features. We consider whether one of the twins was born heavier than the other as the treatment assignment and if he/she died in infancy (mortality) as the outcome. We divide the twins into two groups: In the treatment (respectively control) group, we consider the outcome for the heavier (respectively lighter) twin as factual. In both groups, the outcome for the remaining twin is assumed to be counterfactual.
We first construct a base dataset by selecting a set of 2000 pairs of twins from the original dataset (Louizos et al., 2017). Then, each element is assigned to the treatment group according to a Bernoulli experiment with the probability of success q = 0.75.
Next, the base dataset is used to generate more datasets. In an analogous manner to that of the Jobs dataset, we generate a family of related datasets by randomly reverting the treatment assignments of the base dataset (0 ↔ 1) with corresponding probabilities p ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, · · · , 1}. For instance, to generate dataset i = 1, 2, · · · , 11, we let pi = (i − 1)/10 and revert the individual treatment assignments in the base dataset via a Bernoulli experiment with probability of success pi. Clearly, p = 0 corresponds to the original dataset, while p = 1 corresponds to all treatment assignments reverted.
RKHS We generate 100 Reproducing Kernel Hilbert Space (RKHS) datasets, each having 2000 data points. For each dataset, we start by generating the treatment and the control populations X1, X0 ∈ R4 respectively from Gaussian distributions N (µ1, I4) and N (µ0, I4). We sample µ1 ∈ R4 and µ0 ∈ R4 respectively according to Gaussian distributions N (e, I4) and N (−e, I4) where e = [1, 1, 1, 1]T is the all ones vector.
Subsequently, we generate the potential outcome functions f0 and f1 with a Radial Basis Function (RBF) kernel K(·, ·) as described next. Let γ0, γ1 ∈ R4 be two vectors sampled respectively from N(7e, I4) and N(9e, I4), and let λ ∈ N be sampled uniformly from {10, 11, . . . , 100}. For j ∈ {0, 1}:
1. We sample mj ∈ N according to Pois(λ) (e.g., the Poisson distribution with parameter λ),
2. For every i ∈ {1, . . . ,mj}, we sample xij according to N (γj , I4), and 3. The potential outcome functions fj , j = 0, 1 are constructed as fj(·) = ∑mj i=1 K(x i j , ·).
Given the potential outcome functions fj , j ∈ {0, 1}, the corresponding potential outcomes Y0 and Y1 are generated by:
Y0(x) = f0(x), for every x ∈ R4,
and Y1(x) = f1(x), for every x ∈ R4.
We will refer to the first constructed dataset in the above as the base dataset.
Note that in the above, all the generated potential outcome functions are in the same RKHS.
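A sketch of this construction is given below; the RBF bandwidth and the guard against m_j = 0 are assumptions not specified above.

```python
import numpy as np

def rbf(x, c, gamma=1.0):
    # RBF kernel K(c, x); the bandwidth gamma is an assumed value.
    return np.exp(-gamma * np.sum((np.asarray(x) - c) ** 2, axis=-1))

def sample_outcome_function(lam, gamma_j, rng):
    # f_j(.) = sum_{i=1}^{m_j} K(x_i^j, .), with m_j ~ Pois(lam) centres drawn around gamma_j.
    m = max(int(rng.poisson(lam)), 1)
    centres = rng.normal(gamma_j, 1.0, size=(m, gamma_j.shape[0]))
    return lambda x: sum(rbf(x, c) for c in centres)

rng = np.random.default_rng(0)
lam = int(rng.integers(10, 101))                       # lambda ~ Uniform{10, ..., 100}
f0 = sample_outcome_function(lam, rng.normal(7 * np.ones(4), 1.0), rng)
f1 = sample_outcome_function(lam, rng.normal(9 * np.ones(4), 1.0), rng)
```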
Heat (Physics) Consider a hot object left to cool off over time in a room with temperature T0. A person is likely to suffer a burn if he/she touches the object at time u.
The causal inference task of interest is the effect of room temperature T0 on the probability of suffering a burn. This family consists of 20 datasets; each includes 4000 observations with 2000 in each control and treatment group. The treatment in our setting is t = 1 when T0 = 5, and t = 0 when T0 = 25.
The treatment and control groups touching times are respectively sampled from two Chi-squared distributions χ2(5) and χ2(2) (intentionally in order to create artificial bias).
From the solution to Newton’s Heat Equation (Winterton, 1999) the underlying causal structure is governed by the equation
T (u) = C · exp(−ku) + T0 where T (u) is the temperature at time u and C, k > 0 are constants.
We let T0 = 25 and C = 75 for all the control groups in the datasets. Similarly, we let T0 = 5 and C = 95 for all the treatment groups in the datasets. We choose 20 values of k = {0.5, · · · , 2} uniformly spaced in [0.5, 2]. For each value of k, we generate a new dataset. The dataset corresponding to k = 0.5 is referred to as the base dataset.
Let T^0(u) and T^1(u) respectively denote the temperature at time u for the control and treatment groups. The potential outcomes Y0(u) and Y1(u), corresponding to the probability of suffering a burn at time u for respectively the control and treatment groups, are given by
Yj(u) = max( (1/75)(T^j(u) − 25), 0 ) for j ∈ {0, 1}.
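Concretely, the potential outcomes of a Heat dataset can be generated as in the sketch below; the random seed is arbitrary and k = 0.5 corresponds to the base dataset.

```python
import numpy as np

def burn_probability(u, T0, C, k):
    # Newton's cooling law T(u) = C*exp(-k*u) + T0, mapped to Y(u) = max((T(u) - 25)/75, 0).
    T = C * np.exp(-k * u) + T0
    return np.maximum((T - 25.0) / 75.0, 0.0)

rng = np.random.default_rng(0)
u_treated = rng.chisquare(5, size=2000)   # touching times, treatment group (T0 = 5)
u_control = rng.chisquare(2, size=2000)   # touching times, control group (T0 = 25)
y1 = burn_probability(u_treated, T0=5.0, C=95.0, k=0.5)
y0 = burn_probability(u_control, T0=25.0, C=75.0, k=0.5)
```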
Movement (Physics) Consider a falling person in the air encountering air resistance. Opening her/his parachute can change the air resistance and control its descent velocity. The causal inference task of interest is the effect of the air resistance (e.g., with t = 1 or without parachute t = 0) on the object’s velocity at different times.
This family consists of 12 datasets. Each includes 4000 observations with 2000 in each treatment and control group. Here, the covariate is the time u, and the outcome is the velocity at time u. The treatment and control groups’ times are respectively sampled from two Chi-squared distributions χ2(2) and χ2(5) (intentionally in order to create artificial bias).
The underlying causal structure is governed by an ordinary differential equation (ODE) with the following analytical solution describing the velocity of a person at time u:
v(u) = g/C + (v0 − g/C) e^{−Cu} (16)
where C = km , m and k are respectively the mass, and the air resistance constant, and g = 10 is the gravitational constant of earth. In the above v0 = v(0) is the initial velocity at time u = 0. We assume v0 = 0, corresponding to a free fall without initial velocity.
For the control group, we assume m = k = C = 1 and the potential outcome is calculated as Y0(u) = v(u) = 10 − 10e^{−u} using Equation 16. For the treatment groups, we vary m and k for different datasets with (5, 1), (5, 5), (5, 10), (5, 20), (10, 5), (10, 10), (10, 20), (20, 5), (20, 10), (20, 20), (50, 10), (50, 20). The potential outcome Y1(u) is calculated from Equation 16. We have chosen the dataset corresponding to (m, k) = (5, 1) as the base dataset.
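The corresponding potential-outcome computation for a Movement dataset is sketched below, following Equation 16; the seed and the chosen treatment pair are illustrative.

```python
import numpy as np

def velocity(u, m, k, g=10.0, v0=0.0):
    # Analytical solution of the falling-body ODE with air resistance (Eq. 16):
    # v(u) = g/C + (v0 - g/C) * exp(-C*u), with C = k / m.
    C = k / m
    return g / C + (v0 - g / C) * np.exp(-C * u)

rng = np.random.default_rng(0)
u_treated = rng.chisquare(2, size=2000)          # times, treatment group (with parachute)
u_control = rng.chisquare(5, size=2000)          # times, control group
y0 = velocity(u_control, m=1, k=1)               # control: m = k = 1
y1 = velocity(u_treated, m=5, k=1)               # base treatment dataset: (m, k) = (5, 1)
```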
8.4.2 DETAILS OF EXPERIMENTS
In this paper, we first create various causal inference tasks from the above families of datasets. For each family of datasets (e.g. IHDP, Jobs, Twins), the base task is created from its base dataset. Similarly, we construct the other tasks from the remaining datasets in that family. In order to study the effects of transfer learning on causal inference, we define the source tasks and the target tasks as follows:
• In the first experiment in Section 5.3, we choose the base task to be the source task and the other tasks to be the target tasks.
• In the second experiment in Section 5.2, we choose the base task to be the target task and the other tasks to be the source tasks.
8.5 PROOF OF LEMMAS AND THEOREMS
We will use the following known results (Shalit et al., 2016) for causal inference. The proofs for these results are given in (Shalit et al., 2016) .
For x ∈ X , t ∈ {0, 1}, with notational simplicity, we define
LTaΦ,h(x, t) = ∫ Y lh,Φ(x, t, y)P (Y Ta t = y|x)dy.
Theorem 2 (Bounding The Counterfactual Loss). Let Φ be an invertible representation with inverse Ψ. Let p^{t=i}_Φ = p_Φ(r|t = i), i ∈ {0, 1}. Let h : R × {0, 1} → Y be a hypothesis. Assume that there exists a constant BΦ > 0 such that, for t = 0, 1, the function g_{Φ,h}(r, t) := (1/BΦ) · L_{h,Φ}(Ψ(r), t) ∈ G. Then we have
ϵ_{CF}(h, Φ) ≤ (1 − u) ϵ^{t=1}_F(h, Φ) + u ϵ^{t=0}_F(h, Φ) + BΦ · IPM_G(p^{t=1}_Φ, p^{t=0}_Φ). (17)
Theorem 3 (Bounding ϵ_PEHE). The Expected Precision in Estimating Heterogeneous Treatment Effect ϵ_PEHE satisfies
ϵ_PEHE(h, Φ) ≤ 2 ( ϵ_CF(h, Φ) + ϵ_F(h, Φ) − 2σ²_Y )
≤ 2 ( ϵ^{t=0}_F(h, Φ) + ϵ^{t=1}_F(h, Φ) + BΦ IPM_G(p^{t=1}_Φ, p^{t=0}_Φ) − 2σ²_Y ). (18)
Next we relate the performance of target task ϵTa,t=0F (h,Φ) to that of a source task ϵ Sr,t=0 F (h,Φ). Without loss of generality, we present the proof for the case t = 0.
We make the following assumptions throughout the sequel.
1. Assumption 1: The loss function is non-negative, i.e. ℓTaΦ,h(x, t, y) ≥ 0 for all (x, t, y) ∈ (X × {0, 1} × Y),
2. Assumption 2: Φ is injective (thus Ψ = Φ−1 exists on Im(Φ)) (Shalit et al., 2016),
3. Assumption 3: There exists a real function space G on R = Im(Φ) and a constant B^{Ta}_Φ such that the function r ↦ (1/B^{Ta}_Φ) · ℓ^{Ta}_{Φ,h}(Ψ(r), t, y) ∈ G.
4. Assumption 4: Causal Knowledge Transferability Assumption: There exists a function class G′ on Y such that y 7→ lΦ,h(x, t, y) ∈ G′ and IPMG′(P (Y Srt |x), P (Y Tat |x)) ≤ δ for t ∈ {0, 1}.
Proof of Lemma 1
ϵ^{Ta,t=0}_F(Φ, h) − ϵ^{Sr,t=0}_F(Φ, h)
= ∫_X L^{Ta}_{Φ,h}(x, 0) P(X^{Ta}_0 = x) − L^{Sr}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) dx
= ∫_X L^{Ta}_{Φ,h}(x, 0) P(X^{Ta}_0 = x) − L^{Ta}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) + L^{Ta}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) − L^{Sr}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) dx
= ∫_X L^{Ta}_{Φ,h}(x, 0) P(X^{Ta}_0 = x) − L^{Ta}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) dx (=: Γ)
+ ∫_X ( L^{Ta}_{Φ,h}(x, 0) − L^{Sr}_{Φ,h}(x, 0) ) P(X^{Sr}_0 = x) dx (=: Θ)
We next upper bound Θ and Γ. To bound Θ, we use the following inequality:
L^{Ta}_{Φ,h}(x, t) − L^{Sr}_{Φ,h}(x, t) = ∫_Y ℓ_{Φ,h}(x, t, y) ( P(Y^{Ta}_t = y|x) − P(Y^{Sr}_t = y|x) ) dy
≤ max_{f∈G′} | ∫_Y f(y) ( P(Y^{Ta}_t = y|x) − P(Y^{Sr}_t = y|x) ) dy | = IPM_{G′}( P(Y^{Ta}_t = y|x), P(Y^{Sr}_t = y|x) ) ≤ δ
With the above inequality:
Θ = ∫ X ( LTaΦ,h(x, 0)− LSrΦ,h(x, 0) ) P (XSr0 = x)dx
≤ ∫ X δP (XSr0 = x)dx = δ ∫ X P (XSr0 = x)dx = δ
To bound Γ, we use the change of variables formula:
Γ = ∫_X L^{Ta}_{Φ,h}(x, 0) P(X^{Ta}_0 = x) − L^{Ta}_{Φ,h}(x, 0) P(X^{Sr}_0 = x) dx
= ∫_R L^{Ta}_{Φ,h}(Ψ(r), 0) P(Φ(X^{Ta}_0) = r) − L^{Ta}_{Φ,h}(Ψ(r), 0) P(Φ(X^{Sr}_0) = r) dr
≤ B^{Ta}_Φ · max_{g∈G} | ∫_R g(r) ( P(Φ(X^{Ta}_0) = r) − P(Φ(X^{Sr}_0) = r) ) dr | = B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Sr}_0)) ).
Combining the above upper bounds for Γ and Θ, we have
ϵTa,t=0F (Φ, h)− ϵ Sr,t=0 F (Φ, h) ≤ B Ta Φ · IPMG ( P ( Φ(XTa0 ) ) , P ( Φ(XSr0 ) )) + δ.
We conclude that
ϵTa,t=0F (Φ, h) ≤ ϵ Sr,t=0 F (Φ, h) +B Ta Φ · IPMG ( P ( Φ(XTa0 ) ) , P ( Φ(XSr0 ) )) + δ.
This concludes the proof.
Proof of Lemma 2 We apply Theorem 2 to establish an upper bound for the counterfactual loss of the target task and subsequently apply Lemma 1:
ϵ^{Ta}_{CF}(h, Φ) ≤ ϵ^{Ta,t=1}_F(h, Φ) + ϵ^{Ta,t=0}_F(h, Φ) + B^{Ta}_Φ IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Ta}_1)) ).
Therefore,
ϵ^{Ta}_{CF}(h, Φ) ≤ ϵ^{Sr,t=1}_F(Φ, h) + ϵ^{Sr,t=0}_F(Φ, h) + 2δ
+ B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_1)), P(Φ(X^{Sr}_1)) )
+ B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Sr}_0)) )
+ B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Ta}_1)) ).
This concludes the proof.
Proof of Theorem 1 By applying Theorem 3, we get
ϵ^{Ta}_{PEHE}(h, Φ) ≤ 2 ( ϵ^{Ta,t=0}_F(h, Φ) + ϵ^{Ta,t=1}_F(h, Φ) + B^{Ta}_Φ IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Ta}_1)) ) ).
After applying Lemma 1 to the first and second terms in the above:
ϵ^{Ta}_{PEHE}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ^{Sr,t=0}_F(Φ, h) + B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Sr}_0)) ) + δ
+ ϵ^{Sr,t=1}_F(Φ, h) + B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_1)), P(Φ(X^{Sr}_1)) ) + δ
+ B^{Ta}_Φ · IPM_G( P(Φ(X^{Ta}_0)), P(Φ(X^{Ta}_1)) ) ).
Hence,
ϵ^{Ta}_{PEHE}(Φ^{Sr}, h^{Sr}) ≤ 2 ( ϵ^{Sr,t=1}_F(Φ^{Sr}, h^{Sr}) + ϵ^{Sr,t=0}_F(Φ^{Sr}, h^{Sr})
+ B^{Ta}_{Φ^{Sr}} · IPM_G( P(Φ^{Sr}(X^{Ta}_1)), P(Φ^{Sr}(X^{Sr}_1)) )
+ B^{Ta}_{Φ^{Sr}} · IPM_G( P(Φ^{Sr}(X^{Ta}_0)), P(Φ^{Sr}(X^{Sr}_0)) )
+ B^{Ta}_{Φ^{Sr}} · IPM_G( P(Φ^{Sr}(X^{Ta}_0)), P(Φ^{Sr}(X^{Ta}_1)) ) + 2δ ).
This concludes the proof.
8.6 BASELINE: DATA BUNDLING
8.6.1 TRANSFER LEARNING SCENARIO: DIFFERENT POTENTIAL OUTCOMES
In many causal inference scenarios, we only have access to the trained model and the corresponding data is not available. For instance, in medical applications, this could be the case due to privacy reasons. Consequently, bundling the datasets of source tasks is not feasible. In contrast, for some specific applications, the data may be available. In this case, we create another baseline referred to as data bundling.
In data bundling, we create the bundled dataset by combining the datasets of the source tasks and the dataset of the target task. Below, we compare our approach with data bundling for the IHDP and the Movement (Physics) datasets. For data bundling, we report the model's best performance, i.e., εPEHE, achieved by hyperparameter search. For our approach, we only report the performance of the model with the lowest training error. This gives an additional advantage to the data bundling baseline. The results are summarized in Figure 5. Even with the aforementioned advantage, the data bundling method has poor performance. This may be due to data imbalance, lack of precision in determining similarity from the propensity score, and differences in outcome functions.
8.6.2 SAME POTENTIAL OUTCOMES, DIFFERENT PROPENSITY SCORES
We compare the effectiveness of data bundling with that of transfer learning in scenarios where only the propensity scores are changing across tasks (i.e., same potential outcome functions). Figure 6 provides a summary of our findings. In this experiment, we generate synthetic source and target datasets, each having 1000 data points. For the source dataset, we generate the treatment XSr1 and the control XSr0 populations respectively from two Gaussian distributions N ((0, 0), I2) and N ((5, 5), 2·I2). Subsequently, we build the target dataset by adding noise to the source task samples. Specifically, we add standard Gaussian noise to the ith sample of the source task in order to generate the ith sample of the target task, i.e., xTai = x Sr i + ϵi where x Ta i and x Sr i are respectively the i th samples of the target task and the source task, and ϵi ∼ N ((0, 0), I2) is the additive noise. For every sample i ∈ {1, .., 1000}, we assign the treatment labels as follows:
• If the ith sample of the source task is in the treatment group, then the corresponding ith sample of the target task is also in the treatment group.
• If the ith sample of the source task is in the control group, then the corresponding ith sample of the target task is in the treatment group with probability p = 0.1, and in the control group with p = 0.9.
Subsequently, the output is defined based on the potential outcome functions f0 and f1 as follows:
f0(x) = 0.5 · exp(−0.005 B_0^T x),
and f1(x) = 0.5 · exp(−0.03 B_1^T x) + 5,
where B0, B1 ∈ R2 and their components are respectively sampled from N (0, 1) and N (4, 1). Please note that these are only sampled once, and these parameters are shared between the source and the target tasks. Our experiments (see Figure 6) suggest that even when only the propensity scores are different, transfer learning has better performance than bundling the source and the target datasets together.
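A sketch of this synthetic construction is given below; the equal split of the source covariates between the two groups and the random seed are illustrative assumptions not stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Source covariates: treatment and control populations from two Gaussians (equal split assumed).
x1_src = rng.multivariate_normal([0, 0], np.eye(2), size=n // 2)
x0_src = rng.multivariate_normal([5, 5], 2 * np.eye(2), size=n // 2)
x_src = np.vstack([x1_src, x0_src])
t_src = np.concatenate([np.ones(n // 2, dtype=int), np.zeros(n // 2, dtype=int)])
# Target covariates: source samples plus standard Gaussian noise.
x_tgt = x_src + rng.standard_normal(x_src.shape)
# Treated source samples stay treated; control samples move to treatment with probability 0.1.
t_tgt = np.where(t_src == 1, 1, rng.binomial(1, 0.1, size=n))
# Potential outcome functions shared between source and target (B0, B1 sampled once).
B0, B1 = rng.normal(0, 1, size=2), rng.normal(4, 1, size=2)
y_tgt = np.where(t_tgt == 1,
                 0.5 * np.exp(-0.03 * (x_tgt @ B1)) + 5,
                 0.5 * np.exp(-0.005 * (x_tgt @ B0)))
```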
8.7 TASK DISTANCE
Let PNθ (T , Dte) ∈ [0, 1] be a function that measures the performance of a given model Nθ parameterized by θ ∈ Rd on the test set Dte of the causal task T .
Definition 7 (ε-approximation Network). A model Nθ is called an ε-approximation network for a task-dataset pair (T , D) if it is trained using the training data Dtr such that PNθ (T , Dte) ≥ 1− ε, for a given 0 < ε < 1.
8.7.1 COMPARISON BETWEEN UNSYMMETRIZED AND SYMMETRIZED TASK DISTANCE
We compare the unsymmetrized and symmetrized task distances on the Jobs and the Twins dataset. Figure 7 shows that the proposed symmetrized task distance has successfully captured the symmetries within causal inference tasks. p (on the x-axis) denotes the probability of flipping treatment assignments of the original dataset. The altered datasets with p = 1 (i.e., the flipped dataset) and p = 0 (i.e., the original dataset) are the closest task to the original task (as expected). The altered dataset with p = 0.5 is the furthest dataset (as expected). Thus, the trend of the points is expected to resemble an inverted ’U’. We observe that the symmetrized task distance exhibits this trend. In contrast, the unsymmetrized task distance (in the right figures) fails to demonstrate this trend.
8.7.2 TASK DISTANCE BETWEEN COUNTERFACTUAL TASKS
In the following section, we denote the pair a = (Ta, Da) by aF = (TaF , DaF ) (respectively aCF = (TaCF , DaCF )) whenever Da is sampled from the factual (respectively counterfactual) distribution. We refer to (TaF , DaF ) and (TaCF , DaCF ) as the corresponding factual and counterfactual tasks. The following theorem proves that the order of proximity of tasks is preserved even if we go to a parallel universe where we observe the counterfactual tasks instead. In other words, a task, which is more similar to the target task when measured using factual data, remains more similar to the target task even when measured using counterfactual data. Theorem 4. Let T be the set of tasks and let aF = (TaF , DaF ), bF = (TbF , DbF ), and cF = (TcF , DcF ) be three factual tasks and aCF = (TaCF , DaCF ), bCF = (TbCF , DbCF ), and cCF = (TcCF , DcCF ) their corresponding counterfactual tasks. Suppose that there exists a class of neural networks N = {Nθ}θ∈Θ for which:
∀a, b, c ∈ T, s[a, b] ≤ s[a, c] + s[c, b] (19)
and the TAS between the factual and the counterfactual can be arbitrarily small
∀ϵ > 0,∃Nθ ∈ N , s[aF , aCF ] < ϵ (20)
Then we have the following result,
s[aF , bF ] ≤ s[aF , cF ] =⇒ s[aCF , bCF ] ≤ s[aCF , cCF ] (21)
Proof of Theorem 4 Suppose that s[aF , bF ] ≤ s[aF , cF ]. Then for every ϵ > 0 we have,
s[aCF , bCF ] ≤ s[aCF , aF ] + s[aF , bF ] + s[bF , bCF ] ≤ ϵ+ s[aF , cF ] + ϵ ≤ s[aF , aCF ] + s[aCF , cCF ] + s[cF , cCF ] + 2ϵ ≤ s[aCF , cCF ] + 4ϵ
This is true for every ϵ > 0, therefore s[aCF , bCF ] ≤ s[aCF , cCF ]. This concludes the proof. | 1. What is the focus and contribution of the paper on causal transfer learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and reproducibility?
3. Do you have any concerns or suggestions regarding the paper's structure, layout, and exposition?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or points raised by the reviewer that require further explanation or justification from the authors? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a new method for causal transfer learning based on task affinity frameworks. They derive a new metric which is able to measure the similarity between two causal inference tasks. They show how this contribution is able to transfer knowledge from 'close' tasks to the present one, reducing the need for training data by as much as "95%" in some cases.
Strengths And Weaknesses
This is the main review. It proceeds according to the section headings of your paper.
Abstract
I appreciate that you have a lot to say but the abstract is meant to be but a brief summary of your method and results. Please compress it, it is too detailed for a quick 'skim' (which is what a lot of your readers will be doing -- if it is too long they will not even bother, so it is a fine balance to strike).
Introduction
You make a lot of (interesting) claims in the first paragraph but do not provide a single reference for any one of them. Transportability is a well-studied concept in causal inference (and as 'transfer learning' outside of causal inference too, as your second paragraph demonstrates). To lend some credence to the start of your paper, please furnish it with more sources to add some veracity to your narrative.
In particular, you say "natural neural systems can intuitively determine the similarity of scenarios and estimate average gains of action in a new scenario from those of past experiences" - that is a very large claim which at present is unsupported.
I like your camel and horses example. Very well put.
Nitpicking, figure 1: the quotation marks around 'flipped' should be done using `` and '' to properly render in latex.
Sentence construction, legend, figure 1: "we cannot have validation dataset" --> 'we are not in possession of a validation dataset'.
What you are talking about in figure 1 and paragraph 2 of this section is known as the fundamental problem of causal inference (Rubin 1974; Holland 1986). You should mention that for consistency and indexing.
Shouldn't your causal diagrams be different in figure 2 if they are indeed different tasks? What similarity are you measuring if they are the same causal diagram?
In item three you say: "These bounds formally imply transferability for causal knowledge." - that is an odd sentence, could you explain better what you mean? It sounds as if the existence of the bounds implies that causal knowledge can de facto be transported.
Related work
The structure of your paper lends itself well to placing the related work after your contributions (method/theory etc). It will improve the flow of the paper.
Transfer learning goes back much further than recent applications to neural networks. Sebastian Thrun discussed it as early as 1997. My meaning is that don't get too overly focused on deep learning utility and be aware that it is a much more general concept.
Very comprehensive, well done.
Mathematical background
What is a causal inference dataset? Is there a source? Do you mean observational data (sampled without the influence of an intervention)? Is there a source for this background?
T = 0 is your observational setting then, the empty-set intervention or no intervention (this is the do-calculus perspective).
How should we think about treatment assignment when M > 1 - is this the setting where multiple types of treatments are assigned rather than the binary case where it is either or?
Provide a source for definition 1.
Provide a source for definition 2 (unless it is a novel contribution in which case you should state this clearly).
It would be cleaner if you separated the factual and the counterfactual loss into two separate definitions.
Give the losses equation numbers so that they can properly referenced.
It is very confusing that the factual loss for the treatment and control renders a different expression, compared to the factual loss definition higher up. Where does the conditional probability come from? Why is it not defined as such in the first definition of the loss? In this setting, if I have understood correctly, you are defining the integral over the measured space X × {0, 1} × Y where M = 0, 1. Is that correct? So you are defining the loss function for the binary case rather than the more general case M?
You say the PEHE is "often used as the performance metric for estimation of ITEs" - do you have a source or sources for this?
Please give the IPM an equation number.
What is the class of functions in G, in the IPM?
Give a source for definition 4 and equation 3.
If you use \text{TAS} in math-mode latex will render it nicer than if you simply write TAS, as the engine interprets the letters as variables and hence italicises them.
Unless I have missed something, your contribution starts on page 6. Consequently you have spent this space introducing, reviewing and providing the necessary tools for your method. I would suggest that this is not a good use of space and that you should considerably revise the layout of the manuscript. It is not useful for your own method to have 2/3 of the paper spent on topics which are not your method and contributions. You can compress a lot of your sections, remove some definitions (and just provide a source if you go down that route) - I also recommend that you re-write the body to be more compressed and less verbose. You can also move a lot of items (e.g. definitions) to the appendix.
PROPOSED METHOD AND TRANSFERABILITY
Why do all tasks have the same treatment labels?
Where does Ψ come from in assumption 2; what is Ψ?
Experimental results
Redo the legend for table 1 in the format 'total number of tasks for the dataset (#task)' it will be less cluttered and easier for the reader.
For figure 3, it would be helpful to know what a good trend should look like. Is it good that all alphas lie on the line y = x in figure 3 (RKHS)?
What does it mean for the balancing weights in figure 3 (movement) to overlap? Similarly for heat.
Your figures are difficult to interpret with the current explanation that goes with them. It is not immediately clear what one should be looking for (or what one should not be looking for) in order to make a useful judgement as to the validity/utility of your proposed method. Perhaps if you picked one dataset and described it in much more detail it would be more clear what you are trying to show.
Table 2 is much more understandable than the figures. I would suggest you lead with that and then go with the figures.
Change the legend for table 2 using the same recommendation as for table 1.
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is very densely written. There is a lot of material required to understand the method that the authors are proposing. I would recommend a rewrite to make the narrative easier to follow and move material around which is not necessary to understand the main contribution.
Quality
The writing and figures are of high quality but the narrative and exposition are, as said, difficult to follow owing to the copy-editing choices the authors have made.
Novelty
Difficult to measure since they do not compare to any other method. Though the results are impressive (especially table 2) it is hard to say if similar results could have been found using another method.
Reproducibility
The authors have not mentioned the release of any implementation or code; consequently, this work is at present not reproducible.
ICLR | Title
Selfish Sparse RNN Training
Abstract
Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a large and dense network (sparse-to-sparse), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in Recurrent Neural Networks (RNNs) setting. In this paper, we propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell weights for a better regularization. Further, we introduce SNT-ASGD, a variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, even better than dense model results, with various types of RNNs on Penn TreeBank and Wikitext-2 datasets.
1 INTRODUCTION
Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task to evaluate models’ ability to capture long-range context, language modeling has witnessed great progress in RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), ordered neurons LSTM (ON-LSTM) (Shen et al., 2019), and effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), DropConnect (Merity et al., 2018) have been proposed to significantly improve the performance of RNNs.
At the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 Regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate the inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency for both, the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to be sparsified (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with the existing DST approaches. The above-mentioned problems heavily limit the performance
of the off-the-shelf sparse training methods in the RNN field. For instance, while “The Rigged Lottery” (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed computational cost throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:
• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters. This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms. • We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged
stochastic gradient descent optimizer, which overcomes the over-sparsified problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training. • We demonstrate state-of-the-art sparse training performance with various RNN models,
including stacked LSTMs (Zaremba et al., 2014), RHNs, ordered neurons LSTM (ONLSTM) on Penn TreeBank (PTB) dataset (Marcus et al., 1993) and AWD-LSTM-MoS on WikiText-2 dataset (Melis et al., 2018). • We present an approach to analyze the evolutionary trajectory of the sparse connectivity
optimized by dynamic sparse training from the perspective of graph. With this approach, we show that there exist many good structural local optima (sparse sub-networks having equally good performance) in RNNs, which can be found in an efficient and robust manner. • Our analysis shows two surprising phenomena in the setting of RNNs contrary to CNNs:
(1) random-based weight growth performs better than gradient-based weight growth, (2) uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures.
2 RELATED WORK
Dense-to-Sparse. There are a large number of works operating on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people's attention based on the idea of iterative pruning and retraining with modern architectures. Some recent works were proposed to further reduce the number of iterative retraining cycles, e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis showing that the sub-networks ("winning tickets") obtained via iterative pruning combined with their "lucky" initialization can outperform the dense networks. Zhou et al. (2019) discovered that the sign of their initialization is the crucial factor that
makes the "winning tickets" work. Our work shows that there exists a much more efficient and robust way to find those "winning tickets" without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with the state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With lower training costs, our method is able to discover sparse networks that can achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs. Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structure by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. Some works aim to find sparse neural networks by pruning once prior to the main training phase based on some salience criteria, such as connection sensitivity (Lee et al., 2019), signal propagation (Lee et al., 2020), and gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of the dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).
Sparse-to-Sparse. Recently, many works have emerged to train intrinsically sparse neural networks from scratch to obtain efficiency both for training and inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works expand static sparse training into CNNs based on expander graphs and show comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET) which initializes a sparse network and dynamically changes the sparse connectivity by a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior. The iterative configuration updates have been proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved the sparse training performance by growing free weights according to their momentum. It requires extra computation and memory to update the dense momentum tensor for each iteration. Further, Evci et al. (2020) introduced RigL which activates weights with the highest magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing a large number of memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over weights. More recently, Liu et al. (2020b) proved the potentials of DST by developing for the first time an independent software framework to train very large truly sparse MLPs trained with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs.
We summarize the properties of all approaches compared in this paper in Table 1. As with SET, our method can guarantee Backward Sparse, meaning that it does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H.
3 SPARSE RNN TRAINING
Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can easily be applied to any other RNN variant; the only difference is the number of cell weights. Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to those of the dense model from the beginning. To explore more sparse structures while maintaining a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1.
3.1 DYNAMIC SPARSE CONNECTIVITY
We consider uniform sparse initialization, magnitude weight removal, random weight growth, and cell weight redistribution as the main components of our dynamic sparse connectivity method, which together ensure a fixed number of parameters and a clear sparse backward pass, as discussed next. Notation. Given a dataset of N samples D = {(x_i, y_i)}_{i=1}^{N} and a network f(x; θ) parameterized by θ, we train the network to minimize the loss function ∑_{i=1}^{N} L(f(x_i; θ), y_i). The basic mechanism of sparse neural networks is to use a fraction of the parameters to reparameterize the whole network, while preserving the performance as much as possible. Hence, a sparse neural network can be denoted as f_s(x; θ_s) with a sparsity level S = 1 − ‖θ_s‖_0 / ‖θ‖_0, where ‖ · ‖_0 is the ℓ_0-norm. Uniform Sparse Initialization. First, the network is uniformly initialized with a sparse distribution in which the sparsity level of each layer is the same, S. More precisely, the network is initialized by:
θ_s = θ ⊙ M (1)
where θ is a dense weight tensor initialized in a standard way; M is a binary tensor, in which nonzero elements are sampled uniformly based on the sparsity S; and ⊙ refers to the Hadamard product. Magnitude Weight Removal. For non-RNN layers, we use magnitude weight removal followed by random weight growth to update the sparse connectivity. We remove a fraction p of the weights with the smallest magnitude after each training epoch. This step is performed by changing the binary tensor M as follows: M = M − P (2) where P is a binary tensor with the same shape as M, in which the nonzero elements have the same indices as the fraction p of smallest-magnitude nonzero weights in θ_s, with ‖P‖_0 = p‖M‖_0. Random Weight Growth. To keep a fixed parameter count, we randomly grow the same number of weights immediately after weight removal, by:
M = M + R (3)
where R is a binary tensor in which the nonzero elements are randomly located at the positions of zero elements of M. We choose random growth to avoid using any information from the non-existing weights, so that both the feedforward and the backpropagation passes are completely sparse. It is more desirable to have such pure sparse structures as it enables the possibility of conceiving in the future specialized hardware accelerators for sparse neural networks. Besides, our analysis of growth methods in Section 4.3 shows that random growth can explore more sparse structural degrees of freedom than gradient growth, which might be crucial to sparse training. Cell Weight Redistribution. Our dynamic sparse connectivity differs from previous methods mainly in cell weight redistribution. For RNN layers, the naive approach is to sparsify all cell weight tensors independently at the same sparsity, as shown in Liu et al. (2019), which is a straightforward extension of applying SET to RNNs. Essentially, it is more desirable to redistribute new parameters to cell weight tensors dependently, as all cell weight tensors collaborate to regulate information. Intuitively, we redistribute new parameters in such a way that weight tensors containing more large-magnitude weights should have more parameters. Large-magnitude weights indicate that their loss
gradients are large and few oscillations occur. Thus, weight tensors with more large-magnitude connections should be reallocated with more parameters to accelerate training. Concretely, for each RNN layer l, we remove weights dependently given by an ascending sort:
Sort_p(|θ^l_1|, |θ^l_2|, ..., |θ^l_t|) (4)
where {θ^l_1, θ^l_2, ..., θ^l_t} are all weight tensors within each cell, and Sort_p returns the indices of the p smallest-magnitude weights. After weight removal, new parameters are uniformly grown to each weight tensor to implement our cell weight redistribution gradually. We also tried other approaches including the mean value of the magnitude of nonzero weights or the mean value of the gradient magnitude of nonzero weights, but our approach achieves the best performance, as shown in Appendix B. We further demonstrate the final sparsity breakdown of cell weights learned by our method in Appendix M and observe that weights of forget gates are consistently sparser than other weights for all models. Note that redistributing parameters across cell weight tensors does not change the FLOP counting, as the sparsity of each layer is not changed. In contrast, the across-layer weight redistribution used by DSR and SNFS affects the sparsity level of each layer. As a result, it will change the number of floating-point operations (FLOPs).
Similar to SNFS, we also decay the removing rate p to zero with cosine annealing. We further use Eq. (1) to enforce the sparse structure before the forward pass and after the backward pass, so that the zero-valued weights will not contribute to the loss. All the newly activated weights are initialized to zero.
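A minimal NumPy sketch of one dynamic sparse connectivity update is given below, covering uniform mask initialization (Eq. 1), magnitude removal (Eq. 2), random growth (Eq. 3), and the cosine-annealed removing rate. It is illustrative rather than the exact implementation: it operates on a flat per-layer weight vector, omits the cell weight redistribution step, and all function and variable names are our own.

```python
import numpy as np

def init_uniform_mask(n_weights, sparsity, rng):
    """Flat binary mask at sparsity level S; applied per layer, this gives the uniform initialization."""
    mask = np.zeros(n_weights, dtype=bool)
    n_keep = int(round((1.0 - sparsity) * n_weights))
    mask[rng.choice(n_weights, size=n_keep, replace=False)] = True
    return mask

def cosine_removal_rate(p0, epoch, total_epochs):
    """Removing rate decayed from p0 to zero with cosine annealing."""
    return 0.5 * p0 * (1.0 + np.cos(np.pi * epoch / total_epochs))

def update_connectivity(theta, mask, p, rng):
    """One remove-and-regrow step on a flat weight vector:
    magnitude removal (Eq. 2) followed by random growth (Eq. 3)."""
    n_remove = int(p * mask.sum())
    # Remove the fraction p of active weights with the smallest magnitude.
    active = np.flatnonzero(mask)
    smallest = active[np.argsort(np.abs(theta[active]))[:n_remove]]
    mask[smallest] = False
    # Regrow the same number of connections at random empty positions.
    free = np.flatnonzero(~mask)
    grown = rng.choice(free, size=n_remove, replace=False)
    mask[grown] = True
    theta[grown] = 0.0      # newly activated weights are initialized to zero
    theta[~mask] = 0.0      # re-apply Eq. (1): removed weights no longer contribute
    return theta, mask

# Example usage: rng = np.random.default_rng(0); mask = init_uniform_mask(theta.size, 0.67, rng)
```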
3.2 SPARSE NON-MONOTONICALLY TRIGGERED ASGD
Non-monotonically Triggered ASGD (NT-ASGD) has been shown to achieve surprising performance with various RNNs (Merity et al., 2018; Yang et al., 2018; Shen et al., 2019). However, it becomes less appealing for sparse RNN training. Unlike dense networks, in which every parameter in the model is updated at each iteration, for sparse networks the zero-valued weights remain zero when they are not activated. Once these zero-valued weights are activated, the original averaging operation of standard NT-ASGD will immediately bring them close to zero. As a result, after the averaging operation is triggered, the number of valid weights decreases sharply, as shown in Figure 2. To alleviate this problem, we introduce SNT-ASGD as follows:
w̃_i = 0 if m_i = 0, and w̃_i = (∑_{t=T_i}^{K} w_{i,t}) / (K − T_i + 1) if m_i = 1, for all i. (5)
where w̃_i is the value returned by SNT-ASGD for weight w_i; w_{i,t} represents the actual value of weight w_i at the t-th iteration; m_i = 1 if the weight w_i exists and m_i = 0 means that the weight w_i does not exist; T_i is the iteration in which the weight w_i grows most recently; and K is the total
number of iterations. We demonstrate the effectiveness of SNT-ASGD in Figure 2. At the beginning, trained with SGD, the number of weights with high magnitude increases fast. However, the trend starts to descend significantly once the optimization switches to NT-ASGD at the 80th epoch, whereas the trend of SNT-ASGD continues to rise after a small drop caused by the averaging operation.
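A compact sketch of how the rule in Eq. (5) can be tracked is given below: a running sum and a counter are kept per weight and reset whenever that weight is regrown, so the average never mixes in the periods when the weight did not exist. This is our own illustrative reading, not the exact optimizer code, and it omits the non-monotonic trigger that decides when the averaged weights are actually used.

```python
import numpy as np

class SNTASGDAverage:
    """Sketch of SNT-ASGD averaging (Eq. 5): each existing weight is averaged only
    over the iterations since it was last grown (T_i); non-existing weights stay zero."""

    def __init__(self, n_weights):
        self.running_sum = np.zeros(n_weights)   # sum of w_{i,t} for t = T_i .. current
        self.count = np.zeros(n_weights)         # K - T_i + 1 for each weight

    def accumulate(self, theta, mask, just_grown):
        # A weight that was just regrown restarts its average at t = T_i.
        self.running_sum[just_grown] = 0.0
        self.count[just_grown] = 0.0
        # Only currently existing weights (m_i = 1) contribute to their own average.
        self.running_sum[mask] += theta[mask]
        self.count[mask] += 1.0

    def averaged_weights(self, mask):
        # Eq. (5): zero where m_i = 0, running mean where m_i = 1.
        avg = np.zeros_like(self.running_sum)
        valid = mask & (self.count > 0)
        avg[valid] = self.running_sum[valid] / self.count[valid]
        return avg
```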
To better understand how the proposed components, cell weight redistribution and SNT-ASGD, improve the sparse RNN training performance, we further conduct an ablation study in Appendix A. It is clear that both of them lead to significant performance improvements.
4 EXPERIMENTAL RESULTS
We evaluate Selfish-RNN with various models, including stacked LSTMs, RHNs, and ON-LSTM on the Penn TreeBank dataset and AWD-LSTM-MoS on the WikiText-2 dataset. The performance of Selfish-RNN is compared with five state-of-the-art sparsity-inducing techniques, including Intrinsic Sparse Structures (ISS) (Wen et al., 2018), SET, DSR, SNFS, and RigL. ISS is a method to explore sparsity inside RNNs by using group Lasso regularization. We choose the Adam optimizer (Kingma & Ba, 2014) for SET, DSR, SNFS, and RigL. We also evaluate our method with two state-of-the-art RNN models, ON-LSTM on PTB and AWD-LSTM-MoS on WikiText-2, as reported in Appendix D and Appendix E, respectively.
4.1 STACKED LSTMS
As introduced by Zaremba et al. (2014), stacked LSTMs (large) is a two-layer LSTM model with 1500 hidden units for each LSTM layer. We choose the same sparsity as ISS, 67% and 62%. We empirically found that 0.7 is a safe choice for the removing rate of stacked LSTMs. The clip norm is set to 0.25 and all models are trained for 100 epochs.
Results are shown on the left side of Table 2. To evaluate our sparse training method fairly, we also provide a new dense baseline trained with the standard NT-ASGD, achieving a test perplexity 6 points lower than the widely-used baseline. We also test whether a small dense network and a static sparse network
with the same number of parameters as Selfish-RNN can match the performance of Selfish-RNN. We train a dense stacked LSTM with 700 hidden units, named "Small". In line with previous studies (Mocanu et al., 2018; Mostafa & Wang, 2019; Evci et al., 2020), both the static sparse networks and the small dense network fail to match the performance of Selfish-RNN. Training a static sparse network from scratch with uniform distribution performs better than the one with ER distribution. Trained with Adam, all sparse training techniques fail to match the performance of ISS and dense models. Models trained with SNT-ASGD obtain substantially lower perplexity, and Selfish-RNN achieves the lowest one, even better than the new dense baseline, with much lower training costs.
To better understand the effect of different optimizers on different DST methods, we report the performance of all DST methods trained with Adam, momentum SGD, and SNT-ASGD. The learning rate of Adam is set to 0.001. The learning rate of momentum SGD is 2, decreased by a factor of 1.33 once the loss fails to decrease, and the momentum coefficient is 0.9. The weight decay is set to 1.2e-6 for all optimizers. For SNFS (SNT-ASGD), we replace the momentum of the weights with their gradients, as SNT-ASGD does not involve any momentum terms. We use the same hyperparameters for all DST methods. The results are shown in Table 3. It is clear that SNT-ASGD brings significant perplexity improvements to all sparse training techniques. This further stands as empirical evidence that SNT-ASGD is crucial to improving the sparse training performance in the RNN setting. Moreover, compared with other DST methods, Selfish-RNN is quite robust to the choice of optimizer due to its simple scheme to update the sparse connectivity. Advanced strategies, such as the across-layer weight redistribution used in DSR and SNFS and the gradient-based weight growth used in RigL and SNFS, heavily depend on the optimizer. They might work decently for some optimization methods but may not work for others.
Additionally, note that different DST methods use different sparse distributions, leading to very different computational costs even at the same sparsity. We also report the approximate training and inference FLOPs for all methods. The FLOP gap between Selfish-RNN and RigL is very small, whereas SNFS requires more FLOPs than our method for both training and inference (see Appendix L for details on how FLOPs are calculated). ISS achieves a lower number of FLOPs, since it does not sparsify the embedding layer and therefore its LSTM layers are much sparser than the LSTM layers obtained by other methods. This results in fewer FLOPs, as LSTM layers typically require more FLOPs than other layers.
4.2 RECURRENT HIGHWAY NETWORKS
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs that allows exploring deeper architectures inside the recurrent transition. See Appendix C for the experimental settings of RHN. The results are shown on the right side of Table 2. Selfish-RNN achieves better performance than the dense model with half the FLOPs. Unlike the large FLOP discrepancy for stacked LSTMs, the FLOP gap between different sparse training techniques for RHNs is very small, except for SNFS, which requires computing the dense momentum at each iteration. Additionally, ISS has FLOPs similar to Selfish-RNN for RHN, as it sparsifies the embedding layer as well.
4.3 ANALYZING THE PERFORMANCE OF SELFISH-RNN
Analysis of Evolutionary Trajectory of Sparse Connectivity. The fact that Selfish-RNN consistently achieves good performance with different runs naturally raises some questions: e.g., are final sparse connectivities obtained by different runs similar or very different? Is the distance between the original sparse connectivity and the final sparse connectivity large or small? To answer these questions, we investigate a method based on graph edit distance (GED) (Sanfeliu & Fu, 1983) to measure the topological distance between different sparse connectivities learned by different runs. The distance is scaled between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are (See Appendix J for details on how we measure the sparse topological distance).
The results are demonstrated in Figure 3. Figure 3-left shows how the topology of one randomly initialized network evolves when trained with Selfish-RNN. We compare the topological distance between the sparse connectivity obtained at the 5th epoch and the sparse connectivities obtained in the following epochs. We can see that the distance gradually increases from 0 to a very high value of 0.8, meaning that Selfish-RNN optimizes the initial topology into a very different one after training. Moreover, Figure 3-right illustrates the topological distance between two identically initialized networks trained with different seeds after the 4th epoch. We can see that, starting from the same sparse topology, they evolve to completely different sparse connectivities. Note that even when leading to completely different sparse connectivities, different runs achieve similarly good performance, which indicates that in the case of RNNs there exist many good local optima in terms of sparse connectivity that can have equally good performance. This phenomenon complements the findings of Liu et al. (2020c), which show that there are numerous sparse sub-networks performing similarly well in the context of MLPs.
Analysis of Sparse Initialization. We compare two types of sparse initialization, the ER distribution and the uniform distribution. The uniform distribution simply enforces the sparsity level of each layer to be the same, S. The ER distribution allocates higher sparsity to larger layers than to smaller ones. Note that its variant, Erdős-Rényi-kernel, proposed by Evci et al. (2020), reduces to ER for RNNs, as no kernels are involved. The results are shown as the Static group in Table 2. We can see that the uniform distribution consistently outperforms the ER distribution. Moreover, ER usually causes RNN layers to be less sparse than other layers, resulting in a small increase in FLOPs.
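For reference, the two initializations can be contrasted with the small sketch below. The ER density formula follows the (n_in + n_out)/(n_in · n_out) scaling of Mocanu et al. (2018), with a global scale ε chosen so that the overall density matches 1 − S; the helper names are ours, and clipping the per-layer density at 1 is a simplification.

```python
def uniform_layer_densities(layer_shapes, sparsity):
    """Uniform initialization: every layer gets the same density 1 - S."""
    return [1.0 - sparsity for _ in layer_shapes]

def er_layer_densities(layer_shapes, sparsity):
    """Erdos-Renyi initialization: density of a (n_in, n_out) layer scales with
    (n_in + n_out) / (n_in * n_out), so larger layers end up sparser."""
    raw = [(n_in + n_out) / (n_in * n_out) for (n_in, n_out) in layer_shapes]
    sizes = [n_in * n_out for (n_in, n_out) in layer_shapes]
    target_params = (1.0 - sparsity) * sum(sizes)
    eps = target_params / sum(r * s for r, s in zip(raw, sizes))
    return [min(1.0, eps * r) for r in raw]
```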
Analysis of Growth Methods. Methods that leverage gradient-based weight growth (SNFS and RigL) have shown superiority over methods using random-based weight growth for CNNs. However, we observe a different behavior with RNNs. We set up a controlled experiment to compare these two methods with SNT-ASGD and momentum SGD. We report the results with various update intervals (the number of iterations between sparse connectivity updates) in Figure 4. Surprisingly, gradient-based growth performs worse than random-based growth in most cases, and the performance gap increases as the update interval increases. Our hypothesis is that random growth helps to explore the search space better, as it naturally considers a large number of various sparse connectivities during training, which is crucial to the performance of dynamic sparse training. In contrast, gradient growth drives the network topology towards some similar local optima for the sparse connectivity, as it uses a greedy search strategy (highest gradient magnitude) at every topological change. However, the benefits provided by high-magnitude gradients might change dynamically afterwards due to complicated interactions between weights. We empirically illustrate our hypothesis via the proposed distance measure between sparse connectivities in Appendix K.
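The difference between the two growth strategies comes down to how the regrown positions are chosen; a minimal sketch over a flat weight mask (our own notation, not the reference implementations) is:

```python
import numpy as np

def grow_random(mask, n_grow, rng):
    """Random growth: new connections are drawn uniformly from the empty positions,
    so every topological change explores a fresh region of the search space."""
    free = np.flatnonzero(~mask)
    return rng.choice(free, size=n_grow, replace=False)

def grow_by_gradient(mask, grad, n_grow):
    """Gradient growth (as in SNFS/RigL): activate the empty positions whose loss
    gradients currently have the largest magnitude -- a greedy, deterministic choice."""
    free = np.flatnonzero(~mask)
    order = np.argsort(-np.abs(grad[free]))
    return free[order[:n_grow]]
```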
Analysis of Hyper-parameters. The sparsity S and the initial removing rate p are two hyperparameters of our method. We show their sensitivity analysis in Appendix F and Appendix G. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. And our method is quite robust to the choice of the initial removing rate.
5 CONCLUSION
In this paper, we proposed an approach to train sparse RNNs from scratch with a fixed parameter count throughout training. Further, we introduced SNT-ASGD, a specially designed sparse optimizer for training sparse RNNs, and we showed that it substantially improves the performance of all dynamic sparse training methods in RNNs. We observed that random-based growth achieves lower perplexity than gradient-based growth in the case of RNNs. Further, we developed an approach to compare two different sparse connectivities from the perspective of graph theory. Using this approach, we found that random-based growth explores the topological search space for optimal sparse connectivities better, whereas gradient-based growth is prone to drive the network towards similar sparse connectivity patterns, opening the path for a better understanding of sparse training.
A ABLATION STUDY
To verify whether the improvement shown above is caused by the cell weight redistribution or by the Sparse NT-ASGD, we conduct an ablation study for all architectures. To avoid confounding factors, all models use the same hyper-parameters as the ones reported in the paper, and the use of fine-tuning is not excluded. We present the validation and testing perplexity for variants of our model without these two contributions, as shown in Table 4. Not surprisingly, removing either of these two novelties degrades the performance. There is a significant degradation in performance for all models, up to 13 perplexity points, if the optimizer switches to the standard NT-ASGD. This stands as empirical evidence regarding the benefit of SNT-ASGD. Without cell weight redistribution, the testing perplexity also rises. The only exception is RHN, whose number of redistributed weights in each layer is only two. This empirically shows that cell weight redistribution is more effective for models with more cell weights.
B COMPARISON OF DIFFERENT CELL WEIGHT REDISTRIBUTION METHODS
In Table 5, we conduct a small experiment to compare different methods of cell weight redistribution with stacked LSTMs, including redistributing based on the mean value of the magnitude of nonzero weights from different cell weights and the mean value of the gradient magnitude of nonzero weights.
C EXPERIMENTAL DETAILS FOR RHN
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs that allows exploring deeper architectures inside the recurrent transition. Instead of stacking recurrent layers directly, RHN stacks multiple highway layers on top of the recurrent state transition. Within each highway layer, free weights are redistributed across the input weight and the state weight. The sparsity levels are set the same as for ISS, 67.7% and 52.8%. Dropout rates are set to 0.20 for the embedding layer, 0.65 for the input, 0.25 for the hidden units, and 0.65 for the output layer. The model is trained for 500 epochs with a learning rate of 15, a batch size of 20, and a sequence length of 35. At the end of each training epoch, new weights are redistributed across the weights of the H nonlinear transform and the T gate.
D EXPERIMENTAL RESULTS WITH ON-LSTM
Table 6: Single model perplexity on validation and test sets for the Penn Treebank language modeling task with ON-LSTM. Methods with “ASGD” are trained with SNT-ASGD. The numbers reported are averaged over five runs.
Models #Param Val Test
Dense1000 25M 58.29 ± 0.10 56.17 ± 0.12
Dense1300 25M 58.55 ± 0.11 56.28 ± 0.19
SET 11.3M 65.90 ± 0.08 63.56 ± 0.14
DSR 11.3M 65.22 ± 0.07 62.55 ± 0.06
SNFS 11.3M 68.00 ± 0.10 65.52 ± 0.15
RigL 11.3M 64.41 ± 0.05 62.01 ± 0.13
RigL1000 (ASGD) 11.3M 59.17 ± 0.08 57.23 ± 0.09
RigL1300 (ASGD) 11.3M 59.10 ± 0.05 57.44 ± 0.15
Selfish-RNN1000 11.3M 58.17 ± 0.06 56.31 ± 0.10
Selfish-RNN1300 11.3M 57.67 ± 0.03 55.82 ± 0.11
Table 7: Single model perplexity on validation and test sets for the WikiText-2 language modeling task with AWD-LSTM-MoS. Baseline is AWD-LSTM-MoS obtained from Yang et al. (2018). Methods with "ASGD" are trained with SNT-ASGD.
Models #Param Val Test
Dense 35M 66.01 63.33
SET 15.6M 72.82 69.61
DSR 15.6M 69.95 66.93
SNFS 15.6M 79.97 76.18
RigL 15.6M 71.36 68.52
RigL (ASGD) 15.6M 68.84 65.18
Selfish-RNN 15.6M 65.96 63.05
Recently proposed by Shen et al. (2019), ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization as used in ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD as a fine-tuning step once at the 500th epoch, dubbed Selfish-RNN1000. As shown in Table 6, Selfish-RNN outperforms the dense model while reducing the model size to 11.3M. Without SNT-ASGD, sparse training techniques cannot reduce the test perplexity to 60. SNT-ASGD is able to improve the performance of RigL by 5 perplexity points. Moreover, one interesting observation is that one of the regularizations used in the standard ON-LSTM, DropConnect, is perfectly compatible with our method, although it also randomly drops the hidden-to-hidden weights out during training.
In our experiments we observe that Selfish-RNN benefits significantly from a second fine-tuning operation. We scale the learning schedule to 1300 epochs with two fine-tuning operations, after 500 and 1000 epochs respectively, dubbed Selfish-RNN1300. It is interesting that Selfish-RNN1300 can achieve lower testing perplexity after the second fine-tuning step, whereas the dense model Dense1300 cannot even recover the perplexity that it had before the second fine-tuning. The heuristic explanation here is that our method helps the optimization escape local optima or local saddle points by optimizing the sparse structure, while for dense models, whose energy landscape is fixed, it is very difficult for the optimizer to find its way off a saddle point or out of a local optimum.
E EXPERIMENTAL RESULTS WITH AWD-LSTM-MOS
We also evaluate Selfish-RNN on the WikiText-2 dataset. The model we choose is AWD-LSTM-MoS (Yang et al., 2018), which is the state-of-the-art RNN-based language model. It replaces Softmax with a Mixture of Softmaxes (MoS) to alleviate the Softmax bottleneck issue in modeling natural language. For a fair comparison, we exactly follow the model hyper-parameters and regularization used in AWD-LSTM-MoS. We sparsify all layers with 55% sparsity except for the prior layer, as its number of parameters is negligible. We train our model for 1000 epochs without fine-tuning or dynamic evaluation (Krause et al., 2018) to simply show the effectiveness of our method. As demonstrated in Table 7, Selfish AWD-LSTM-MoS can reach dense performance with 15.6M parameters.
F EFFECT OF SPARSITY
There is a trade-off between the sparsity level S and the test perplexity of Selfish-RNN. When there are too few parameters, the sparse neural network will not have enough capacity to model the data. If the sparsity level is too small, the training acceleration will be small. Here, we analyze this trade-off by varying the sparsity level while keeping the other experimental setup the same, as shown in
Figure 5a. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. Generally, the performance of sparsified models is decreasing as the sparsity level increases.
G EFFECT OF INITIAL REMOVING RATE
The initial removing rate p determines the number of removed weights at each connectivity update. We study the performance sensitivity of our algorithm to the initial removing rate p by varying it in {0.3, 0.5, 0.7}. We set the sparsity level of each model to the one having the best performance in Figure 5a. Results are shown in Figure 5b. We can clearly see that our method is very robust to the choice of the initial removing rate.
H DIFFERENCE AMONG SET, SELFISH-RNN AND ITERATIVE PRUNING METHODS
The topology update strategy of Selfish-RNN differs from SET in several important features: (1) we automatically redistribute weights across cell weights for better regularization, (2) we use magnitude-based removal instead of removing a fraction of the smallest positive weights and the largest negative weights, and (3) we use uniform initialization rather than a non-uniform sparse distribution like ER or ERK, as it consistently achieves better performance. Additionally, the optimizer proposed in this work, SNT-ASGD, brings substantial perplexity improvements to sparse RNN training.
Figure 6-left illustrates a high-level overview from an efficiency perspective of the difference between Selfish-RNN and iterative pruning techniques (Han et al., 2016; Zhu & Gupta, 2017; Frankle & Carbin, 2019). The conventional pruning and re-training techniques usually involve three steps: (1) pre-training a dense model, (2) pruning unimportant weights, and (3) re-training the pruned model to improve performance. The pruning and re-training cycles can be iterated. This iteration is taking place at least once, but it may also take place several times depending on the specific algorithms used. Therefore, the sparse networks obtained via iterative pruning at least involve pre-training a dense model. Different from the aforementioned three-step techniques, FLOPs required by Selfish-RNN is proportional to the density of the model, as it allows us to train a sparse network with a fixed number of parameters throughout training in one single run, without any re-training phases. Moreover, the overhead caused by the adaptive sparse connectivity operation is negligible, as it is operated only once per epoch.
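As a rough illustration of this efficiency argument (with placeholder numbers rather than measured values, and using the forward/backward FLOP accounting of Appendix L), the total training cost of the two schemes can be compared as follows:

```python
def one_run_sparse_flops(f_dense, density, n_updates):
    """Selfish-RNN-style training: every update costs roughly 3 * density * f_dense."""
    return 3.0 * f_dense * density * n_updates

def prune_retrain_flops(f_dense, density_per_update):
    """Iterative pruning: each update is charged at the model's current density,
    which starts at 1.0 (fully dense) and only gradually decreases."""
    return sum(3.0 * f_dense * d for d in density_per_update)

# e.g., a model trained in one run at 33% density costs about a third of dense training,
# whereas pruning pays close to the dense cost for a large part of its schedule.
```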
I COMPARISON BETWEEN SELFISH-RNN AND PRUNING
It has been shown by Evci et al. (2020) that while state-of-the-art sparse training method (RigL) achieves promising performance in terms of CNNs, it fails to match the performance of pruning in RNNs. Given the fact that magnitude pruning has become a widely-used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude pruning with stacked LSTMs. The pruning baseline here is the Tensorflow Model Pruning library (Zhu & Gupta, 2017). The results are demonstrated in Figure 6-right.
We can see that Selfish-RNN exceeds the performance of pruning in most cases. An interesting phenomenon is that, with increased sparsity, we see a decreased performance gap between Selfish-RNN and pruning. In particular, Selfish-RNN performs worse than pruning when the sparsity level is 95%. This can be attributed to the poor trainability of sparse models at extreme sparsity levels. As noted in Lee et al. (2020), an extremely sparse structure can break the dynamical isometry (Saxe et al., 2014) of sparse networks, which degrades the trainability of sparse neural networks. Different from sparse training methods, pruning starts from a dense network and thus does not have this problem.
J SPARSE TOPOLOGY DISTANCE MEASUREMENT
Our sparse topology distance measurement considers the unit alignment based on a semi-matching technique introduced by Li et al. (2016) and a graph distance measurement based on graph edit distance (GED) (Sanfeliu & Fu, 1983). More specifically, our measurement includes the following steps:
Step 1: We train two sparse networks with dynamic sparse training on the training dataset and store the sparse topology after each epoch. Let W_i^l be the set of sparse topologies for the l-th layer of network i.
Step 2: Using the saved model, we compute the activity output on the test data, O_i^l ∈ R^{n×m}, where n is the number of hidden units and m is the number of samples.
Step 3: We leverage the activity units of each layer to match the topologies W_i^l pairwise. We achieve unit matching between a pair of networks by finding, for each unit in one network, the unit in the other network with the maximum correlation.
Step 4: After alignment, we apply graph edit distance (GED) to measure the similarity between the pairwise W_i^l. Eventually, the distance is scaled to lie between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are.
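Steps 3 and 4 can be condensed into the sketch below, where units are matched by maximum activation correlation and the aligned masks are then compared with a normalized edit-style distance (the fraction of connections present in only one of the two topologies). The exact GED computation used here may differ, so this should be read as an approximation with our own naming.

```python
import numpy as np

def match_units(acts_a, acts_b):
    """Step 3: for every unit of network A (rows of acts_a, shape n x m),
    pick the unit of network B with maximum correlation of test-set activations."""
    n = acts_a.shape[0]
    corr = np.corrcoef(acts_a, acts_b)[:n, n:]
    return np.argmax(corr, axis=1)

def topology_distance(mask_a, mask_b, unit_perm):
    """Step 4: edit-style distance between aligned sparse topologies, scaled to [0, 1]."""
    aligned_b = mask_b[unit_perm, :]                      # reorder B's units to match A's
    differing = np.logical_xor(mask_a, aligned_b).sum()   # connections present in only one mask
    total = mask_a.sum() + aligned_b.sum()
    return differing / total if total > 0 else 0.0
```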
Here, we choose stacked LSTMs on the PTB dataset as a specific case to analyze. Specifically, we train two stacked LSTMs for 100 epochs with different random seeds. We choose a relatively small removing rate of 0.1. We start the alignment at the 5th epoch to ensure a good alignment result, as at the very beginning of training the networks have not yet learned very well. We then use the matched order of output tensors to align the pairwise topologies W_i^l.
K TOPOLOGICAL DISTANCE OF GROWTH METHODS
In this section, we empirically illustrate that gradient growth drives different networks into some similar connectivity patterns based on the proposed distance measurement between sparse connectivities. The initial removing rates are set as 0.1 for all training runs in this section. First, we measure the topological distance between two different training runs trained with gradient growth and random growth, respectively, as shown in Figure 7. We can see that, starting with very different sparse connectivity topologies, two networks trained with random growth end up at the same distance, whereas the topological distance between networks trained with gradient growth is continuously decreasing and this tendency is likely to continue as the training goes on. We further report the distance between two networks with same initialization but different training seeds when trained with gradient growth and random growth, respectively. As shown in Figure 8, the distance between sparse networks discovered by gradient growth is smaller than the distance between sparse networks discovered by random growth. These observations are in line with our hypothesis that gradient growth drives networks into some similar structures, whereas random growth explores more sparse structures spanned over the dense networks.
L FLOPS ANALYSIS OF DIFFERENT APPROACHES
We follow the way of calculating training FLOPs layer by layer based on sparsity level sl, proposed by Evci et al. (2020). We split the process of training a sparse recurrent neural network into two steps: forward pass and backward pass.
Forward pass In order to calculate the loss of the current models given a batch of input data, the output of each layer is needed to be calculated based on a linear transformation and a non-linear activation function. Within each RNN layer, different cell weights are used to regulate information in sequence using the output of the previous time step and the input of this time step.
Backward pass In order to update weights, during the backward pass, each layer calculates 2 quantities: the gradient of the loss function with respect to the activations of the previous layer and the gradient of the loss function with respect to its own weights. Therefore, the computational expense of backward pass is twice that of forward pass. Given that RNN models usually contain an embedding layer from which it is very efficient to pick a word vector, for models not using weight tying, we
only count the computations to calculate the gradient of its parameters as the training FLOPs and we omit its inference FLOPs. For models using weight tying, both the training FLOPs and the inference FLOPs are omitted.
Given a specific architecture, we denote f_D as the dense FLOPs required to finish one training iteration and f_S as the corresponding sparse FLOPs (f_S ≈ (1 − S) f_D), where S is the sparsity level. Thus f_S ≪ f_D for very sparse networks. Since different sparse training methods lead to different sparse distributions, their FLOPs f_S are also different from each other. We omit the FLOPs used to update the sparse connectivity, as it is only performed once per epoch. Overall, the total FLOPs required for one training update on one single sample are given in Table 8. The training FLOPs of dense-to-sparse methods, like ISS and pruning, are 3 f_D · s_t, where s_t is the sparsity of the model at iteration t. Since dense-to-sparse methods require training a dense model for a while, their training FLOPs and memory requirements are higher than those of our method. For methods that allow the sparsity of each layer to change dynamically, e.g., DSR and SNFS, we approximate their training FLOPs via their final distribution, as their sparse distributions converge to the final distribution in the first few epochs. The ER distribution causes a bit more inference FLOPs than the uniform distribution because it allocates more weights to the RNN layers than to other layers. SNFS requires extra FLOPs to calculate dense gradients during the backward pass. Although RigL also uses dense gradients to assist weight growth, it only needs to calculate dense gradients every ∆T iterations, thus its averaged FLOPs are given by (3 f_S ∆T + 2 f_S + f_D) / (∆T + 1). Here, we simply omit the extra FLOPs required by gradient-based growth, as they are negligible compared with the whole training FLOPs.
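Written out as small helpers (our reading of the per-update costs; the exact entries are in Table 8, which is not reproduced here), the accounting above becomes:

```python
def step_flops_fully_sparse(f_dense, sparsity):
    """SET / DSR / Selfish-RNN: one forward plus two backward passes, all sparse."""
    f_sparse = (1.0 - sparsity) * f_dense
    return 3.0 * f_sparse

def step_flops_snfs(f_dense, sparsity):
    """SNFS additionally maintains dense gradients/momentum in every backward pass."""
    f_sparse = (1.0 - sparsity) * f_dense
    return 2.0 * f_sparse + f_dense

def step_flops_rigl(f_dense, sparsity, delta_t):
    """RigL only needs a dense gradient every delta_t iterations (averaged cost)."""
    f_sparse = (1.0 - sparsity) * f_dense
    return (3.0 * f_sparse * delta_t + 2.0 * f_sparse + f_dense) / (delta_t + 1.0)
```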
For inference, we calculate the inference FLOPs on single sample based on the final sparse distribution learned by different methods.
M FINAL CELL WEIGHT SPARSITY BREAKDOWN
We further study the final sparsity levels across cell weights learned automatically by our method. We consistently observe that the weights of forget gates, either the forget gate in the standard LSTM or the master forget gate in ON-LSTM, tend to be sparser than the weights of other gates, whereas the weights of cell gates and output gates are denser than the average, as shown in Figure 9. However, there is no big difference between weights in RHN, although the H nonlinear transform weight is slightly sparser than the T gate weight in most RHN layers. This phenomenon is in line with the ablation analysis, where cell weight redistribution does not provide a performance improvement for RHNs. Cell weight redistribution is more important for models with more regulating weights.
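The breakdown in Figure 9 can be reproduced from a trained mask with a helper along the following lines, assuming the recurrent weights are stored as a single (4·hidden, n_in) matrix with the gates stacked in a known order (e.g., input, forget, cell, output, as in common LSTM implementations); the helper name and gate ordering are assumptions, so adjust them to the framework at hand.

```python
import numpy as np

def gate_sparsities(weight_mask, hidden_size,
                    gate_names=("input", "forget", "cell", "output")):
    """Per-gate sparsity of an LSTM weight mask of shape (4 * hidden_size, n_in)."""
    result = {}
    for k, name in enumerate(gate_names):
        block = weight_mask[k * hidden_size:(k + 1) * hidden_size, :]
        result[name] = 1.0 - block.sum() / block.size
    return result
```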
N LIMITATION
The aforementioned training benefits have not been fully explored, as off-the-shelf software and hardware have limited support for sparse operations. Unstructured sparsity is difficult to map efficiently onto existing parallel processors. The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks, including Gray et al. (2017); Moradi et al. (2019); Ma et al. (2019); Yang & Ma (2019); Liu et al. (2020b). For instance, NVIDIA introduced the A100 GPU enabling Fine-Grained Structured Sparsity (NVIDIA, 2020). The sparse structure is enforced by allowing two nonzero values in every four-entry vector to reduce memory storage and bandwidth by almost 2×. We do not claim that Selfish-RNN is the best way to obtain sparse recurrent neural networks, but simply highlight that it is an important future research direction to develop more efficient hardware and software to benefit from sparse neural networks. | 1. What is the main contribution of the paper regarding sparse training in RNNs?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. Do you have any questions or concerns about the methodology, particularly in magnitude weight removal and cell weight redistribution?
4. How does the reviewer assess the clarity and accuracy of the paper's content?
5. Are there any suggestions or ideas for future work related to this research? | Review | Review
The paper claims that the previous sparse training methods mainly focus on MLPs and CNNs, and fail to perform very well on RNNs. Hence, the authors proposed an approach to train sparse RNNs with a fixed FLOPs budget. The proposed technique is based on defining a mask matrix M and refining it during training. It is initialized randomly to have the desired sparsity level S. After each training epoch, a fraction p of the weights with the smallest magnitude is removed, i.e., those locations are zeroed in the mask M. Next, the same number of parameters is randomly added to M again. Moreover, a variant of the averaged stochastic gradient optimizer (SNT-ASGD) is developed for the training of sparse RNNs to account for the effect of weight masks during training. They showed that, in practice, the requirements for efficient sparse training of RNNs are different from those of CNNs and MLPs.
Strengths: By adding some refinements and tweaks to the existing techniques (masking for sparse training and adapting NT-ASGD), the authors were able to achieve good performance when training sparse RNNs. The paper has a rather extensive set of simulations and experimental setups to analyze which setup yields good sparse training, e.g., comparing uniform vs. ER distribution for the masks, sensitivity to hyperparameters, etc. Moreover, they have considered a fairly diverse set of RNN architectures to evaluate their method.
Weaknesses and questions: Compared to the existing methods, the technical novelty of the paper is minor. It can be seen as a set of tweaks and improvements to existing ones (although I admit that those changes are essential for the method to work for RNNs). What is special about the method that makes it specific to RNNs? In other words, is it possible to use the same method for sparse training of MLPs and CNNs? A minor issue with the paper is the FLOPs analysis the authors used. Effectively, they use the sparsity of the parameters as a measure of FLOPs, not the actual FLOPs, which might depend on the sparsity structure, hardware, or software implementation. It would be a good idea to directly mention and use total sparsity instead of FLOPs, which can mislead the readers.
Some parts of the method are not clear enough, e.g.,
- In the paper, it is stated that "magnitude weight removal" is applied to non-RNN layers. Do the authors mean that for the parameters of RNN layers, this step is skipped?
- In "cell weight redistribution", it is suggested that the "magnitude weight removal" is applied to the whole set of RNN parameters θ_1, …, θ_t. However, in "random weight growth", it is mentioned that the same number of weights is grown immediately after weight removal, i.e., R and P have the same number of 1's. So, does it mean that the number of 1's in mask M_i for each weight θ_i (1 ≤ i ≤ t) remains fixed at S during training?
- Another aspect of training that is unclear for me is which parameters are updated. Is θ updated during training, or only θ_s? As a result, if a weight is removed in one epoch and its value at the time of removal was α, and it is later regrown at another epoch, is its initial value set to 0 or started from its previous value before "weight removal", i.e., α?
- Did the authors add any regularizer (e.g., ℓ_1) to the training loss to improve sparsity in their experiments?
ICLR | Title
Selfish Sparse RNN Training
Abstract
Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a large and dense network (sparse-to-sparse), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in Recurrent Neural Networks (RNNs) setting. In this paper, we propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell weights for a better regularization. Further, we introduce SNT-ASGD, a variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, even better than dense model results, with various types of RNNs on Penn TreeBank and Wikitext-2 datasets.
1 INTRODUCTION
Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task to evaluate models’ ability to capture long-range context, language modeling has witnessed great progress in RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), ordered neurons LSTM (ON-LSTM) (Shen et al., 2019), and effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), DropConnect (Merity et al., 2018) have been proposed to significantly improve the performance of RNNs.
At the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 Regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate the inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency for both, the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to be sparsified (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with the existing DST approaches. The above-mentioned problems heavily limit the performance
of the off-the-shelf sparse training methods in the RNN field. For instance, while “The Rigged Lottery” (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed number of computational costs throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:
• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters. This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms. • We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged
stochastic gradient descent optimizer, which overcomes the over-sparsified problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training. • We demonstrate state-of-the-art sparse training performance with various RNN models,
including stacked LSTMs (Zaremba et al., 2014), RHNs, ordered neurons LSTM (ONLSTM) on Penn TreeBank (PTB) dataset (Marcus et al., 1993) and AWD-LSTM-MoS on WikiText-2 dataset (Melis et al., 2018). • We present an approach to analyze the evolutionary trajectory of the sparse connectivity
optimized by dynamic sparse training from the perspective of graph. With this approach, we show that there exist many good structural local optima (sparse sub-networks having equally good performance) in RNNs, which can be found in an efficient and robust manner. • Our analysis shows two surprising phenomena in the setting of RNNs contrary to CNNs:
(1) random-based weight growth performs better than gradient-based weight growth, (2) uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures.
2 RELATED WORK
Dense-to-Sparse. There are a large amount of works operating on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people’s attention based on the idea of iterative pruning and retraining with modern architectures. Some recent works were proposed to further reduce the number of iterative retraining e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis showing that the sub-networks (“winning tickets”) obtained via iterative pruning combined with their “lucky” initialization can outperform the dense networks. Zhou et al. (2019) discovered that the sign of their initialization is the crucial factor that
makes the “winning tickets” work. Our work shows that there exists a much more efficient and robust way to find those “winning ticketts” without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With fewer training costs, our method is able to discover sparse networks that can achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn the sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs. Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structure by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. Some works aim to find sparse neural networks by pruning once prior to the main training phase based on some salience criteria, such as connection sensitivity (Lee et al., 2019), signal propagation, (Lee et al., 2020), and gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).
Sparse-to-Sparse. Recently, many works have emerged to train intrinsically sparse neural networks from scratch to obtain efficiency both for training and inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works expand static sparse training into CNNs based on expander graphs and show comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET) which initializes a sparse network and dynamically changes the sparse connectivity by a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior. The iterative configuration updates have been proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved the sparse training performance by growing free weights according to their momentum. It requires extra computation and memory to update the dense momentum tensor for each iteration. Further, Evci et al. (2020) introduced RigL which activates weights with the highest magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing a large number of memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over weights. More recently, Liu et al. (2020b) proved the potentials of DST by developing for the first time an independent software framework to train very large truly sparse MLPs trained with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs.
We summarize the properties of all approaches compared in this paper in Table 1. Same with SET, our method can guarantee Backward Sparse, which does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H.
3 SPARSE RNN TRAINING
Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can be easily applied to any other RNN variants. The only difference is the number of cell weights. Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to the dense model at the beginning. To explore more sparse structures, while to maintain a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1.
3.1 DYNAMIC SPARSE CONNECTIVITY
We consider uniform sparse initialization, magnitude weight removal, random weight growth, and cell weight redistribution together as the main components of our dynamic sparse connectivity method, which ensures a fixed number of parameters and a clear sparse backward pass, as discussed next. Notation. Given a dataset of N samples D = {(x_i, y_i)}_{i=1}^N and a network f(x; θ) parameterized by θ, we train the network to minimize the loss function ∑_{i=1}^{N} L(f(x_i; θ), y_i). The basic mechanism of sparse neural networks is to use a fraction of the parameters to reparameterize the whole network, while preserving the performance as much as possible. Hence, a sparse neural network can be denoted as f_s(x; θ_s) with a sparsity level S = 1 − ‖θ_s‖_0 / ‖θ‖_0, where ‖·‖_0 is the ℓ_0-norm. Uniform Sparse Initialization. First, the network is uniformly initialized with a sparse distribution in which the sparsity level of each layer is the same value S. More precisely, the network is initialized by:
θ_s = θ ⊙ M (1)
where θ is a dense weight tensor initialized in a standard way; M is a binary tensor in which the nonzero elements are sampled uniformly based on the sparsity S; and ⊙ refers to the Hadamard product. Magnitude Weight Removal. For non-RNN layers, we use magnitude weight removal followed by random weight growth to update the sparse connectivity. We remove a fraction p of the weights with the smallest magnitude after each training epoch. This step is performed by changing the binary tensor M, as follows:
M = M − P (2)
where P is a binary tensor with the same shape as M, in which the nonzero elements have the same indices as the top-p smallest-magnitude nonzero weights in θ_s, with ‖P‖_0 = p‖M‖_0. Random Weight Growth. To keep a fixed parameter count, we randomly grow the same number of weights immediately after weight removal, by:
M = M + R (3)
where R is a binary tensor whose nonzero elements are randomly located at the positions of zero elements of M. We choose random growth to avoid using any information of the non-existing weights, so that both the feedforward and the backpropagation passes are completely sparse. It is more desirable to have such pure sparse structures, as it enables the possibility of conceiving specialized hardware accelerators for sparse neural networks in the future. Besides, our analysis of growth methods in Section 4.3 shows that random growth can explore more sparse structural degrees of freedom than gradient growth, which might be crucial to sparse training. Cell Weight Redistribution. Our dynamic sparse connectivity differs from previous methods mainly in cell weight redistribution. For RNN layers, the naive approach is to sparsify all cell weight tensors independently at the same sparsity, as shown in Liu et al. (2019), which is a straightforward extension of applying SET to RNNs. Essentially, it is more desirable to redistribute new parameters to cell weight tensors dependently, as all cell weight tensors collaborate to regulate information. Intuitively, we redistribute new parameters such that weight tensors containing more large-magnitude weights receive more parameters. Large-magnitude weights indicate that their loss
gradients are large and few oscillations occur. Thus, weight tensors with more large-magnitude connections should be reallocated with more parameters to accelerate training. Concretely, for each RNN layer l, we remove weights dependently given by an ascending sort:
Sort_p(|θ^l_1|, |θ^l_2|, ..., |θ^l_t|) (4)
where {θ^l_1, θ^l_2, ..., θ^l_t} are all weight tensors within each cell, and Sort_p returns the p indices of the smallest-magnitude weights. After weight removal, new parameters are grown uniformly over the weight tensors, so that our cell weight redistribution is implemented gradually. We also tried other approaches, including redistribution based on the mean magnitude of nonzero weights or the mean gradient magnitude of nonzero weights, but our approach achieves the best performance, as shown in Appendix B. We further demonstrate the final sparsity breakdown of cell weights learned by our method in Appendix M and observe that weights of forget gates are consistently sparser than other weights for all models. Note that redistributing parameters across cell weight tensors does not change the FLOP counting, as the sparsity of each layer is not changed. In contrast, the across-layer weight redistribution used by DSR and SNFS affects the sparsity level of each layer. As a result, it will change the number of floating-point operations (FLOPs).
Similar to SNFS, we also decay the removing rate p to zero with a cosine annealing schedule. We further use Eq. (1) to enforce the sparse structure before the forward pass and after the backward pass, so that the zero-valued weights do not contribute to the loss. All newly activated weights are initialized to zero.
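As an illustration of the connectivity update described above, the following sketch implements one possible per-epoch remove-and-regrow step, assuming the weights and masks are PyTorch tensors. The function names and the exact bookkeeping of removal/growth counts are illustrative simplifications, not the authors' reference implementation.

import torch

def update_layer_mask(weight, mask, p):
    # Remove-and-regrow for a non-RNN layer (Eqs. 2-3): drop the fraction p of
    # smallest-magnitude active weights, then grow the same number of weights
    # at randomly chosen zero positions.
    active = mask.nonzero(as_tuple=False)
    n_remove = int(p * active.size(0))
    if n_remove == 0:
        return mask
    mags = weight[mask.bool()].abs()
    drop = torch.topk(mags, n_remove, largest=False).indices
    for idx in active[drop]:
        mask[tuple(idx.tolist())] = 0
    zeros = (mask == 0).nonzero(as_tuple=False)
    grow = zeros[torch.randperm(zeros.size(0))[:n_remove]]
    for idx in grow:
        mask[tuple(idx.tolist())] = 1
        weight[tuple(idx.tolist())] = 0.0  # newly activated weights start at zero
    return mask

def update_rnn_cell_masks(weights, masks, p):
    # Cell weight redistribution (Eq. 4): removal is performed jointly over all
    # cell weight tensors of one RNN layer, so tensors holding more
    # small-magnitude weights lose more parameters; regrowth is then spread
    # uniformly, keeping the layer-level sparsity fixed.
    all_mags = torch.cat([w[m.bool()].abs() for w, m in zip(weights, masks)])
    n_remove = int(p * all_mags.numel())
    if n_remove == 0:
        return masks
    threshold = torch.topk(all_mags, n_remove, largest=False).values.max()
    removed = 0
    for w, m in zip(weights, masks):
        drop = m.bool() & (w.abs() <= threshold)
        removed += int(drop.sum())
        m[drop] = 0
    per_tensor = removed // len(masks)
    for w, m in zip(weights, masks):
        zeros = (m == 0).nonzero(as_tuple=False)
        grow = zeros[torch.randperm(zeros.size(0))[:per_tensor]]
        for idx in grow:
            m[tuple(idx.tolist())] = 1
            w[tuple(idx.tolist())] = 0.0
    return masks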
3.2 SPARSE NON-MONOTONICALLY TRIGGERED ASGD
Non-monotonically Triggered ASGD (NT-ASGD) has been shown to achieve surprising performance with various RNNs (Merity et al., 2018; Yang et al., 2018; Shen et al., 2019). However, it becomes less appealing for sparse RNN training. Unlike dense networks, in which every parameter is updated at each iteration, in sparse networks the zero-valued weights remain zero as long as they are not activated. Once these zero-valued weights are activated, the original averaging operation of standard NT-ASGD immediately brings them back close to zero. Thereby, after the averaging operation is triggered, the number of valid weights decreases sharply, as shown in Figure 2. To alleviate this problem, we introduce SNT-ASGD as follows:
w̃_i = 0,                                          if m_i = 0,
w̃_i = (∑_{t=T_i}^{K} w_{i,t}) / (K − T_i + 1),    if m_i = 1.    (5)
where w̃_i is the value returned by SNT-ASGD for weight w_i; w_{i,t} represents the actual value of weight w_i at the t-th iteration; m_i = 1 if the weight w_i exists and m_i = 0 means that the weight w_i does not exist; T_i is the iteration at which the weight w_i most recently grew; and K is the total
number of iterations. We demonstrate the effectiveness of SNT-ASGD in Figure 2. At the beginning, trained with SGD, the number of weights with high magnitude increases fast. However, the trend starts to descend significantly once the optimization switches to NT-ASGD at the 80th epoch, whereas the trend of SNT-ASGD continues to rise after a small drop caused by the averaging operation.
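The sketch below illustrates the averaging rule of Eq. (5) in isolation, assuming PyTorch tensors for the weights and masks. The class name and interface are hypothetical, and the non-monotonic trigger of NT-ASGD (deciding when averaging starts) is deliberately omitted.

import torch

class SNTASGDAverager:
    # Sparse averaging of Eq. (5): each weight is averaged only over the
    # iterations since it last (re)grew, and weights that do not exist
    # (mask == 0) average to zero.

    def __init__(self, weight_shape):
        self.sum = torch.zeros(weight_shape)    # running sum of w_{i,t} since T_i
        self.count = torch.zeros(weight_shape)  # number of iterations since T_i

    def record_regrowth(self, regrown):
        # `regrown` marks weights activated at this step; their averaging
        # window restarts (T_i := current iteration).
        self.sum[regrown.bool()] = 0.0
        self.count[regrown.bool()] = 0.0

    def accumulate(self, weight, mask):
        self.sum += weight * mask
        self.count += mask

    def averaged(self, mask):
        denom = self.count.clamp(min=1.0)
        return (self.sum / denom) * mask        # non-existing weights return 0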
To better understand how the proposed components, cell weight redistribution and SNT-ASGD, improve sparse RNN training performance, we further conduct an ablation study in Appendix A. It is clear that both of them lead to significant performance improvements.
4 EXPERIMENTAL RESULTS
We evaluate Selfish-RNN with various models including stacked LSTMs, RHNs, ON-LSTM on the Penn TreeBank dataset and AWD-LSTM-MoS on the WikiText-2 dataset. The performance of Selfish-RNN is compared with 5 state-of-the-art sparse inducing techniques, including Intrinsic Sparse Structures (ISS) (Wen et al., 2018), SET, DSR, SNFS, and RigL. ISS is a method to explore sparsity inside RNNs by using group Lasso regularization. We choose Adam (Kingma & Ba, 2014) optimizer for SET, DSR, SNFS, and RigL. We also evaluate our methods with two state-of-the-art RNN models, ON-LSTM on PTB and AWD-LSTM-MoS on Wikitext-2, as reported in Appendix D and Appendix E, respectively.
4.1 STACKED LSTMS
As introduced by Zaremba et al. (2014), stacked LSTMs (large) is a two-layer LSTM model with 1500 hidden units for each LSTM layer. We choose the same sparsity as ISS, 67% and 62%. We empirically found that 0.7 is a safe choice for the removing rate of stacked LSTMs. The clip norm is set to 0.25 and all models are trained for 100 epochs.
Results are shown on the left side of Table 2. To evaluate our sparse training method fairly, we also provide a new dense baseline trained with the standard NT-ASGD, achieving a test perplexity 6 points lower than the widely-used baseline. We also test whether a small dense network and a static sparse network
with the same number of parameters as Selfish-RNN can match the performance of Selfish-RNN. We train a dense stacked LSTM with 700 hidden units, referred to as “Small”. In line with previous studies (Mocanu et al., 2018; Mostafa & Wang, 2019; Evci et al., 2020), both the static sparse networks and the small dense network fail to match the performance of Selfish-RNN. Training a static sparse network from scratch with the uniform distribution performs better than with the ER distribution. Trained with Adam, all sparse training techniques fail to match the performance of ISS and the dense models. Models trained with SNT-ASGD obtain substantially lower perplexity, and Selfish-RNN achieves the lowest one, even better than the new dense baseline with much lower training costs.
To better understand the effect of different optimizers on different DST methods, we report the performance of all DST methods trained with Adam, momentum SGD, and SNT-ASGD. The learning rate of Adam is set to 0.001. The learning rate of momentum SGD is 2, decreased by a factor of 1.33 once the loss fails to decrease, and the momentum coefficient is 0.9. The weight decay is set to 1.2e-6 for all optimizers. For SNFS (SNT-ASGD), we replace the momentum of weights with their gradients, as SNT-ASGD does not involve any momentum terms. We use the same hyperparameters for all DST methods. The results are shown in Table 3. It is clear that SNT-ASGD brings significant perplexity improvements to all sparse training techniques. This further stands as empirical evidence that SNT-ASGD is crucial to improving sparse training performance in the RNN setting. Moreover, compared with other DST methods, Selfish-RNN is quite robust to the choice of optimizer due to its simple scheme for updating the sparse connectivity. Advanced strategies, such as the across-layer weight redistribution used in DSR and SNFS and the gradient-based weight growth used in RigL and SNFS, depend heavily on the optimizer: they might work decently for some optimization methods but not for others.
Additionally, note that different DST methods use different sparse distributions, leading to very different computational costs even at the same sparsity. We also report the approximate training and inference FLOPs for all methods. The FLOP gap between Selfish-RNN and RigL is very small, whereas SNFS requires more FLOPs than our method for both training and inference (see Appendix L for details on how FLOPs are calculated). ISS achieves a lower number of FLOPs, since it does not sparsify the embedding layer and therefore its LSTM layers are much sparser than the LSTM layers obtained by other methods. This results in fewer FLOPs, as LSTM layers typically require more FLOPs than other layers.
4.2 RECURRENT HIGHWAY NETWORKS
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs that allows exploring deeper architectures inside the recurrent transition. See Appendix C for the experimental settings of RHN. The results are shown on the right side of Table 2. Selfish-RNN achieves better performance than the dense model with half the FLOPs. Unlike the large FLOP discrepancy for stacked LSTMs, the FLOP gap between different sparse training techniques for RHNs is very small, except for SNFS, which requires computing a dense momentum tensor at each iteration. Additionally, ISS has similar FLOPs to Selfish-RNN for RHN, as it sparsifies the embedding layer as well.
4.3 ANALYZING THE PERFORMANCE OF SELFISH-RNN
Analysis of Evolutionary Trajectory of Sparse Connectivity. The fact that Selfish-RNN consistently achieves good performance with different runs naturally raises some questions: e.g., are final sparse connectivities obtained by different runs similar or very different? Is the distance between the original sparse connectivity and the final sparse connectivity large or small? To answer these questions, we investigate a method based on graph edit distance (GED) (Sanfeliu & Fu, 1983) to measure the topological distance between different sparse connectivities learned by different runs. The distance is scaled between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are (See Appendix J for details on how we measure the sparse topological distance).
The results are demonstrated in Figure 3. Figure 3-left shows how the topology of one randomly initialized network evolves when trained with Selfish-RNN. We compare the topological distance between the sparse connectivity obtained at the 5th epoch and the sparse connectivities obtained in the following epochs. We can see that the distance gradually increases from 0 to a very high value of 0.8, meaning that Selfish-RNN optimizes the initial topology to a very different one after training. Moreover, Figure 3-right illustrates the topological distance between two identically initialized networks trained with different seeds, measured after the 4th epoch. We can see that, starting from the same sparse topology, they evolve to completely different sparse connectivities. Note that even when leading to completely different sparse connectivities, different runs achieve similarly good performance, which indicates that in the case of RNNs there exist many good local optima in terms of sparse connectivity that have equally good performance. This phenomenon complements the findings of Liu et al. (2020c), which show that there are numerous sparse sub-networks performing similarly well in the context of MLPs.
Analysis of Sparse Initialization. We compare two types of sparse initialization: the ER distribution and the uniform distribution. The uniform distribution simply enforces the sparsity level of each layer to be the same value S, whereas the ER distribution allocates higher sparsity to larger layers than to smaller ones. Note that its variant, Erdős-Rényi-kernel, proposed by Evci et al. (2020), reduces to ER for RNNs, as no kernels are involved. The results are shown as the Static group in Table 2. We can see that the uniform distribution outperforms the ER distribution consistently. Moreover, ER usually causes RNN layers to be less sparse than other layers, resulting in a small increase in FLOPs.
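For reference, the sketch below computes per-layer densities under the two initialization schemes, assuming the standard Erdős-Rényi scaling in which a layer's density is proportional to (n_in + n_out)/(n_in · n_out) and is rescaled to meet the overall parameter budget. The function name and the example shapes are illustrative only.

import numpy as np

def layer_densities(shapes, overall_density, method="uniform"):
    # Per-layer densities for a target overall density (1 - S).
    # `shapes` is a list of (n_out, n_in) weight shapes.
    sizes = np.array([n_out * n_in for n_out, n_in in shapes], dtype=float)
    if method == "uniform":
        return np.full(len(shapes), overall_density)
    # Erdos-Renyi scaling, rescaled so the total parameter budget is matched
    # (densities above 1 are clipped, which can slightly reduce the budget).
    raw = np.array([(n_out + n_in) / (n_out * n_in) for n_out, n_in in shapes])
    eps = overall_density * sizes.sum() / (raw * sizes).sum()
    return np.clip(eps * raw, 0.0, 1.0)

# Example with two layers at overall density 0.33 (sparsity 67%).
print(layer_densities([(4096, 1024), (1024, 10000)], 0.33, "uniform"))
print(layer_densities([(4096, 1024), (1024, 10000)], 0.33, "er"))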
Analysis of Growth Methods. Methods that leverage gradient-based weight growth (SNFS and RigL) have shown superiority over the methods using random-based weight growth for CNNs. However, we observe a different behavior with RNNs. We set up a controlled experiment to compare these two methods with SNT-ASGD and momentum SGD. We report the results with various update intervals (the number of iterations between sparse connectivity updates) in Figure 4. Surprisingly, gradient-based growth performs worse than random-based growth in most cases. And there is an increased performance gap as the update interval increases. Our hypothesis is that random growth helps in exploring better the search space, as it naturally considers a large number of various sparse connectivities during training, which is crucial to the performance of dynamic sparse training. Differently, gradient growth drives the network topology towards some similar local optima for the sparse connectivity as it uses a greedy search strategy (highest gradient magnitude) at every topological change. However, benefits provided by high-magnitude gradients might change dynamically afterwards due to complicated interactions between weights. We empirically illustrate our hypothesis via the proposed distance measure between sparse connectivities in Appendix K.
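To make the contrast concrete, here is a minimal sketch of the two growth rules compared in this subsection, written with PyTorch tensors; the helper names are illustrative and the gradient variant is a simplified stand-in for what RigL and SNFS do.

import torch

def grow_random(mask, k):
    # Random growth (Selfish-RNN): activate k zero positions chosen uniformly
    # at random, using no information about the non-existing weights.
    flat = mask.view(-1)
    zeros = (flat == 0).nonzero(as_tuple=False).flatten()
    picks = zeros[torch.randperm(zeros.numel())[:k]]
    flat[picks] = 1
    return mask

def grow_gradient(mask, grad, k):
    # Gradient growth (RigL/SNFS): activate the k zero positions with the
    # largest gradient magnitude, which requires dense gradient information.
    scores = grad.abs() * (mask == 0)
    picks = torch.topk(scores.flatten(), k).indices
    mask.view(-1)[picks] = 1
    return mask

Because the gradient rule greedily prefers the currently highest-gradient positions, repeated updates tend to pull different runs towards similar connectivity patterns, whereas the random rule keeps sampling new regions of the search space.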
Analysis of Hyper-parameters. The sparsity S and the initial removing rate p are two hyperparameters of our method. We show their sensitivity analysis in Appendix F and Appendix G. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. And our method is quite robust to the choice of the initial removing rate.
5 CONCLUSION
In this paper, we proposed an approach to train sparse RNNs from scratch with a fixed parameter count throughout training. Further, we introduced SNT-ASGD, a specially designed sparse optimizer for training sparse RNNs, and we showed that it substantially improves the performance of all dynamic sparse training methods in RNNs. We observed that random-based growth achieves lower perplexity than gradient-based growth in the case of RNNs. Further, we developed an approach to compare two different sparse connectivities from the perspective of graph theory. Using this approach, we found that random-based growth explores the topological search space for optimal sparse connectivities better, whereas gradient-based growth is prone to drive the network towards similar sparse connectivity patterns, opening the path for a better understanding of sparse training.
A ABLATION STUDY
To verify whether the improvement shown above is caused by the cell weight redistribution or by the sparse NT-ASGD, we conduct an ablation study for all architectures. To avoid confounding factors, all models use the same hyper-parameters as the ones reported in the paper, and fine-tuning is still used. We present the validation and test perplexity for variants of our model without these two contributions, as shown in Table 4. Not surprisingly, removing either of these two novelties degrades the performance. There is a significant degradation in performance for all models, up to 13 perplexity points, if the optimizer switches to the standard NT-ASGD. This stands as empirical evidence regarding the benefit of SNT-ASGD. Without cell weight redistribution, the test perplexity also rises. The only exception is RHN, whose number of redistributed weight tensors in each layer is only two. This empirically shows that cell weight redistribution is more effective for models with more cell weights.
B COMPARISON OF DIFFERENT CELL WEIGHT REDISTRIBUTION METHODS
In Table 5, we conduct a small experiment to compare different methods of cell weight redistribution with stacked LSTMs, including redistributing based on the mean value of the magnitude of nonzero weights from different cell weights and the mean value of the gradient magnitude of nonzero weights.
C EXPERIMENTAL DETAILS FOR RHN
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs that allows exploring deeper architectures inside the recurrent transition. Instead of stacking recurrent layers directly, RHN stacks multiple highway layers on top of the recurrent state transition. Within each highway layer, free weights are redistributed across the input weight and the state weight. The sparsity levels are set the same as for ISS, 67.7% and 52.8%. Dropout rates are set to 0.20 for the embedding layer, 0.65 for the input, 0.25 for the hidden units, and 0.65 for the output layer. The model is trained for 500 epochs with a learning rate of 15, a batch size of 20, and a sequence length of 35. At the end of each training epoch, new weights are redistributed across the weights of the H nonlinear transform and the T gate.
D EXPERIMENTAL RESULTS WITH ON-LSTM
Table 6: Single model perplexity on validation and test sets for the Penn Treebank language modeling task with ON-LSTM. Methods with “ASGD” are trained with SNT-ASGD. The numbers reported are averaged over five runs.
Models              #Param   Val            Test
Dense1000           25M      58.29 ± 0.10   56.17 ± 0.12
Dense1300           25M      58.55 ± 0.11   56.28 ± 0.19
SET                 11.3M    65.90 ± 0.08   63.56 ± 0.14
DSR                 11.3M    65.22 ± 0.07   62.55 ± 0.06
SNFS                11.3M    68.00 ± 0.10   65.52 ± 0.15
RigL                11.3M    64.41 ± 0.05   62.01 ± 0.13
RigL1000 (ASGD)     11.3M    59.17 ± 0.08   57.23 ± 0.09
RigL1300 (ASGD)     11.3M    59.10 ± 0.05   57.44 ± 0.15
Selfish-RNN1000     11.3M    58.17 ± 0.06   56.31 ± 0.10
Selfish-RNN1300     11.3M    57.67 ± 0.03   55.82 ± 0.11
Table 7: Single model perplexity on validation and test sets for the WikiText-2 language modeling task with AWD-LSTM-MoS. Baseline is AWD-LSTM-MoS obtained from Yang et al. (2018). Methods with “ASGD” are trained with SNT-ASGD.
Models        #Param   Val     Test
Dense         35M      66.01   63.33
SET           15.6M    72.82   69.61
DSR           15.6M    69.95   66.93
SNFS          15.6M    79.97   76.18
RigL          15.6M    71.36   68.52
RigL (ASGD)   15.6M    68.84   65.18
Selfish-RNN   15.6M    65.96   63.05
Recently proposed by Shen et al. (2019), ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization as used for ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD as a fine-tuning step once at the 500th epoch, dubbed Selfish-RNN1000. As shown in Table 6, Selfish-RNN outperforms the dense model while reducing the model size to 11.3M. Without SNT-ASGD, sparse training techniques cannot reduce the test perplexity to 60. SNT-ASGD is able to improve the performance of RigL by 5 perplexity points. Moreover, one interesting observation is that one of the regularizations used in the standard ON-LSTM, DropConnect, is perfectly compatible with our method, although it also randomly drops the hidden-to-hidden weights during training.
In our experiments we observe that Selfish-RNN benefits significantly from a second fine-tuning operation. We scale the learning schedule to 1300 epochs with two fine-tuning operations after 500 and 1000 epochs, respectively, dubbed Selfish-RNN1300. It is interesting that Selfish-RNN1300 achieves lower test perplexity after the second fine-tuning step, whereas the dense model Dense1300 cannot even regain the perplexity that it had before the second fine-tuning. The heuristic explanation is that our method helps the optimization escape local optima or local saddle points by optimizing the sparse structure, while for dense models, whose energy landscape is fixed, it is very difficult for the optimizer to find its way off a saddle point or out of a local optimum.
E EXPERIMENTAL RESULTS WITH AWD-LSTM-MOS
We also evaluate Selfish-RNN on the WikiText-2 dataset. The model we choose is AWD-LSTM-MoS (Yang et al., 2018), which is the state-of-the-art RNN-based language model. It replaces the Softmax with a Mixture of Softmaxes (MoS) to alleviate the Softmax bottleneck issue in modeling natural language. For a fair comparison, we exactly follow the model hyper-parameters and regularization used in AWD-LSTM-MoS. We sparsify all layers with 55% sparsity except for the prior layer, as its number of parameters is negligible. We train our model for 1000 epochs without fine-tuning or dynamic evaluation (Krause et al., 2018) to simply show the effectiveness of our method. As demonstrated in Table 7, Selfish AWD-LSTM-MoS can reach dense performance with 15.6M parameters.
F EFFECT OF SPARSITY
There is a trade-off between the sparsity level S and the test perplexity of Selfish-RNN. When there are too few parameters, the sparse neural network will not have enough capacity to model the data. If the sparsity level is too small, the training acceleration will be small. Here, we analyze this trade-off by varying the sparsity level while keeping the other experimental setup the same, as shown in
Figure 5a. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. Generally, the performance of sparsified models is decreasing as the sparsity level increases.
G EFFECT OF INITIAL REMOVING RATE
The initial removing rate p determines the number of removed weights at each connectivity update. We study the sensitivity of our algorithm to the initial removing rate p by varying it over {0.3, 0.5, 0.7}. We set the sparsity level of each model to the one with the best performance in Figure 5a. Results are shown in Figure 5b. We can clearly see that our method is very robust to the choice of the initial removing rate.
H DIFFERENCE AMONG SET, SELFISH-RNN AND ITERATIVE PRUNING METHODS
The topology update strategy of Selfish-RNN differs from SET in several important ways: (1) we automatically redistribute weights across cell weight tensors for better regularization, (2) we use magnitude-based removal instead of removing a fraction of the smallest positive weights and the largest negative weights, and (3) we use uniform initialization rather than a non-uniform sparse distribution like ER or ERK, as it consistently achieves better performance. Additionally, the optimizer proposed in this work, SNT-ASGD, brings a substantial perplexity improvement to sparse RNN training.
Figure 6-left gives a high-level overview, from an efficiency perspective, of the difference between Selfish-RNN and iterative pruning techniques (Han et al., 2016; Zhu & Gupta, 2017; Frankle & Carbin, 2019). Conventional pruning and re-training techniques usually involve three steps: (1) pre-training a dense model, (2) pruning unimportant weights, and (3) re-training the pruned model to improve performance. The pruning and re-training cycle takes place at least once, but it may also be iterated several times depending on the specific algorithm used. Therefore, the sparse networks obtained via iterative pruning involve at least pre-training a dense model. Different from the aforementioned three-step techniques, the FLOPs required by Selfish-RNN are proportional to the density of the model, as it allows us to train a sparse network with a fixed number of parameters throughout training in one single run, without any re-training phases. Moreover, the overhead caused by the adaptive sparse connectivity operation is negligible, as it is performed only once per epoch.
I COMPARISON BETWEEN SELFISH-RNN AND PRUNING
It has been shown by Evci et al. (2020) that while state-of-the-art sparse training method (RigL) achieves promising performance in terms of CNNs, it fails to match the performance of pruning in RNNs. Given the fact that magnitude pruning has become a widely-used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude pruning with stacked LSTMs. The pruning baseline here is the Tensorflow Model Pruning library (Zhu & Gupta, 2017). The results are demonstrated in Figure 6-right.
We can see that Selfish-RNN exceeds the performance of pruning in most cases. An interesting phenomenon is that, with increased sparsity, the performance gap between Selfish-RNN and pruning shrinks. In particular, Selfish-RNN performs worse than pruning when the sparsity level is 95%. This can be attributed to the poor trainability of sparse models at extreme sparsity levels. As noted in Lee et al. (2020), an extremely sparse structure can break the dynamical isometry (Saxe et al., 2014) of sparse networks, which degrades their trainability. Different from sparse training methods, pruning starts from a dense network and thus does not have this problem.
J SPARSE TOPOLOGY DISTANCE MEASUREMENT
Our sparse topology distance measurement considers the unit alignment based on a semi-matching technique introduced by Li et al. (2016) and a graph distance measurement based on graph edit distance (GED) (Sanfeliu & Fu, 1983). More specifically, our measurement includes the following steps:
Step 1: We train two sparse networks with dynamic sparse training on the training dataset and store the sparse topology after each epoch. Let W_l^i be the set of sparse topologies for the l-th layer of network i.
Step 2: Using the saved model, we compute the activity output on the test data, O_l^i ∈ R^{n×m}, where n is the number of hidden units and m is the number of samples.
Step 3: We leverage the unit activities of each layer to match the topologies W_l^i pairwise. We achieve unit matching between a pair of networks by finding, for each unit in one network, the unit in the other network with the maximum correlation.
Step 4: After alignment, we apply graph edit distance (GED) to measure the similarity between the pairwise W_l^i. Eventually, the distance is scaled to lie between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are.
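The sketch below gives one possible concrete realization of these steps, assuming NumPy arrays; it further assumes that, once the units are aligned, the graph edit distance between two masks reduces to counting connections present in one topology but not the other, scaled by the total number of connections. The exact GED formulation used in the paper may differ.

import numpy as np

def align_units(act_a, act_b):
    # Semi-matching unit alignment (Step 3): for every hidden unit of network A,
    # find the unit of network B whose activations are maximally correlated.
    # act_a, act_b have shape (n_units, n_samples).
    n = act_a.shape[0]
    corr = np.corrcoef(act_a, act_b)[:n, n:]
    return corr.argmax(axis=1)

def topology_distance(mask_a, mask_b, perm):
    # Step 4 (simplified): scaled edit distance between the two sparse
    # connectivities after reordering the output units of B to match A.
    aligned_b = mask_b[perm]
    diff = np.logical_xor(mask_a > 0, aligned_b > 0).sum()
    total = (mask_a > 0).sum() + (aligned_b > 0).sum()
    return diff / total

Only the output-unit dimension of the weight masks is permuted in this sketch; handling the recurrent (hidden-to-hidden) connections would require permuting both dimensions.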
Here, we choose stacked LSTMs on the PTB dataset as a specific case to analyze. Specifically, we train two stacked LSTMs for 100 epochs with different random seeds. We choose a relatively small removing rate of 0.1. We start the alignment at the 5th epoch to ensure a good alignment result, as at the very beginning of training the networks have not yet learned very well. We then use the matched order of output tensors to align the pairwise topologies W_l^i.
K TOPOLOGICAL DISTANCE OF GROWTH METHODS
In this section, we empirically illustrate that gradient growth drives different networks into similar connectivity patterns, based on the proposed distance measurement between sparse connectivities. The initial removing rates are set to 0.1 for all training runs in this section. First, we measure the topological distance between two different training runs trained with gradient growth and random growth, respectively, as shown in Figure 7. We can see that, starting with very different sparse connectivity topologies, two networks trained with random growth end up at roughly the same distance, whereas the topological distance between networks trained with gradient growth is continuously decreasing, and this tendency is likely to continue as training goes on. We further report the distance between two networks with the same initialization but different training seeds when trained with gradient growth and random growth, respectively. As shown in Figure 8, the distance between sparse networks discovered by gradient growth is smaller than the distance between sparse networks discovered by random growth. These observations are in line with our hypothesis that gradient growth drives networks into similar structures, whereas random growth explores more of the sparse structures spanned by the dense network.
L FLOPS ANALYSIS OF DIFFERENT APPROACHES
We follow the approach of calculating training FLOPs layer by layer based on the layer sparsity level s_l, as proposed by Evci et al. (2020). We split the process of training a sparse recurrent neural network into two steps: the forward pass and the backward pass.
Forward pass. In order to calculate the loss of the current model given a batch of input data, the output of each layer needs to be computed based on a linear transformation and a non-linear activation function. Within each RNN layer, different cell weights are used to regulate information over the sequence using the output of the previous time step and the input of the current time step.
Backward pass. In order to update the weights, during the backward pass each layer calculates two quantities: the gradient of the loss function with respect to the activations of the previous layer and the gradient of the loss function with respect to its own weights. Therefore, the computational expense of the backward pass is twice that of the forward pass. Given that RNN models usually contain an embedding layer, from which it is very efficient to pick a word vector, for models not using weight tying we
only count the computations to calculate the gradient of its parameters as the training FLOPs and we omit its inference FLOPs. For models using weight tying, both the training FLOPs and the inference FLOPs are omitted.
Given a specific architecture, we denote f_D as the dense FLOPs required to finish one training iteration and f_S as the corresponding sparse FLOPs (f_S ≈ (1 − S) f_D), where S is the sparsity level. Thus f_S ≪ f_D for very sparse networks. Since different sparse training methods lead to different sparse distributions, their FLOPs f_S also differ from each other. We omit the FLOPs used to update the sparse connectivity, as this is performed only once per epoch. Overall, the total FLOPs required for one training update on one single sample are given in Table 8. The training FLOPs of dense-to-sparse methods, like ISS and pruning, are 3 f_D · s_t, where s_t is the sparsity of the model at iteration t. Since dense-to-sparse methods require training a dense model for a while, their training FLOPs and memory requirements are higher than those of our method. For methods that allow the sparsity of each layer to change dynamically, e.g., DSR and SNFS, we approximate their training FLOPs via their final distribution, as their sparse distribution converges to the final distribution in the first few epochs. The ER distribution causes a bit more inference FLOPs than the uniform distribution because it allocates more weights to the RNN layers than to other layers. SNFS requires extra FLOPs to calculate dense gradients during the backward pass. Although RigL also uses the dense gradients to assist weight growth, it only needs to calculate dense gradients every ∆T iterations; thus its averaged FLOPs are given by (3 f_S ∆T + 2 f_S + f_D) / (∆T + 1). Here, we simply omit the extra FLOPs required by gradient-based growth, as they are negligible compared with the whole training FLOPs.
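As a small illustration of this accounting, the sketch below estimates the per-step training FLOPs of a few methods from the dense cost f_D and the sparsity S; the grouping follows the description above, and the numbers are only approximations (the per-layer breakdown of Table 8 is not reproduced).

def training_flops_per_step(f_dense, sparsity, method="selfish", delta_t=100):
    # Forward pass costs f, backward pass 2f; f_S ~= (1 - S) * f_D.
    f_s = (1.0 - sparsity) * f_dense
    if method in ("selfish", "set"):      # fully sparse forward and backward
        return 3.0 * f_s
    if method == "snfs":                  # dense gradients at every iteration
        return 2.0 * f_s + f_dense
    if method == "rigl":                  # dense gradients every delta_t steps
        return (3.0 * f_s * delta_t + 2.0 * f_s + f_dense) / (delta_t + 1)
    raise ValueError(method)

# Example: sparsity 0.67, arbitrary dense forward cost of 1.0 FLOP-unit.
for m in ("selfish", "snfs", "rigl"):
    print(m, round(training_flops_per_step(1.0, 0.67, m), 3))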
For inference, we calculate the inference FLOPs on a single sample based on the final sparse distribution learned by the different methods.
M FINAL CELL WEIGHT SPARSITY BREAKDOWN
We further study the final sparsity level across cell weights learned automatically by our method. We consistently observe that the weights of the forget gates, either the forget gate in the standard LSTM or the master forget gate in ON-LSTM, tend to be sparser than the weights of the other gates, whereas the weights of the cell gates and output gates are denser than the average, as shown in Figure 9. However, there is no big difference between the weights in RHN, although the H nonlinear transform weight is slightly sparser than the T gate weight in most RHN layers. This phenomenon is in line with the ablation analysis, where cell weight redistribution does not provide a performance improvement for RHNs. Cell weight redistribution is more important for models with more regulating weights.
N LIMITATION
The aforementioned training benefits have not been fully explored, as off-the-shelf software and hardware have limited support for sparse operations. The unstructured sparsity is difficult to be efficiently mapped to the existing parallel processors. The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks including Gray et al. (2017); Moradi et al. (2019); Ma et al. (2019); Yang & Ma (2019); Liu et al. (2020b). For instance, NVIDIA introduces the A100 GPU enabling the Fine-Grained Structured Sparsity (NVIDIA, 2020). The sparse structure is enforced by allowing two nonzero values in every four-entry vector to reduce memory storage and bandwidth by almost 2×. We do not claim that Selfish-RNN is the best way to obtain sparse recurrent neural networks, but simply highlights that it is an important future research direction to develop more efficient hardware and software to benefit from sparse neural networks. | 1. What is the focus and contribution of the paper on training sparse recurrent models?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its architectural choices and empirical results?
3. Do you have any concerns or suggestions regarding the explanations provided in section 3 of the paper?
4. How do the authors choose hyperparameters for various models and approaches in their experimental campaigns?
5. What is the significance of table 5 in the appendix, and how does it influence the performance of the DSR?
6. Can the authors provide more insights into the reason behind the benefits of using the random growth approach?
7. How does the proposed method compare with other recent approaches that use sparsity in recurrent models, such as "Intrinsically Sparse Long Short-Term Memory Networks"? | Review | Review
In this paper, the authors propose an approach to train sparse recurrent models, together with a sparse variant of NT-ASGD. The proposed method mixes some interesting novel methodologies and achieves interesting empirical results on the Penn Treebank and WikiText-2 language modeling tasks. In general, the paper is well written and interesting, but in section 3 the rationale behind some architectural choices of the Selfish-RNN methodology is only partially explained, and sometimes it is justified only by empirical results (e.g. the cell weight redistribution). To me, a more theoretical explanation would significantly improve the manuscript's readability.

In section 4 many different approaches are considered, but there are a few points that are not clear. The authors report the results of a "small" dense network, but no information about this model is given in the text. Reading the results reported in table 5 of the appendix, I found it interesting that the performance of DSR improves significantly by using SNT-ASGD instead of Adam (it outperforms the Selfish-RNN). This table shows how much the optimizer influences model performance. Even if the ablation study reported in appendix A highlights the benefits of SNT-ASGD, the results reported in table 5 show that the impact of this component is even more important than the Selfish-RNN itself. Honestly, I think it is fairer to compare all the methods using the same optimization algorithm; therefore my suggestion is to move this table into the main paper and extend the analysis of these results.

Reading the manuscript, it is not clear how the hyper-parameters considered in the experimental campaigns have been chosen. From the first part of section 4.1 it seems that parameters like the removing rate or the number of epochs are set without performing any validation on them. Even in appendix D, the hyper-parameters (e.g. the learning rate, or the batch size) used to test the RHN are just listed. The authors should insert a more extensive explanation of how the hyper-parameters of the various models/approaches considered in the comparison have been validated. To perform a fair comparison, the hyper-parameters of each model should be chosen according to its performance on the validation set. In this regard, it is also important to highlight how the hyper-parameters are chosen because some SOTA models achieved better results: for instance, on the Penn Treebank dataset, in "On The State Of The Art Of Evaluation In Neural Language Models", Melis et al. report perplexities on the test set of 59.7.

Regarding the claim that random growth helps in exploiting better the search space: the reported results in the paper (and in Appendix L) show the benefits of using this approach, but honestly, to me, it is not clear if it helps in exploring the state space. In general, it is not clear what the reason is why the model benefits from using the random growth approach. Moreover, in "Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware" the gradient-guided growth strategy outperforms the other sparse training approaches considered in the paper, even in the RNN case. Therefore a more extended evaluation/discussion of this point is required.

Another recently proposed approach that uses sparsity in recurrent models is defined in "Intrinsically Sparse Long Short-Term Memory Networks" by Liu et al.; the authors should compare this approach with the Selfish-LSTM.
ICLR | Title
Selfish Sparse RNN Training
Abstract
Sparse neural networks have been widely applied to reduce the necessary resource requirements to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a large and dense network (sparse-to-sparse), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in Recurrent Neural Networks (RNNs) setting. In this paper, we propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell weights for a better regularization. Further, we introduce SNT-ASGD, a variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, even better than dense model results, with various types of RNNs on Penn TreeBank and Wikitext-2 datasets.
1 INTRODUCTION
Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task to evaluate models’ ability to capture long-range context, language modeling has witnessed great progress in RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), ordered neurons LSTM (ON-LSTM) (Shen et al., 2019), and effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), DropConnect (Merity et al., 2018) have been proposed to significantly improve the performance of RNNs.
At the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 Regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate the inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency for both, the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to be sparsified (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with the existing DST approaches. The above-mentioned problems heavily limit the performance
of the off-the-shelf sparse training methods in the RNN field. For instance, while “The Rigged Lottery” (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed number of computational costs throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:
• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters. This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms. • We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged
stochastic gradient descent optimizer, which overcomes the over-sparsified problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training. • We demonstrate state-of-the-art sparse training performance with various RNN models,
including stacked LSTMs (Zaremba et al., 2014), RHNs, ordered neurons LSTM (ONLSTM) on Penn TreeBank (PTB) dataset (Marcus et al., 1993) and AWD-LSTM-MoS on WikiText-2 dataset (Melis et al., 2018). • We present an approach to analyze the evolutionary trajectory of the sparse connectivity
optimized by dynamic sparse training from the perspective of graph. With this approach, we show that there exist many good structural local optima (sparse sub-networks having equally good performance) in RNNs, which can be found in an efficient and robust manner. • Our analysis shows two surprising phenomena in the setting of RNNs contrary to CNNs:
(1) random-based weight growth performs better than gradient-based weight growth, (2) uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures.
2 RELATED WORK
Dense-to-Sparse. There are a large amount of works operating on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people’s attention based on the idea of iterative pruning and retraining with modern architectures. Some recent works were proposed to further reduce the number of iterative retraining e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis showing that the sub-networks (“winning tickets”) obtained via iterative pruning combined with their “lucky” initialization can outperform the dense networks. Zhou et al. (2019) discovered that the sign of their initialization is the crucial factor that
makes the “winning tickets” work. Our work shows that there exists a much more efficient and robust way to find those “winning ticketts” without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With fewer training costs, our method is able to discover sparse networks that can achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn the sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs. Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structure by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. Some works aim to find sparse neural networks by pruning once prior to the main training phase based on some salience criteria, such as connection sensitivity (Lee et al., 2019), signal propagation, (Lee et al., 2020), and gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).
Sparse-to-Sparse. Recently, many works have emerged to train intrinsically sparse neural networks from scratch to obtain efficiency both for training and inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works expand static sparse training into CNNs based on expander graphs and show comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET) which initializes a sparse network and dynamically changes the sparse connectivity by a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior. The iterative configuration updates have been proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved the sparse training performance by growing free weights according to their momentum. It requires extra computation and memory to update the dense momentum tensor for each iteration. Further, Evci et al. (2020) introduced RigL which activates weights with the highest magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing a large number of memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over weights. More recently, Liu et al. (2020b) proved the potentials of DST by developing for the first time an independent software framework to train very large truly sparse MLPs trained with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs.
We summarize the properties of all approaches compared in this paper in Table 1. Same with SET, our method can guarantee Backward Sparse, which does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H.
3 SPARSE RNN TRAINING
Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can be easily applied to any other RNN variants. The only difference is the number of cell weights. Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to the dense model at the beginning. To explore more sparse structures, while to maintain a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1.
3.1 DYNAMIC SPARSE CONNECTIVITY
We consider uniform sparse initialization, magnitude weight removal, random weight growth, cell weight redistribution together as main components of our dynamic sparse connectivity method, which can ensure a fixed number of parameters and a clear sparse backward pass, as discussed next. Notation. Given a dataset of N samples D = {(xi, yi)}Ni=1 and a network f(x; θ) parameterized by θ. We train the network to minimize the loss function ∑N i=1 L(f(xi; θ), yi). The basic mechanism of sparse neural networks is to use a fraction of parameters to reparameterize the whole network, while preserving the performance as much as possible. Hence, a sparse neural network can be denoted as fs(x; θs) with a sparsity level S = 1− ‖θs‖0‖θ‖0 , where ‖ · ‖0 is the `0-norm. Uniform Sparse Initialization. First, the network is uniformly initialized with a sparse distribution in which the sparsity level of each layer is the same S. More precisely, the network is initialized by:
θs = θ M (1)
where θ is a dense weight tensor initialized in a standard way; M is a binary tensor, in which nonzero elements are sampled uniformly based on the sparsity S; refers to the Hadamard product. Magnitude Weight Removal. For non-RNN layers, we use magnitude weight removal followed by random weight growth to update the sparse connectivity. We remove a fraction p of weights with the smallest magnitude after each training epoch. This step is performed by changing the binary tensor M , as follows: M =M − P (2) where P is a binary tensor with the same shape as M , in which the nonzero elements have the same indices with the top-p smallest-magnitude nonzero weights in θs, with ||P ||0 = p||M ||0. Random Weight Growth. To keep a fixed parameter count, we randomly grow the same number of weights immediately after weight removal, by:
M =M +R (3)
where R is a binary tensor where the nonzero elements are randomly located at the position of zero elements of M . We choose random growth to get rid of using any information of the non-existing weights, so that both feedforward and backpropagation are completely sparse. It is more desirable to have such pure sparse structures as it enables the possibility of conceiving in the future specialized hardware accelerators for sparse neural networks. Besides, our analysis of growth methods in Section 4.3 shows that random growth can explore more sparse structural degrees of freedom than gradient growth, which might be crucial to the sparse training. Cell Weight Redistribution. Our dynamic sparse connectivity differs from previous methods mainly in cell weight redistribution. For RNN layers, the naive approach is to sparsify all cell weight tensors independently at the same sparsity, as shown in Liu et al. (2019) which is a straightforward extension of applying SET to RNNs. Essentially, it is more desirable to redistribute new parameters to cell weight tensors dependently, as all cell weight tensors collaborate together to regulate information. Intuitively, we redistribute new parameters in a way that weight tensors containing more largemagnitude weights should have more parameters. Large-magnitude weights indicate that their loss
gradients are large and few oscillations occur. Thus, weight tensors with more large-magnitude connections should be reallocated with more parameters to accelerate training. Concretely, for each RNN layer l, we remove weights dependently given by an ascending sort:
Sortp(|θl1|, |θl2|, .., |θlt|) (4)
where {θl1, θl2, ..., θlt} are all weight tensors within each cell, and Sortp returns p indices of the smallest-magnitude weights. After weight removal, new parameters are uniformly grown to each weight tensor to implement our cell weight redistribution gradually. We also tried other approaches including the mean value of the magnitude of nonzero weights or the mean value of the gradient magnitude of nonzero weights, but our approach achieves the best performance, as shown in Appendix B. We further demonstrate the final sparsity breakdown of cell weights learned by our method in Appendix M and observe that weights of forget gates are consistently sparser than other weights for all models. Note that redistributing parameters across cell weight tensors does not change the FLOP counting, as the sparsity of each layer is not changed. In contrast, the across-layer weight redistribution used by DSR and SNFS affects the sparsity level of each layer. As a result, it will change the number of floating-point operations (FLOPs).
Similar with SNFS, We also decay the removing rate p to zero with a cosine annealing. We further use Eq. (1) to enforce the sparse structure before the forward pass and after the backward pass, so that the zero-valued weights will not contribute to the loss. And all the newly activated weights are initialized to zero.
3.2 SPARSE NON-MONOTONICALLY TRIGGERED ASGD
Non-monotonically Triggered ASGD (NT-ASGD) has been shown to achieve surprising performance with various RNNs (Merity et al., 2018; Yang et al., 2018; Shen et al., 2019). However, it becomes less appealing for sparse RNNs training. Unlike dense networks in which every parameter in the model is updated at each iteration, for sparse networks, the zero-valued weights remain zero when they are not activated. Once these zero-valued weights are activated, the original averaging operation of standard NT-ASGD will immediately bring them close to zero. Thereby, after the averaging operation is triggered, the number of valid weights will decrease sharply as shown in Figure 2. To alleviate this problem, we introduce SNT-ASGD as following:
w̃i =
{ 0 if mi = 0,∀i,∑K
t=Ti wi,t (K−Ti+1) if mi = 1,∀i. (5)
where w̃i is the value returned by SNT-ASGD for weight wi; wi,t represents the actual value of weight wi at the tth iteration; mi = 1 if the weight wi exists and mi = 0 means that the weight wi does not exist; Ti is the iteration in which the weight wi grows most recently; and K is the total
number of iterations. We demonstrate the effectiveness of SNT-ASGD in Figure 2. At the beginning, trained with SGD, the number of weights with high magnitude increases fast. However, the trend starts to descend significantly once the optimization switches to NT-ASGD at the 80th epoch, whereas the trend of SNT-ASGD continues to rise after a small drop caused by the averaging operation.
To better understand how the proposed components, cell weight redistribution and SNT-ASGD, improve sparse RNN training performance, we conduct an ablation study in Appendix A. Both lead to significant performance improvements.
4 EXPERIMENTAL RESULTS
We evaluate Selfish-RNN with various models including stacked LSTMs, RHNs, ON-LSTM on the Penn TreeBank dataset and AWD-LSTM-MoS on the WikiText-2 dataset. The performance of Selfish-RNN is compared with 5 state-of-the-art sparse inducing techniques, including Intrinsic Sparse Structures (ISS) (Wen et al., 2018), SET, DSR, SNFS, and RigL. ISS is a method to explore sparsity inside RNNs by using group Lasso regularization. We choose Adam (Kingma & Ba, 2014) optimizer for SET, DSR, SNFS, and RigL. We also evaluate our methods with two state-of-the-art RNN models, ON-LSTM on PTB and AWD-LSTM-MoS on Wikitext-2, as reported in Appendix D and Appendix E, respectively.
4.1 STACKED LSTMS
As introduced by Zaremba et al. (2014), stacked LSTMs (large) is a two-layer LSTM model with 1500 hidden units for each LSTM layer. We choose the same sparsity as ISS, 67% and 62%. We empirically found that 0.7 is a safe choice for the removing rate of stacked LSTMs. The clip norm is set to 0.25 and all models are trained for 100 epochs.
Results are shown on the left side of Table 2. To evaluate our sparse training method fairly, we also provide a new dense baseline trained with the standard NT-ASGD, achieving a test perplexity 6 points lower than the widely used baseline. We also test whether a small dense network or a static sparse network
with the same number of parameters as Selfish-RNN can match the performance of Selfish-RNN. We train a dense stacked LSTM with 700 hidden units, referred to as “Small”. In line with previous studies (Mocanu et al., 2018; Mostafa & Wang, 2019; Evci et al., 2020), both the static sparse networks and the small dense network fail to match the performance of Selfish-RNN. Training a static sparse network from scratch with a uniform distribution performs better than with an ER distribution. Trained with Adam, all sparse training techniques fail to match the performance of ISS and dense models. Models trained with SNT-ASGD obtain substantially lower perplexity, and Selfish-RNN achieves the lowest, even better than the new dense baseline at a much lower training cost.
To better understand the effect of different optimizers on different DST methods, we report the performance of all DST methods trained with Adam, momentum SGD, and SNT-ASGD. The learning rate of Adam is set to 0.001. The learning rate of momentum SGD is set to 2 and is decreased by a factor of 1.33 whenever the loss fails to decrease; the momentum coefficient is 0.9. The weight decay is set to 1.2e-6 for all optimizers. For SNFS (SNT-ASGD), we replace the momentum of the weights with their gradients, as SNT-ASGD does not involve any momentum terms. We use the same hyperparameters for all DST methods. The results are shown in Table 3. It is clear that SNT-ASGD brings significant perplexity improvements to all sparse training techniques. This further stands as empirical evidence that SNT-ASGD is crucial for improving sparse training performance in the RNN setting. Moreover, compared with other DST methods, Selfish-RNN is quite robust to the choice of optimizer due to its simple scheme for updating sparse connectivity. Advanced strategies such as the across-layer weight redistribution used in DSR and SNFS, or the gradient-based weight growth used in RigL and SNFS, depend heavily on the optimizer: they might work decently for some optimization methods but not for others.
Additionally, note that different DST methods use different sparse distributions, leading to very different computational costs even at the same sparsity. We also report the approximate training and inference FLOPs for all methods. The FLOP gap between Selfish-RNN and RigL is very small, whereas SNFS requires more FLOPs than our method for both training and inference (see Appendix L for details on how FLOPs are calculated). ISS achieves a lower number of FLOPs, since it does not sparsify the embedding layer and therefore its LSTM layers are much sparser than the LSTM layers obtained by other methods. This results in fewer FLOPs, as LSTM layers typically require more FLOPs than other layers.
4.2 RECURRENT HIGHWAY NETWORKS
Recurrent Highway Networks (Zilly et al., 2017) is a variant of RNNs allowing RNNs to explore deeper architectures inside the recurrent transition. See Appendix C for experimental settings of RHN. The results are shown in the right side of Table 2. Selfish-RNN achieves better performance than the dense model with half FLOPs. Unlike the large FLOP discrepancy of stacked LSTMs, the FLOP gap between different sparse training techniques for RHNs is very small, except SNFS which requires computing dense momentum for each iteration. Additionally, ISS has similar FLOPs with Selfish-RNN for RHN, as it sparsifies the embedding layer as well.
4.3 ANALYZING THE PERFORMANCE OF SELFISH-RNN
Analysis of Evolutionary Trajectory of Sparse Connectivity. The fact that Selfish-RNN consistently achieves good performance with different runs naturally raises some questions: e.g., are final sparse connectivities obtained by different runs similar or very different? Is the distance between the original sparse connectivity and the final sparse connectivity large or small? To answer these questions, we investigate a method based on graph edit distance (GED) (Sanfeliu & Fu, 1983) to measure the topological distance between different sparse connectivities learned by different runs. The distance is scaled between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are (See Appendix J for details on how we measure the sparse topological distance).
The results are shown in Figure 3. Figure 3-left shows how the topology of one randomly initialized network evolves when trained with Selfish-RNN. We compare the topological distance between the sparse connectivity obtained at the 5th epoch and the sparse connectivities obtained in the following epochs. We can see that the distance gradually increases from 0 to a very high value (0.8), meaning that Selfish-RNN optimizes the initial topology into a very different one by the end of training. Moreover, Figure 3-right shows the topological distance between two identically initialized networks trained with different seeds, measured after the 4th epoch. Starting from the same sparse topology, they evolve to completely different sparse connectivities. Note that even when leading to completely different sparse connectivities, different runs achieve similarly good performance, which indicates that in the case of RNNs there exist many good local optima in terms of sparse connectivity with equally good performance. This phenomenon complements the findings of Liu et al. (2020c), which show that there are numerous sparse sub-networks performing similarly well in the context of MLPs.
Analysis of Sparse Initialization. We compare two types of sparse initialization, the ER distribution and the uniform distribution. The uniform distribution enforces the sparsity level of each layer to be the same value S, while the ER distribution allocates higher sparsity to larger layers than to smaller ones (a sketch of both allocations follows). Note that its variant Erdős-Rényi-kernel, proposed by Evci et al. (2020), reduces to ER for RNNs, as no kernels are involved. The results are shown as the Static group in Table 2. We can see that the uniform distribution consistently outperforms the ER distribution. Moreover, ER usually causes RNN layers to be less sparse than other layers, resulting in a small increase in FLOPs.
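Below is a minimal sketch of the two allocation schemes compared here; the exact ER scaling constant follows the common (n_in + n_out)/(n_in · n_out) rule from prior sparse-training work and is an assumption on our part, as are the function names.

```python
def uniform_sparsity(layer_shapes, S):
    """Uniform initialization: every layer receives the same sparsity S."""
    return {name: S for name in layer_shapes}

def er_sparsity(layer_shapes, S):
    """ER-style initialization: a layer's density is proportional to
    (n_in + n_out) / (n_in * n_out), rescaled so the overall sparsity is S."""
    raw = {name: (n_in + n_out) / (n_in * n_out)
           for name, (n_in, n_out) in layer_shapes.items()}
    budget = (1.0 - S) * sum(n_in * n_out for n_in, n_out in layer_shapes.values())
    scale = budget / sum(raw[name] * n_in * n_out
                         for name, (n_in, n_out) in layer_shapes.items())
    return {name: 1.0 - min(1.0, scale * raw[name]) for name in layer_shapes}
```

Note that the per-layer densities ER produces depend on layer shape, which is why the RNN layers end up at a different sparsity level than under the uniform rule, as noted above.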
Analysis of Growth Methods. Methods that leverage gradient-based weight growth (SNFS and RigL) have shown superiority over the methods using random-based weight growth for CNNs. However, we observe a different behavior with RNNs. We set up a controlled experiment to compare these two methods with SNT-ASGD and momentum SGD. We report the results with various update intervals (the number of iterations between sparse connectivity updates) in Figure 4. Surprisingly, gradient-based growth performs worse than random-based growth in most cases. And there is an increased performance gap as the update interval increases. Our hypothesis is that random growth helps in exploring better the search space, as it naturally considers a large number of various sparse connectivities during training, which is crucial to the performance of dynamic sparse training. Differently, gradient growth drives the network topology towards some similar local optima for the sparse connectivity as it uses a greedy search strategy (highest gradient magnitude) at every topological change. However, benefits provided by high-magnitude gradients might change dynamically afterwards due to complicated interactions between weights. We empirically illustrate our hypothesis via the proposed distance measure between sparse connectivities in Appendix K.
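For reference, here is a minimal sketch of the two growth rules compared in this controlled experiment (mask handling and names are our own simplification):

```python
import torch

def grow_random(mask, k):
    """Random growth: activate k currently-zero positions chosen uniformly at random."""
    zero_idx = (mask.view(-1) == 0).nonzero(as_tuple=False).squeeze(1)
    pick = zero_idx[torch.randperm(zero_idx.numel())[:k]]
    mask.view(-1)[pick] = 1
    return pick

def grow_by_gradient(mask, grad, k):
    """Gradient growth (as in RigL/SNFS): activate the k zero positions
    with the largest gradient magnitude."""
    scores = grad.abs().view(-1).clone()
    scores[mask.view(-1) == 1] = -1.0   # existing weights are not eligible to grow
    pick = torch.topk(scores, k=k).indices
    mask.view(-1)[pick] = 1
    return pick
```

The greedy top-k selection in the gradient rule is exactly what the hypothesis above points at: it repeatedly steers different runs toward similar connectivity patterns, whereas the random rule keeps sampling new regions of the topology space.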
Analysis of Hyper-parameters. The sparsity S and the initial removing rate p are two hyperparameters of our method. We show their sensitivity analysis in Appendix F and Appendix G. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. And our method is quite robust to the choice of the initial removing rate.
5 CONCLUSION
In this paper, we proposed an approach to train sparse RNNs from scratch with a fixed parameter count throughout training. Further, we introduced SNT-ASGD, a sparse optimizer specially designed for training sparse RNNs, and showed that it substantially improves the performance of all dynamic sparse training methods in RNNs. We observed that random-based growth achieves lower perplexity than gradient-based growth in the case of RNNs. Further, we developed an approach to compare two different sparse connectivities from the perspective of graph theory. Using this approach, we found that random-based growth explores the topological search space for optimal sparse connectivities better, whereas gradient-based growth is prone to drive the network towards similar sparse connectivity patterns, opening the path towards a better understanding of sparse training.
A ABLATION STUDY
To verify whether the improvement shown above is caused by the cell weight redistribution or by the sparse NT-ASGD, we conduct an ablation study for all architectures. To avoid distracting factors, all models use the same hyper-parameters as the ones reported in the paper, and fine-tuning is still applied. We present the validation and test perplexity for variants of our model without these two contributions in Table 4. Not surprisingly, removing either of the two novelties degrades the performance. There is a significant degradation in performance for all models, up to 13 perplexity points, if the optimizer is switched to the standard NT-ASGD. This stands as empirical evidence for the benefit of SNT-ASGD. Without cell weight redistribution, the test perplexity also rises. The only exception is RHN, whose number of redistributed weight tensors per layer is only two. This empirically shows that cell weight redistribution is more effective for models with more cell weights.
B COMPARISON OF DIFFERENT CELL WEIGHT REDISTRIBUTION METHODS
In Table 5, we conduct a small experiment to compare different methods of cell weight redistribution with stacked LSTMs, including redistributing based on the mean value of the magnitude of nonzero weights from different cell weights and the mean value of the gradient magnitude of nonzero weights.
C EXPERIMENTAL DETAILS FOR RHN
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs allowing deeper architectures inside the recurrent transition. Instead of stacking recurrent layers directly, RHN stacks multiple highway layers on top of the recurrent state transition. Within each highway layer, free weights are redistributed across the input weight and the state weight. The sparsity levels are set the same as ISS, 67.7% and 52.8%. Dropout rates are set to 0.20 for the embedding layer, 0.65 for the input, 0.25 for the hidden units, and 0.65 for the output layer. The model is trained for 500 epochs with a learning rate of 15, a batch size of 20, and a sequence length of 35. At the end of each training epoch, new weights are redistributed across the weights of the H nonlinear transform and the T gate.
D EXPERIMENTAL RESULTS WITH ON-LSTM
Table 6: Single model perplexity on validation and test sets for the Penn Treebank language modeling task with ON-LSTM. Methods with “ASGD” are trained with SNT-ASGD. The numbers reported are averaged over five runs.
Models              #Param   Val            Test
Dense1000           25M      58.29 ± 0.10   56.17 ± 0.12
Dense1300           25M      58.55 ± 0.11   56.28 ± 0.19
SET                 11.3M    65.90 ± 0.08   63.56 ± 0.14
DSR                 11.3M    65.22 ± 0.07   62.55 ± 0.06
SNFS                11.3M    68.00 ± 0.10   65.52 ± 0.15
RigL                11.3M    64.41 ± 0.05   62.01 ± 0.13
RigL1000 (ASGD)     11.3M    59.17 ± 0.08   57.23 ± 0.09
RigL1300 (ASGD)     11.3M    59.10 ± 0.05   57.44 ± 0.15
Selfish-RNN1000     11.3M    58.17 ± 0.06   56.31 ± 0.10
Selfish-RNN1300     11.3M    57.67 ± 0.03   55.82 ± 0.11
Table 7: Single model perplexity on validation and test sets for the WikiText-2 language modeling task with AWD-LSTM-MoS. The baseline is AWD-LSTM-MoS obtained from Yang et al. (2018). Methods with “ASGD” are trained with SNT-ASGD.
Models          #Param   Val     Test
Dense           35M      66.01   63.33
SET             15.6M    72.82   69.61
DSR             15.6M    69.95   66.93
SNFS            15.6M    79.97   76.18
RigL            15.6M    71.36   68.52
RigL (ASGD)     15.6M    68.84   65.18
Selfish-RNN     15.6M    65.96   63.05
Recently proposed by Shen et al. (2019), ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization as ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD as a fine-tuning step once at the 500th epoch, dubbed Selfish-RNN1000. As shown in Table 6, Selfish-RNN outperforms the dense model while reducing the model size to 11.3M. Without SNT-ASGD, sparse training techniques cannot reduce the test perplexity to 60. SNT-ASGD is able to improve the performance of RigL by 5 perplexity points. Moreover, one interesting observation is that one of the regularizations used in standard ON-LSTM, DropConnect, is perfectly compatible with our method, although it also randomly drops hidden-to-hidden weights during training.
In our experiments we observe that Selfish-RNN benefits significantly from a second fine-tuning operation. We extend the learning schedule to 1300 epochs with two fine-tuning operations after 500 and 1000 epochs, respectively, dubbed Selfish-RNN1300. It is interesting that Selfish-RNN1300 achieves lower test perplexity after the second fine-tuning step, whereas the dense model Dense1300 cannot even recover the perplexity it had before the second fine-tuning. A heuristic explanation is that our method helps the optimization escape local optima or saddle points by optimizing the sparse structure, while for dense models, whose energy landscape is fixed, it is very difficult for the optimizer to find its way off a saddle point or out of a local optimum.
E EXPERIMENTAL RESULTS WITH AWD-LSTM-MOS
We also evaluate Selfish-RNN on the WikiText-2 dataset. The model we choose is AWD-LSTM-MoS (Yang et al., 2018), the state-of-the-art RNN-based language model. It replaces the Softmax with a Mixture of Softmaxes (MoS) to alleviate the Softmax bottleneck in modeling natural language. For a fair comparison, we follow exactly the model hyper-parameters and regularization used in AWD-LSTM-MoS. We sparsify all layers at 55% sparsity except for the prior layer, as its number of parameters is negligible. We train our model for 1000 epochs without fine-tuning or dynamic evaluation (Krause et al., 2018) to simply show the effectiveness of our method. As demonstrated in Table 7, Selfish AWD-LSTM-MoS reaches dense performance with 15.6M parameters.
F EFFECT OF SPARSITY
There is a trade-off between the sparsity level S and the test perplexity of Selfish-RNN. When there are too few parameters, the sparse neural network will not have enough capacity to model the data. If the sparsity level is too small, the training acceleration will be small. Here, we analyze this trade-off by varying the sparsity level while keeping the other experimental setup the same, as shown in
Figure 5a. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. Generally, the performance of sparsified models is decreasing as the sparsity level increases.
G EFFECT OF INITIAL REMOVING RATE
The initial removing rate p determines the number of weights removed at each connectivity update. We study the sensitivity of our algorithm to the initial removing rate p by varying it over {0.3, 0.5, 0.7}. We set the sparsity level of each model to the one with the best performance in Figure 5a. Results are shown in Figure 5b. We can clearly see that our method is very robust to the choice of the initial removing rate.
H DIFFERENCE AMONG SET, SELFISH-RNN AND ITERATIVE PRUNING METHODS
The topology update strategy of Selfish-RNN differs from SET in several important ways: (1) we automatically redistribute weights across cell weight tensors for better regularization; (2) we use magnitude-based removal instead of removing a fraction of the smallest positive weights and the largest negative weights; (3) we use uniform initialization rather than a non-uniform sparse distribution like ER or ERK, as it consistently achieves better performance. Additionally, the optimizer proposed in this work, SNT-ASGD, brings substantial perplexity improvements to sparse RNN training.
Figure 6-left gives a high-level overview, from an efficiency perspective, of the difference between Selfish-RNN and iterative pruning techniques (Han et al., 2016; Zhu & Gupta, 2017; Frankle & Carbin, 2019). Conventional pruning-and-retraining techniques usually involve three steps: (1) pre-training a dense model, (2) pruning unimportant weights, and (3) re-training the pruned model to improve performance. The pruning and re-training cycle is carried out at least once, and often several times, depending on the specific algorithm. Therefore, obtaining sparse networks via iterative pruning involves at least pre-training a dense model. Different from these three-step techniques, the FLOPs required by Selfish-RNN are proportional to the density of the model, as it trains a sparse network with a fixed number of parameters throughout training in one single run, without any re-training phases. Moreover, the overhead caused by the adaptive sparse connectivity operation is negligible, as it is performed only once per epoch.
I COMPARISON BETWEEN SELFISH-RNN AND PRUNING
It has been shown by Evci et al. (2020) that while the state-of-the-art sparse training method (RigL) achieves promising performance on CNNs, it fails to match the performance of pruning on RNNs. Given that magnitude pruning has become a widely used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude pruning with stacked LSTMs. The pruning baseline here is the TensorFlow Model Pruning library (Zhu & Gupta, 2017). The results are shown in Figure 6-right.
We can see that Selfish-RNN exceeds the performance of pruning in most cases. An interesting phenomenon is that, with increased sparsity, the performance gap between Selfish-RNN and pruning shrinks. In particular, Selfish-RNN performs worse than pruning when the sparsity level is 95%. This can be attributed to the poor trainability of sparse models at extreme sparsity levels. As noted in Lee et al. (2020), extremely sparse structures can break the dynamical isometry (Saxe et al., 2014) of sparse networks, which degrades their trainability. Unlike sparse training methods, pruning starts from a dense network and thus does not suffer from this problem.
J SPARSE TOPOLOGY DISTANCE MEASUREMENT
Our sparse topology distance measurement considers the unit alignment based on a semi-matching technique introduced by Li et al. (2016) and a graph distance measurement based on graph edit distance (GED) (Sanfeliu & Fu, 1983). More specifically, our measurement includes the following steps:
Step 1: We train two sparse networks with dynamic sparse training on the training dataset and store the sparse topology after each epoch. Let W_l^i be the set of sparse topologies for the l-th layer of network i.
Step 2: Using the saved model, we compute the activation outputs on the test data, O_l^i ∈ R^{n×m}, where n is the number of hidden units and m is the number of samples.
Step 3: We leverage the activations of each layer to match the topologies W_l^i pairwise. We achieve unit matching between a pair of networks by finding, for each unit in one network, the unit in the other network with the maximum correlation.
Step 4: After alignment, we apply graph edit distance (GED) to measure the similarity between the aligned W_l^i. Eventually, the distance is scaled to lie between 0 and 1; the smaller the distance, the more similar the two sparse topologies.
Here, we choose stacked LSTMs on the PTB dataset as a specific case to analyze. Specifically, we train two stacked LSTMs for 100 epochs with different random seeds. We choose a relatively small removing rate of 0.1. We start the alignment at the 5th epoch to ensure a good alignment result, as networks have not learned much at the very beginning of training. We then use the matched order of output tensors to align the pairwise topologies W_l^i.
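A minimal sketch of this procedure is given below. The correlation-based unit alignment follows Steps 2–3; as a stand-in for the scaled GED of Step 4 we count differing edges between the aligned binary masks and normalize by the number of active edges, which is one possible 0–1 scaling and an assumption on our part.

```python
import torch

def align_units(acts_a, acts_b):
    """Match each hidden unit of network A to the most correlated unit of
    network B, using activations of shape (num_units, num_samples)."""
    a = acts_a - acts_a.mean(dim=1, keepdim=True)
    b = acts_b - acts_b.mean(dim=1, keepdim=True)
    corr = (a @ b.t()) / (a.norm(dim=1, keepdim=True) * b.norm(dim=1) + 1e-8)
    return corr.argmax(dim=1)           # permutation to apply to B's units

def topology_distance(mask_a, mask_b, perm):
    """Scaled distance between two aligned sparse connectivities: the number
    of differing edges over the total number of active edges."""
    mask_b_aligned = mask_b[perm]       # reorder B's rows (hidden units)
    diff = (mask_a != mask_b_aligned).sum().item()
    return diff / float(mask_a.sum() + mask_b_aligned.sum())
```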
K TOPOLOGICAL DISTANCE OF GROWTH METHODS
In this section, we empirically illustrate that gradient growth drives different networks into some similar connectivity patterns based on the proposed distance measurement between sparse connectivities. The initial removing rates are set as 0.1 for all training runs in this section. First, we measure the topological distance between two different training runs trained with gradient growth and random growth, respectively, as shown in Figure 7. We can see that, starting with very different sparse connectivity topologies, two networks trained with random growth end up at the same distance, whereas the topological distance between networks trained with gradient growth is continuously decreasing and this tendency is likely to continue as the training goes on. We further report the distance between two networks with same initialization but different training seeds when trained with gradient growth and random growth, respectively. As shown in Figure 8, the distance between sparse networks discovered by gradient growth is smaller than the distance between sparse networks discovered by random growth. These observations are in line with our hypothesis that gradient growth drives networks into some similar structures, whereas random growth explores more sparse structures spanned over the dense networks.
L FLOPS ANALYSIS OF DIFFERENT APPROACHES
We follow the layer-by-layer training-FLOP accounting, based on the sparsity level s_l of each layer, proposed by Evci et al. (2020). We split the process of training a sparse recurrent neural network into two steps: the forward pass and the backward pass.
Forward pass. In order to calculate the loss of the current model given a batch of input data, the output of each layer needs to be computed via a linear transformation and a non-linear activation function. Within each RNN layer, different cell weights are used to regulate information over the sequence, using the output of the previous time step and the input of the current time step.
Backward pass. In order to update the weights, during the backward pass each layer calculates two quantities: the gradient of the loss function with respect to the activations of the previous layer and the gradient of the loss function with respect to its own weights. Therefore, the computational expense of the backward pass is twice that of the forward pass. Given that RNN models usually contain an embedding layer, from which picking a word vector is very cheap, for models not using weight tying we
only count the computations to calculate the gradient of its parameters as the training FLOPs and we omit its inference FLOPs. For models using weight tying, both the training FLOPs and the inference FLOPs are omitted.
Given a specific architecture, we denote by f_D the dense FLOPs required to finish one training iteration and by f_S the corresponding sparse FLOPs (f_S ≈ (1 − S) f_D), where S is the sparsity level. Thus f_S ≪ f_D for very sparse networks. Since different sparse training methods induce different sparse distributions, their FLOPs f_S also differ from each other. We omit the FLOPs used to update the sparse connectivity, as this is performed only once per epoch. Overall, the total FLOPs required for one training update on a single sample are given in Table 8. The training FLOPs of dense-to-sparse methods like ISS and pruning are 3 f_D · s_t, where s_t is the sparsity of the model at iteration t. Since dense-to-sparse methods require training a dense model for a while, their training FLOPs and memory requirements are higher than those of our method. For methods that allow the sparsity of each layer to change dynamically, e.g., DSR and SNFS, we approximate their training FLOPs via their final distribution, as their sparse distribution converges to the final distribution in the first few epochs. The ER distribution causes slightly more inference FLOPs than the uniform distribution because it allocates more weights to the RNN layers than to other layers. SNFS requires extra FLOPs to calculate dense gradients during the backward pass. Although RigL also uses dense gradients to assist weight growth, it only needs to calculate them every ∆T iterations, so its averaged FLOPs are given by (3 f_S ∆T + 2 f_S + f_D) / (∆T + 1). Here, we simply omit the extra FLOPs required by gradient-based growth, as they are negligible compared with the overall training FLOPs.
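A minimal sketch of this per-step accounting (only the cases stated explicitly above; the function name is ours, and the dense-FLOP count f_D is assumed to be given):

```python
def training_flops_per_step(f_dense, density, method="selfish", dT=100):
    """Approximate FLOPs for one training step on a single sample, following
    the accounting above: forward pass = f, backward pass = 2f."""
    f_sparse = density * f_dense            # f_S ≈ (1 - S) f_D
    if method == "dense":
        return 3 * f_dense
    if method == "selfish":                 # fully sparse forward and backward
        return 3 * f_sparse
    if method == "rigl":                    # dense gradients every dT (∆T) iterations
        return (3 * f_sparse * dT + 2 * f_sparse + f_dense) / (dT + 1)
    raise ValueError(f"unknown method: {method}")
```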
For inference, we calculate the inference FLOPs on a single sample based on the final sparse distribution learned by each method.
M FINAL CELL WEIGHT SPARSITY BREAKDOWN
We further study the final sparsity level across cell weights learned automatically by our method. We consistently observe that the weights of forget gates, either the forget gate in the standard LSTM or the master forget gate in ON-LSTM, tend to be sparser than the weights of other gates, whereas the weights of cell gates and output gates are denser than average, as shown in Figure 9. However, there is no big difference between weights in RHN, although the H nonlinear transform weight is slightly sparser than the T gate weight in most RHN layers. This phenomenon is in line with the ablation analysis, where cell weight redistribution does not provide a performance improvement for RHNs. Cell weight redistribution is more important for models with more regulating weights.
N LIMITATION
The aforementioned training benefits have not been fully explored, as off-the-shelf software and hardware have limited support for sparse operations. Unstructured sparsity is difficult to map efficiently onto existing parallel processors. The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks, including Gray et al. (2017); Moradi et al. (2019); Ma et al. (2019); Yang & Ma (2019); Liu et al. (2020b). For instance, NVIDIA introduced the A100 GPU enabling Fine-Grained Structured Sparsity (NVIDIA, 2020), where the sparse structure is enforced by allowing two nonzero values in every four-entry vector to reduce memory storage and bandwidth by almost 2×. We do not claim that Selfish-RNN is the best way to obtain sparse recurrent neural networks, but simply highlight that developing more efficient hardware and software to benefit from sparse neural networks is an important future research direction. | 1. What is the focus of the paper regarding sparse training for recurrent neural networks?
2. What are the strengths of the proposed approach, particularly in terms of experimental setup and analysis?
3. What are the weaknesses of the paper, such as the concern about RNNs being outdated?
4. How does the reviewer assess the novelty and universality of the insights provided by the paper?
5. Are there any questions regarding the necessity of ASGD and its relation to eigenvalues?
6. Any suggestions for improving the clarity of certain parts of the paper? | Review | Review
Summary: The authors improve sparse training for recurrent neural networks by developing a greedy redistribution rule for gates and adapting the ASGD optimizer for sparse networks. The work provides good results and a rich analysis of their own and related methods.
Strong points:
very rigorous experimental setup and analysis
Solid evidence for many new insights into sparse training phenomena. The work broadens our understanding of sparse training.
Weak points:
Some might complain that RNNs are outdated. I see this only as a minor weak point. Indeed, RNNs are not much used anymore, but many of the insights the paper provides are quite universal.
The fixed FLOPs seem to be only a by-product of the algorithm and the particular network structure, not necessarily an algorithmic contribution. This makes the paper a bit confusing.
Recommendation (short): This is a very solid paper with exemplary experimentation and analysis. It provides many unique insights that are very valuable for anyone who wants to work in the field of sparse training. I recommend accepting this paper.
Recommendation (long): I think this paper is one of those papers that is very solid all around. The authors invested quite a bit of time in creating rigorous experimental setups that test hypotheses. In particular, I like the graph analysis of sparse connectivity between networks. The findings on different initialization schemes and on the performance of other sparse training methods are valuable and make the overall literature on sparse training more robust. I can see that this paper may seem a bit boring and less impactful to some reviewers, but good science like this is not about being exciting but about providing rigorous results for a small problem. This paper does exactly that. I think any good conference should encourage good science by accepting papers like this one.
Comments for authors: Solid work. Here are some additional comments and questions.
Please feed your paper through a grammar/spellchecker. There are multiple errors which make the paper hard to read in some sections
It is not entirely clear why ASGD is needed for good performance. Can you elaborate, please?
Do you have any idea how ER initialization relates to the eigenvalues of recurrent matrices? If you can make a connection here, it would be a quite insightful addition to the paper, since the top eigenvalue of the recurrent matrix determines its overall long-term behavior and is known to influence training behavior.
I would drop the fixed FLOPS contribution and focus on the other parts of the paper. You have more than enough contributions, and the space is better devoted to making the other contributions as clear as possible.
The cell weight redistribution algorithm description is unclear. A weight cannot have "more parameters"; I think you mean to say that gate-neurons with large-magnitude weights gain more parameters over time.
The sparse topology algorithm: Is the correlation between weights computed over all test-set outputs between two networks/weights?
Figure 3, unclear. What does Figure 3 (left) show exactly? It is unclear what random initialization means: different sparsity patterns, different weight values, or both? What does the seed do here? Does it affect sparsity pattern, data order, weight values, etc.? |
ICLR | Title
Selfish Sparse RNN Training
Abstract
Sparse neural networks have been widely applied to reduce the resource requirements needed to train and deploy over-parameterized deep neural networks. For inference acceleration, methods that induce sparsity from a pre-trained dense network (dense-to-sparse) work effectively. Recently, dynamic sparse training (DST) has been proposed to train sparse neural networks without pre-training a large and dense network (sparse-to-sparse), so that the training process can also be accelerated. However, previous sparse-to-sparse methods mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs), failing to match the performance of dense-to-sparse methods in the Recurrent Neural Network (RNN) setting. In this paper, we propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance. During training, we allow RNN layers to have a non-uniform redistribution across cell weights for better regularization. Further, we introduce SNT-ASGD, a variant of the averaged stochastic gradient optimizer, which significantly improves the performance of all sparse training methods for RNNs. Using these strategies, we achieve state-of-the-art sparse training results, even better than the dense model results, with various types of RNNs on the Penn TreeBank and WikiText-2 datasets.
1 INTRODUCTION
Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task to evaluate models’ ability to capture long-range context, language modeling has witnessed great progress in RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), ordered neurons LSTM (ON-LSTM) (Shen et al., 2019), and effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), DropConnect (Merity et al., 2018) have been proposed to significantly improve the performance of RNNs.
At the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 Regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate the inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency for both, the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to be sparsified (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with the existing DST approaches. The above-mentioned problems heavily limit the performance
of the off-the-shelf sparse training methods in the RNN field. For instance, while “The Rigged Lottery” (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed number of computational costs throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:
• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters. This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms. • We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged
stochastic gradient descent optimizer, which overcomes the over-sparsified problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training. • We demonstrate state-of-the-art sparse training performance with various RNN models,
including stacked LSTMs (Zaremba et al., 2014), RHNs, ordered neurons LSTM (ONLSTM) on Penn TreeBank (PTB) dataset (Marcus et al., 1993) and AWD-LSTM-MoS on WikiText-2 dataset (Melis et al., 2018). • We present an approach to analyze the evolutionary trajectory of the sparse connectivity
optimized by dynamic sparse training from the perspective of graph. With this approach, we show that there exist many good structural local optima (sparse sub-networks having equally good performance) in RNNs, which can be found in an efficient and robust manner. • Our analysis shows two surprising phenomena in the setting of RNNs contrary to CNNs:
(1) random-based weight growth performs better than gradient-based weight growth, (2) uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures.
2 RELATED WORK
Dense-to-Sparse. There are a large amount of works operating on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people’s attention based on the idea of iterative pruning and retraining with modern architectures. Some recent works were proposed to further reduce the number of iterative retraining e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis showing that the sub-networks (“winning tickets”) obtained via iterative pruning combined with their “lucky” initialization can outperform the dense networks. Zhou et al. (2019) discovered that the sign of their initialization is the crucial factor that
makes the “winning tickets” work. Our work shows that there exists a much more efficient and robust way to find those “winning tickets” without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with the state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With a lower training cost, our method is able to discover sparse networks that achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs. Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structures by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. Some works aim to find sparse neural networks by pruning once prior to the main training phase, based on some saliency criterion such as connection sensitivity (Lee et al., 2019), signal propagation (Lee et al., 2020), or gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of the dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).
Sparse-to-Sparse. Recently, many works have emerged to train intrinsically sparse neural networks from scratch to obtain efficiency both for training and inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works expand static sparse training into CNNs based on expander graphs and show comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET) which initializes a sparse network and dynamically changes the sparse connectivity by a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior. The iterative configuration updates have been proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved the sparse training performance by growing free weights according to their momentum. It requires extra computation and memory to update the dense momentum tensor for each iteration. Further, Evci et al. (2020) introduced RigL which activates weights with the highest magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing a large number of memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over weights. More recently, Liu et al. (2020b) proved the potentials of DST by developing for the first time an independent software framework to train very large truly sparse MLPs trained with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs.
We summarize the properties of all approaches compared in this paper in Table 1. Same with SET, our method can guarantee Backward Sparse, which does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H.
3 SPARSE RNN TRAINING
Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can be easily applied to any other RNN variants. The only difference is the number of cell weights. Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to the dense model at the beginning. To explore more sparse structures, while to maintain a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1.
3.1 DYNAMIC SPARSE CONNECTIVITY
We consider uniform sparse initialization, magnitude weight removal, random weight growth, and cell weight redistribution as the main components of our dynamic sparse connectivity method, which ensures a fixed number of parameters and a purely sparse backward pass, as discussed next. Notation. Given a dataset of N samples D = {(x_i, y_i)}_{i=1}^{N} and a network f(x; θ) parameterized by θ, we train the network to minimize the loss function ∑_{i=1}^{N} L(f(x_i; θ), y_i). The basic mechanism of sparse neural networks is to use a fraction of the parameters to reparameterize the whole network, while preserving the performance as much as possible. Hence, a sparse neural network can be denoted as f_s(x; θ_s) with a sparsity level S = 1 − ‖θ_s‖_0 / ‖θ‖_0, where ‖·‖_0 is the ℓ0-norm. Uniform Sparse Initialization. First, the network is uniformly initialized with a sparse distribution in which the sparsity level of each layer is the same value S. More precisely, the network is initialized by:
θ_s = θ ⊙ M (1)
where θ is a dense weight tensor initialized in a standard way; M is a binary tensor in which the nonzero elements are sampled uniformly based on the sparsity S; and ⊙ refers to the Hadamard product. Magnitude Weight Removal. For non-RNN layers, we use magnitude weight removal followed by random weight growth to update the sparse connectivity. We remove a fraction p of the weights with the smallest magnitude after each training epoch. This step is performed by changing the binary tensor M as follows: M = M − P (2), where P is a binary tensor with the same shape as M, in which the nonzero elements have the same indices as the top-p smallest-magnitude nonzero weights in θ_s, with ‖P‖_0 = p‖M‖_0. Random Weight Growth. To keep a fixed parameter count, we randomly grow the same number of weights immediately after weight removal, by:
M = M + R (3)
where R is a binary tensor whose nonzero elements are randomly located at positions where M is zero. We choose random growth to avoid using any information from the non-existing weights, so that both the feedforward and the backpropagation passes are completely sparse. Such pure sparse structures are desirable because they open the possibility of designing, in the future, specialized hardware accelerators for sparse neural networks. Besides, our analysis of growth methods in Section 4.3 shows that random growth can explore more sparse structural degrees of freedom than gradient growth, which might be crucial to sparse training. Cell Weight Redistribution. Our dynamic sparse connectivity differs from previous methods mainly in cell weight redistribution. For RNN layers, the naive approach is to sparsify all cell weight tensors independently at the same sparsity, as shown in Liu et al. (2019), which is a straightforward extension of SET to RNNs. However, it is more desirable to redistribute new parameters across cell weight tensors dependently, as all cell weight tensors collaborate to regulate information. Intuitively, we redistribute new parameters such that weight tensors containing more large-magnitude weights receive more parameters. Large-magnitude weights indicate that their loss
gradients are large and few oscillations occur. Thus, weight tensors with more large-magnitude connections should be reallocated with more parameters to accelerate training. Concretely, for each RNN layer l, we remove weights dependently given by an ascending sort:
Sort_p(|θ_1^l|, |θ_2^l|, ..., |θ_t^l|) (4)
where {θ_1^l, θ_2^l, ..., θ_t^l} are all weight tensors within each cell, and Sort_p returns the indices of the p smallest-magnitude weights. After weight removal, new parameters are grown uniformly into each weight tensor, so our cell weight redistribution is implemented gradually. We also tried other approaches, including redistribution based on the mean magnitude of nonzero weights or the mean gradient magnitude of nonzero weights, but our approach achieves the best performance, as shown in Appendix B. We further report the final sparsity breakdown of cell weights learned by our method in Appendix M and observe that weights of forget gates are consistently sparser than other weights for all models. Note that redistributing parameters across cell weight tensors does not change the FLOP count, as the sparsity of each layer is unchanged. In contrast, the across-layer weight redistribution used by DSR and SNFS affects the sparsity level of each layer and, as a result, changes the number of floating-point operations (FLOPs).
Similarly to SNFS, we decay the removing rate p to zero with a cosine annealing schedule. We further use Eq. (1) to enforce the sparse structure before the forward pass and after the backward pass, so that zero-valued weights do not contribute to the loss. All newly activated weights are initialized to zero (a sketch of one full connectivity update follows).
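To tie Eqs. (1)–(3) together, here is a minimal sketch of one per-epoch connectivity update for a single non-RNN layer (a simplified illustration under our own naming; the RNN layers additionally pool the removal step across cell tensors, as in Eq. (4)):

```python
import torch

@torch.no_grad()
def connectivity_update(weight, mask, p_frac):
    """One per-epoch update: magnitude removal (Eq. 2) then random growth (Eq. 3),
    keeping the number of parameters fixed."""
    num_remove = int(p_frac * mask.sum().item())
    # Magnitude removal: drop the smallest-magnitude active weights.
    mags = weight.abs().view(-1).clone()
    mags[mask.view(-1) == 0] = float('inf')
    drop = torch.topk(mags, k=num_remove, largest=False).indices
    mask.view(-1)[drop] = 0
    # Random growth: activate the same number of currently-zero positions.
    zero_idx = (mask.view(-1) == 0).nonzero(as_tuple=False).squeeze(1)
    grow = zero_idx[torch.randperm(zero_idx.numel())[:num_remove]]
    mask.view(-1)[grow] = 1
    weight.view(-1)[grow] = 0.0            # newly activated weights start at zero
    # Enforce the sparse structure (Eq. 1) so removed weights cannot contribute.
    weight.mul_(mask)
```

In the RNN layers, the removal indices in the first step are instead selected jointly over all cell weight tensors of the layer (Eq. 4), which is what produces the learned redistribution across gates.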
3.2 SPARSE NON-MONOTONICALLY TRIGGERED ASGD
Non-monotonically Triggered ASGD (NT-ASGD) has been shown to achieve surprisingly strong performance with various RNNs (Merity et al., 2018; Yang et al., 2018; Shen et al., 2019). However, it becomes less appealing for sparse RNN training. Unlike dense networks, in which every parameter is updated at each iteration, in sparse networks the zero-valued weights remain zero while they are not activated. Once these zero-valued weights are activated, the averaging operation of standard NT-ASGD immediately brings them back close to zero. Therefore, after the averaging operation is triggered, the number of valid weights decreases sharply, as shown in Figure 2. To alleviate this problem, we introduce SNT-ASGD as follows:
w̃_i = 0 if m_i = 0;  w̃_i = (∑_{t=T_i}^{K} w_{i,t}) / (K − T_i + 1) if m_i = 1.   (5)
where w̃_i is the value returned by SNT-ASGD for weight w_i; w_{i,t} is the actual value of weight w_i at the t-th iteration; m_i = 1 if weight w_i exists and m_i = 0 if it does not; T_i is the iteration at which weight w_i was most recently grown; and K is the total
number of iterations. We demonstrate the effectiveness of SNT-ASGD in Figure 2. At the beginning, trained with SGD, the number of weights with high magnitude increases fast. However, the trend starts to descend significantly once the optimization switches to NT-ASGD at the 80th epoch, whereas the trend of SNT-ASGD continues to rise after a small drop caused by the averaging operation.
To better understand how the proposed components, cell weight redistribution and SNT-ASGD, improve sparse RNN training performance, we conduct an ablation study in Appendix A. Both lead to significant performance improvements.
4 EXPERIMENTAL RESULTS
We evaluate Selfish-RNN with various models including stacked LSTMs, RHNs, ON-LSTM on the Penn TreeBank dataset and AWD-LSTM-MoS on the WikiText-2 dataset. The performance of Selfish-RNN is compared with 5 state-of-the-art sparse inducing techniques, including Intrinsic Sparse Structures (ISS) (Wen et al., 2018), SET, DSR, SNFS, and RigL. ISS is a method to explore sparsity inside RNNs by using group Lasso regularization. We choose Adam (Kingma & Ba, 2014) optimizer for SET, DSR, SNFS, and RigL. We also evaluate our methods with two state-of-the-art RNN models, ON-LSTM on PTB and AWD-LSTM-MoS on Wikitext-2, as reported in Appendix D and Appendix E, respectively.
4.1 STACKED LSTMS
As introduced by Zaremba et al. (2014), stacked LSTMs (large) is a two-layer LSTM model with 1500 hidden units for each LSTM layer. We choose the same sparsity as ISS, 67% and 62%. We empirically found that 0.7 is a safe choice for the removing rate of stacked LSTMs. The clip norm is set to 0.25 and all models are trained for 100 epochs.
Results are shown in the left side of Table 2. To evaluate our sparse training method fairly, we also provide a new dense baseline trained with the standard NT-ASGD, achieving 6 lower test perplexity than the widely-used baseline. We also test whether a small dense network and a static sparse network
with the same number of parameters as Selfish-RNN can match the performance of Selfish-RNN. We train a dense stacked LSTMs with 700 hidden units, named as “Small”. In line with the previous studies (Mocanu et al., 2018; Mostafa & Wang, 2019; Evci et al., 2020), both static sparse networks and the small-dense network fail to match the performance of Selfish-RNN. Training a static sparse network from scratch with uniform distribution performs better than the one with ER distribution. Trained with Adam, all sparse training techniques fail to match the performance of ISS and dense models. Models trained with SNT-ASGD obtain substantially lower perplexity, and Selfish-RNN achieves the lowest one, even better than the new dense baseline with much fewer training costs.
To understand better the effect of different optimizers on different DST methods, we report the performance of all DST methods trained with Adam, momentum SGD, and SNT-ASGD. The learning rate of Adam is set as 0.001. The learning rate of momentum SGD is 2 decreased by a factor of 1.33 once the loss fails to decrease and the momentum coefficient is 0.9. The weight decay is set as 1.2e-6 for all optimizers. For SNFS (SNT-ASGD), we replace momentum of weights with their gradients, as SNT-ASGD does not involve any momentum terms. We use the same hyperparameters for all DST methods. The results are shown in Table 3. It is clear that SNT-ASGD brings significant perplexity improvements to all sparse training techniques. This further stands as empirical evidence that SNT-ASGD is crucial to improve the sparse training performance in the RNN setting. Moreover, compared with other DST methods, Selfish-RNN is quite robust to the choice of optimizers due to its simple scheme to update sparse connectivity. Advanced strategies such as across-layer weight redistribution used in DSR and SNFS, gradient-based weight growth used in RigL and SNFS heavily depend on optimizers. They might work decently for some optimization methods but may not work for others.
Additionally, note that different DST methods use different sparse distributions, leading to very different computational costs even with the same sparsity. We also report the approximated training and inference FLOPs for all methods. The FLOP gap between Selfish-RNN and RigL is very small, whereas SNFS requires more FLOPs than our method for both training and inference (see Appendix L for details on how FLOPs are calculated). ISS achieves a lower number of FLOPs, since it does not sparsify the embedding layer and therefore, their LSTM layers are much more sparse than LSTM layers obtained by other methods. This would cause a fewer number of FLOPs as LSTM layers typically require more FLOPs than other layers.
4.2 RECURRENT HIGHWAY NETWORKS
Recurrent Highway Networks (Zilly et al., 2017) is a variant of RNNs allowing RNNs to explore deeper architectures inside the recurrent transition. See Appendix C for experimental settings of RHN. The results are shown in the right side of Table 2. Selfish-RNN achieves better performance than the dense model with half FLOPs. Unlike the large FLOP discrepancy of stacked LSTMs, the FLOP gap between different sparse training techniques for RHNs is very small, except SNFS which requires computing dense momentum for each iteration. Additionally, ISS has similar FLOPs with Selfish-RNN for RHN, as it sparsifies the embedding layer as well.
4.3 ANALYZING THE PERFORMANCE OF SELFISH-RNN
Analysis of Evolutionary Trajectory of Sparse Connectivity. The fact that Selfish-RNN consistently achieves good performance with different runs naturally raises some questions: e.g., are final sparse connectivities obtained by different runs similar or very different? Is the distance between the original sparse connectivity and the final sparse connectivity large or small? To answer these questions, we investigate a method based on graph edit distance (GED) (Sanfeliu & Fu, 1983) to measure the topological distance between different sparse connectivities learned by different runs. The distance is scaled between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are (See Appendix J for details on how we measure the sparse topological distance).
The results are demonstrated in Figure 3. Figure 3-left shows how the topology of one randomly initialized network evolves when trained with Selfish-RNN. We compare the topological distance between the sparse connectivity obtained at the 5th epoch and the sparse connectivities obtained in the following epochs. We can see that the distance gradually increases from 0 to a very high value, 0.8, meaning that Selfish-RNN optimizes the initial topology into a very different one after training. Moreover, Figure 3-right illustrates the topological distance between two identically initialized networks trained with different seeds, measured after the 4th epoch. We can see that, starting from the same sparse topology, they evolve to completely different sparse connectivities. Note that even when leading to completely different sparse connectivities, different runs achieve similarly good performance, which indicates that in the case of RNNs there exist many good local optima in terms of sparse connectivity that can have equally good performance. This phenomenon complements the findings of Liu et al. (2020c), which show that there are numerous sparse sub-networks performing similarly well in the context of MLPs.
Analysis of Sparse Initialization. We compare two types of sparse initialization, the ER distribution and the uniform distribution. The uniform distribution enforces the sparsity level of each layer to be the same, namely S. The ER distribution allocates higher sparsity to larger layers than to smaller ones. Note that its variant Erdős-Rényi-kernel, proposed by Evci et al. (2020), scales back to ER for RNNs, as no kernels are involved. The results are shown as the Static group in Table 2. We can see that the uniform distribution outperforms the ER distribution consistently. Moreover, ER usually causes RNN layers to be less sparse than other layers, resulting in a small increase in FLOPs.
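To make the two allocation schemes concrete, the following sketch (illustrative only; the layer names and shapes are hypothetical and this is not the paper's code) assigns every layer the global sparsity S under the uniform scheme, while ER gives each layer a density proportional to (n_in + n_out) / (n_in * n_out), so larger layers end up sparser:

import numpy as np

def uniform_sparsity(shapes, S):
    # Uniform allocation: every layer gets the global sparsity S.
    return {name: S for name in shapes}

def erdos_renyi_sparsity(shapes, S):
    # ER allocation: density of a layer (n_in, n_out) is proportional to
    # (n_in + n_out) / (n_in * n_out); densities are rescaled so that the
    # total number of nonzero weights matches the global sparsity S.
    total_params = sum(n_in * n_out for n_in, n_out in shapes.values())
    budget = (1.0 - S) * total_params
    raw = {name: n_in + n_out for name, (n_in, n_out) in shapes.items()}
    scale = budget / sum(raw.values())
    sparsity = {}
    for name, (n_in, n_out) in shapes.items():
        density = min(1.0, scale * raw[name] / (n_in * n_out))  # overflow handling simplified
        sparsity[name] = 1.0 - density
    return sparsity

# Hypothetical layer shapes (n_in, n_out) for a stacked-LSTM language model.
shapes = {"embedding": (10000, 1500), "lstm1": (3000, 6000),
          "lstm2": (3000, 6000), "decoder": (1500, 10000)}
print(uniform_sparsity(shapes, 0.67))
print(erdos_renyi_sparsity(shapes, 0.67))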
Analysis of Growth Methods. Methods that leverage gradient-based weight growth (SNFS and RigL) have shown superiority over methods using random-based weight growth for CNNs. However, we observe a different behavior with RNNs. We set up a controlled experiment to compare these two methods with SNT-ASGD and momentum SGD. We report the results with various update intervals (the number of iterations between sparse connectivity updates) in Figure 4. Surprisingly, gradient-based growth performs worse than random-based growth in most cases, and the performance gap widens as the update interval increases. Our hypothesis is that random growth helps to explore the search space better, as it naturally considers a large number of various sparse connectivities during training, which is crucial to the performance of dynamic sparse training. Differently, gradient growth drives the network topology towards similar local optima for the sparse connectivity, as it uses a greedy search strategy (highest gradient magnitude) at every topological change. However, the benefits provided by high-magnitude gradients might change dynamically afterwards due to complicated interactions between weights. We empirically illustrate our hypothesis via the proposed distance measure between sparse connectivities in Appendix K.
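For clarity, here is a minimal PyTorch sketch of one prune-and-grow step on a single weight matrix, contrasting the two growth criteria; it assumes the common mask-over-dense-weights simulation of sparse training and is an illustration, not the paper's implementation:

import torch

def update_connectivity(weight, mask, grad, removing_rate, growth="random"):
    # weight, mask and grad share one shape; mask is a {0, 1} float tensor
    # simulating the sparse connectivity of this layer.
    n_active = int(mask.sum().item())
    n_change = int(removing_rate * n_active)

    # Removal: zero out the active weights with the smallest magnitude.
    magnitudes = torch.where(mask.bool(), weight.abs(),
                             torch.full_like(weight, float("inf")))
    drop = torch.topk(magnitudes.view(-1), n_change, largest=False).indices
    mask.view(-1)[drop] = 0.0

    # Growth: re-activate the same number of inactive positions
    # (a full implementation would typically exclude the just-removed ones).
    inactive = mask == 0
    if growth == "random":
        candidates = torch.nonzero(inactive.view(-1)).squeeze(1)
        grow = candidates[torch.randperm(candidates.numel())[:n_change]]
    else:  # "gradient": pick inactive positions with the largest gradient magnitude
        scores = torch.where(inactive, grad.abs(), torch.full_like(grad, -1.0))
        grow = torch.topk(scores.view(-1), n_change).indices
    mask.view(-1)[grow] = 1.0
    weight.view(-1)[grow] = 0.0   # newly grown weights start from zero
    return mask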
Analysis of Hyper-parameters. The sparsity S and the initial removing rate p are two hyperparameters of our method. We show their sensitivity analysis in Appendix F and Appendix G. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. And our method is quite robust to the choice of the initial removing rate.
5 CONCLUSION
In this paper, we proposed an approach to train sparse RNNs from scratch with a fixed parameter count throughout training. Further, we introduced SNT-ASGD, a sparse optimizer specially designed for training sparse RNNs, and we showed that it substantially improves the performance of all dynamic sparse training methods in RNNs. We observed that random-based growth achieves lower perplexity than gradient-based growth in the case of RNNs. Further, we developed an approach to compare two different sparse connectivities from the perspective of graph theory. Using this approach, we found that random-based growth explores the topological search space for optimal sparse connectivities better, whereas gradient-based growth is prone to drive the network towards similar sparse connectivity patterns, opening the path to a better understanding of sparse training.
A ABLATION STUDY
To verify whether the improvement shown above is caused by the cell weight redistribution or by Sparse NT-ASGD, we conduct an ablation study for all architectures. To avoid confounding factors, all models use the same hyper-parameters as the ones reported in the paper, and fine-tuning is still applied. We present the validation and testing perplexity for variants of our model without these two contributions in Table 4. Not surprisingly, removing either of these two novelties degrades the performance. There is a significant degradation in performance for all models, up to 13 perplexity points, if the optimizer is switched to the standard NT-ASGD. This stands as empirical evidence for the benefit of SNT-ASGD. Without cell weight redistribution, the testing perplexity also rises. The only exception is RHN, whose number of redistributed weights in each layer is only two. This empirically shows that cell weight redistribution is more effective for models with more cell weights.
B COMPARISON OF DIFFERENT CELL WEIGHT REDISTRIBUTION METHODS
In Table 5, we conduct a small experiment with stacked LSTMs to compare different cell weight redistribution criteria, including redistribution based on the mean magnitude of the nonzero weights of each cell weight and redistribution based on the mean gradient magnitude of the nonzero weights.
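As a rough sketch of the magnitude-based criterion (the gate names and the proportional rule below are our own illustration of the idea, not the released code), the nonzero-weight budget of a recurrent layer can be reallocated across its gate matrices in proportion to the mean magnitude of their nonzero entries:

import numpy as np

def redistribute_cell_weights(masks, weights):
    # masks/weights: dicts mapping a gate name to its mask/weight array for one
    # recurrent layer. Returns how many nonzero weights each gate should hold
    # after redistribution, proportional to the mean magnitude of its nonzeros.
    total_nonzero = sum(int(m.sum()) for m in masks.values())
    importance = {}
    for gate, w in weights.items():
        nonzero = w[masks[gate] == 1]
        importance[gate] = float(np.abs(nonzero).mean()) if nonzero.size else 0.0
    norm = sum(importance.values())
    return {gate: int(round(total_nonzero * imp / norm))
            for gate, imp in importance.items()}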
C EXPERIMENTAL DETAILS FOR RHN
Recurrent Highway Networks (Zilly et al., 2017) are a variant of RNNs that allows exploring deeper architectures inside the recurrent transition. Instead of stacking recurrent layers directly, RHN stacks multiple highway layers on top of the recurrent state transition. Within each highway layer, free weights are redistributed across the input weight and the state weight. The sparsity levels are set the same as for ISS, 67.7% and 52.8%. Dropout rates are set to 0.20 for the embedding layer, 0.65 for the input, 0.25 for the hidden units, and 0.65 for the output layer. The model is trained for 500 epochs with a learning rate of 15, a batch size of 20, and a sequence length of 35. At the end of each training epoch, new weights are redistributed across the weights of the H nonlinear transform and the T gate.
D EXPERIMENTAL RESULTS WITH ON-LSTM
Table 6: Single model perplexity on validation and test sets for the Penn Treebank language modeling task with ON-LSTM. Methods with “ASGD” are trained with SNT-ASGD. The numbers reported are averaged over five runs.
Models #Param Val Test
Dense1000          25M     58.29 ± 0.10    56.17 ± 0.12
Dense1300          25M     58.55 ± 0.11    56.28 ± 0.19
SET                11.3M   65.90 ± 0.08    63.56 ± 0.14
DSR                11.3M   65.22 ± 0.07    62.55 ± 0.06
SNFS               11.3M   68.00 ± 0.10    65.52 ± 0.15
RigL               11.3M   64.41 ± 0.05    62.01 ± 0.13
RigL1000 (ASGD)    11.3M   59.17 ± 0.08    57.23 ± 0.09
RigL1300 (ASGD)    11.3M   59.10 ± 0.05    57.44 ± 0.15
Selfish-RNN1000    11.3M   58.17 ± 0.06    56.31 ± 0.10
Selfish-RNN1300    11.3M   57.67 ± 0.03    55.82 ± 0.11
Table 7: Single model perplexity on validation and test sets for the WikiText-2 language modeling task with AWD-LSTMMoS. Baseline is AWD-LSTM-MoS obtained from Yang et al. (2018). Methods with “ASGD” are trained with SNTASGD.
Models #Param Val Test
Dense          35M     66.01   63.33
SET            15.6M   72.82   69.61
DSR            15.6M   69.95   66.93
SNFS           15.6M   79.97   76.18
RigL           15.6M   71.36   68.52
RigL (ASGD)    15.6M   68.84   65.18
Selfish-RNN    15.6M   65.96   63.05
Recently proposed by Shen et al. (2019), ON-LSTM can learn the latent tree structure of natural language by learning the order of neurons. For a fair comparison, we use exactly the same model hyper-parameters and regularization used in ON-LSTM. We set the sparsity of each layer to 55% and the initial removing rate to 0.5. We train the model for 1000 epochs and rerun SNT-ASGD as a fine-tuning step once at the 500th epoch, dubbed Selfish-RNN1000. As shown in Table 6, Selfish-RNN outperforms the dense model while reducing the model size to 11.3M. Without SNT-ASGD, sparse training techniques cannot reduce the test perplexity to 60. SNT-ASGD improves the performance of RigL by about 5 perplexity points. Moreover, one interesting observation is that one of the regularizations used in the standard ON-LSTM, DropConnect, is perfectly compatible with our method, even though it also randomly drops hidden-to-hidden weights during training.
In our experiments we observe that Selfish-RNN benefits significantly from a second fine-tuning operation. We scale the learning schedule to 1300 epochs with two fine-tuning operations after 500 and 1000 epochs, respectively, dubbed Selfish-RNN1300. It is interesting that Selfish-RNN1300 achieves lower testing perplexity after the second fine-tuning step, whereas the dense model Dense1300 cannot even recover the perplexity it had before the second fine-tuning. The heuristic explanation here is that our method helps the optimization escape local optima or saddle points by optimizing the sparse structure, while for dense models, whose energy landscape is fixed, it is very difficult for the optimizer to find its way off a saddle point or out of a local optimum.
E EXPERIMENTAL RESULTS WITH AWD-LSTM-MOS
We also evaluate Selfish-RNN on the WikiText-2 dataset. The model we choose is AWD-LSTM-MoS (Yang et al., 2018), which is the state-of-the-art RNN-based language model. It replaces the Softmax with a Mixture of Softmaxes (MoS) to alleviate the Softmax bottleneck in modeling natural language. For a fair comparison, we exactly follow the model hyper-parameters and regularization used in AWD-LSTM-MoS. We sparsify all layers with 55% sparsity except for the prior layer, as its number of parameters is negligible. We train our model for 1000 epochs without fine-tuning or dynamic evaluation (Krause et al., 2018), simply to show the effectiveness of our method. As demonstrated in Table 7, Selfish AWD-LSTM-MoS reaches dense performance with 15.6M parameters.
F EFFECT OF SPARSITY
There is a trade-off between the sparsity level S and the test perplexity of Selfish-RNN. When there are too few parameters, the sparse neural network will not have enough capacity to model the data. If the sparsity level is too small, the training acceleration will be small. Here, we analyze this trade-off by varying the sparsity level while keeping the other experimental setup the same, as shown in
Figure 5a. We find that Selfish Stacked LSTMs, RHNs, ON-LSTM, and AWD-LSTM-MoS need around 25%, 40%, 45%, and 40% parameters to reach the performance of their dense counterparts, respectively. Generally, the performance of sparsified models is decreasing as the sparsity level increases.
G EFFECT OF INITIAL REMOVING RATE
The initial removing rate p determines the number of removed weights at each connectivity update. We study the performance sensitivity of our algorithm to the initial removing rate p by varying it in {0.3, 0.5, 0.7}. We set the sparsity level of each model to the one with the best performance in Figure 5a. Results are shown in Figure 5b. We can clearly see that our method is very robust to the choice of the initial removing rate.
H DIFFERENCE AMONG SET, SELFISH-RNN AND ITERATIVE PRUNING METHODS
The topology update strategy of Selfish-RNN differs from SET in several important ways: (1) we automatically redistribute weights across cell weights for better regularization; (2) we use magnitude-based removal instead of removing a fraction of the smallest positive weights and the largest negative weights; (3) we use uniform initialization rather than a non-uniform sparse distribution like ER or ERK, as it consistently achieves better performance. Additionally, the optimizer proposed in this work, SNT-ASGD, brings a substantial perplexity improvement to sparse RNN training.
Figure 6-left illustrates, from an efficiency perspective, a high-level overview of the difference between Selfish-RNN and iterative pruning techniques (Han et al., 2016; Zhu & Gupta, 2017; Frankle & Carbin, 2019). The conventional pruning and re-training techniques usually involve three steps: (1) pre-training a dense model, (2) pruning unimportant weights, and (3) re-training the pruned model to improve performance. The pruning and re-training cycles can be iterated. This iteration takes place at least once, but it may also take place several times depending on the specific algorithm used. Therefore, the sparse networks obtained via iterative pruning at least involve pre-training a dense model. Different from these three-step techniques, the FLOPs required by Selfish-RNN are proportional to the density of the model, as it allows us to train a sparse network with a fixed number of parameters throughout training in a single run, without any re-training phases. Moreover, the overhead caused by the adaptive sparse connectivity operation is negligible, as it is performed only once per epoch.
I COMPARISON BETWEEN SELFISH-RNN AND PRUNING
Evci et al. (2020) have shown that while the state-of-the-art sparse training method RigL achieves promising performance for CNNs, it fails to match the performance of pruning in RNNs. Given that magnitude pruning has become a widely-used and strong baseline for model compression, we also report a comparison between Selfish-RNN and iterative magnitude pruning with stacked LSTMs. The pruning baseline here is the TensorFlow Model Pruning library (Zhu & Gupta, 2017). The results are demonstrated in Figure 6-right.
We can see that Selfish-RNN exceeds the performance of pruning in most cases. An interesting phenomenon is that, with increased sparsity, we see a decreased performance gap between Selfish-RNN and pruning. In particular, Selfish-RNN performs worse than pruning when the sparsity level is 95%. This can be attributed to the poor trainability of sparse models at extreme sparsity levels. As noted by Lee et al. (2020), an extremely sparse structure can break the dynamical isometry (Saxe et al., 2014) of sparse networks, which degrades their trainability. Different from sparse training methods, pruning starts from a dense network and thus does not have this problem.
J SPARSE TOPOLOGY DISTANCE MEASUREMENT
Our sparse topology distance measurement considers the unit alignment based on a semi-matching technique introduced by Li et al. (2016) and a graph distance measurement based on graph edit distance (GED) (Sanfeliu & Fu, 1983). More specifically, our measurement includes the following steps:
Step 1: We train two sparse networks with dynamic sparse training on the training dataset and store the sparse topology after each epoch. Let W_l^i be the set of sparse topologies for the l-th layer of network i.
Step 2: Using the saved model, we compute the activation outputs on the test data, O_l^i ∈ R^{n×m}, where n is the number of hidden units and m is the number of samples.
Step 3: We leverage the unit activations of each layer to match the topologies W_l^i pairwise. We achieve unit matching between a pair of networks by finding, for each unit in one network, the unit in the other network with the maximum correlation.
Step 4: After alignment, we apply graph edit distance (GED) to measure the similarity between the pairwise W_l^i. Eventually, the distance is scaled to lie between 0 and 1. The smaller the distance is, the more similar the two sparse topologies are.
Here, we choose stacked LSTMs on the PTB dataset as a specific case to analyze. Specifically, we train two stacked LSTMs for 100 epochs with different random seeds. We choose a relatively small removing rate of 0.1. We start alignment at the 5th epoch to ensure a good alignment result, as at the very beginning of training the networks have not learned very much yet. We then use the matched order of output tensors to align the pairwise topologies W_l^i.
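A simplified NumPy sketch of this measurement is given below; the correlation-based matching and the edge-difference form of the scaled graph edit distance are our paraphrase of Steps 1-4 above, not the released code.

import numpy as np

def match_units(act_a, act_b):
    # Semi-matching: for each hidden unit of network A, find the unit of
    # network B whose activations over the test samples correlate the most.
    a = (act_a - act_a.mean(1, keepdims=True)) / (act_a.std(1, keepdims=True) + 1e-8)
    b = (act_b - act_b.mean(1, keepdims=True)) / (act_b.std(1, keepdims=True) + 1e-8)
    corr = a @ b.T / act_a.shape[1]          # (n_units_a, n_units_b)
    return corr.argmax(axis=1)               # permutation applied to network B

def topology_distance(mask_a, mask_b, perm):
    # Scaled graph edit distance between two binary connectivity masks after
    # the hidden units of network B have been permuted to align with network A.
    aligned_b = mask_b[perm]
    differing = np.sum(mask_a != aligned_b)   # edge insertions + deletions
    return differing / (mask_a.sum() + aligned_b.sum())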
K TOPOLOGICAL DISTANCE OF GROWTH METHODS
In this section, we empirically illustrate that gradient growth drives different networks into similar connectivity patterns, based on the proposed distance measure between sparse connectivities. The initial removing rates are set to 0.1 for all training runs in this section. First, we measure the topological distance between two different training runs trained with gradient growth and with random growth, respectively, as shown in Figure 7. We can see that, starting with very different sparse connectivity topologies, two networks trained with random growth end up at the same distance, whereas the topological distance between networks trained with gradient growth is continuously decreasing, and this tendency is likely to continue as training goes on. We further report the distance between two networks with the same initialization but different training seeds when trained with gradient growth and random growth, respectively. As shown in Figure 8, the distance between sparse networks discovered by gradient growth is smaller than the distance between sparse networks discovered by random growth. These observations are in line with our hypothesis that gradient growth drives networks into similar structures, whereas random growth explores more of the sparse structures spanned over the dense network.
L FLOPS ANALYSIS OF DIFFERENT APPROACHES
We follow the approach of Evci et al. (2020) and calculate training FLOPs layer by layer based on the layer sparsity s_l. We split the process of training a sparse recurrent neural network into two steps: the forward pass and the backward pass.
Forward pass. In order to calculate the loss of the current model given a batch of input data, the output of each layer needs to be calculated via a linear transformation followed by a non-linear activation function. Within each RNN layer, different cell weights are used to regulate the information flow across the sequence, using the output of the previous time step and the input of the current time step.
Backward pass. In order to update the weights, each layer calculates two quantities during the backward pass: the gradient of the loss function with respect to the activations of the previous layer and the gradient of the loss function with respect to its own weights. Therefore, the computational expense of the backward pass is twice that of the forward pass. Given that RNN models usually contain an embedding layer from which it is very efficient to look up a word vector, for models not using weight tying, we
only count the computations to calculate the gradient of its parameters as the training FLOPs and we omit its inference FLOPs. For models using weight tying, both the training FLOPs and the inference FLOPs are omitted.
Given a specific architecture, we denote f_D as the dense FLOPs required to finish one training iteration and f_S as the corresponding sparse FLOPs (f_S ≈ (1 − S) f_D), where S is the sparsity level. Thus f_S ≪ f_D for very sparse networks. Since different sparse training methods produce different sparse distributions, their FLOPs f_S also differ from each other. We omit the FLOPs used to update the sparse connectivity, as this update is only performed once per epoch. Overall, the total FLOPs required for one training update on a single sample are given in Table 8. The training FLOPs of dense-to-sparse methods, like ISS and pruning, are 3 f_D · s_t, where s_t is the sparsity of the model at iteration t. Since dense-to-sparse methods require training a dense model for a while, their training FLOPs and memory requirements are higher than those of our method. For methods that allow the sparsity of each layer to change dynamically, e.g., DSR and SNFS, we approximate their training FLOPs via their final distribution, as their sparse distribution converges to the final distribution in the first few epochs. The ER distribution causes slightly more inference FLOPs than the uniform distribution because it allocates more weights to the RNN layers than to other layers. SNFS requires extra FLOPs to calculate dense gradients during the backward pass. Although RigL also uses dense gradients to assist weight growth, it only needs to calculate them every ΔT iterations, thus its averaged FLOPs are given by (3 f_S ΔT + 2 f_S + f_D) / (ΔT + 1). Here, we simply omit the extra FLOPs required by gradient-based growth, as they are negligible compared with the total training FLOPs.
For inference, we calculate the inference FLOPs on single sample based on the final sparse distribution learned by different methods.
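The per-update accounting above can be written in a few lines; the sketch below is only an approximation of the formulas discussed in this appendix (in particular, the SNFS expression 2·f_S + f_D is our reading of "dense gradients in the backward pass" and may differ slightly from Table 8), and f_dense and sparsity are hypothetical inputs.

def sparse_training_flops(f_dense, sparsity, method="selfish", delta_t=100):
    # Approximate FLOPs for one training update on a single sample: the forward
    # pass costs f_S = (1 - sparsity) * f_dense, the backward pass roughly 2*f_S.
    f_sparse = (1.0 - sparsity) * f_dense
    if method == "selfish":          # SET and DSR behave analogously with their own f_S
        return 3 * f_sparse
    if method == "rigl":
        # Dense gradients are only computed every delta_t iterations.
        return (3 * f_sparse * delta_t + 2 * f_sparse + f_dense) / (delta_t + 1)
    if method == "snfs":
        # Dense gradients are computed at every iteration.
        return 2 * f_sparse + f_dense
    raise ValueError(f"unknown method: {method}")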
M FINAL CELL WEIGHT SPARSITY BREAKDOWN
We further study the final sparsity levels across cell weights learned automatically by our method. We consistently observe that the weights of forget gates, either the forget gate in the standard LSTM or the master forget gate in ON-LSTM, tend to be sparser than the weights of other gates, whereas the weights of cell gates and output gates are denser than the average, as shown in Figure 9. However, there is no big difference between weights in RHN, although the H nonlinear transform weight is slightly sparser than the T gate weight in most RHN layers. This phenomenon is in line with the ablation analysis, where cell weight redistribution does not provide a performance improvement for RHNs. Cell weight redistribution is more important for models with more regulating weights.
N LIMITATION
The aforementioned training benefits have not been fully explored, as off-the-shelf software and hardware have limited support for sparse operations. Unstructured sparsity is difficult to map efficiently onto existing parallel processors. The results of our paper provide motivation for new types of hardware accelerators and libraries with better support for sparse neural networks. Nevertheless, many recent works have been developed to accelerate sparse neural networks, including Gray et al. (2017); Moradi et al. (2019); Ma et al. (2019); Yang & Ma (2019); Liu et al. (2020b). For instance, NVIDIA introduced the A100 GPU enabling Fine-Grained Structured Sparsity (NVIDIA, 2020). The sparse structure is enforced by allowing two nonzero values in every four-entry vector, reducing memory storage and bandwidth by almost 2×. We do not claim that Selfish-RNN is the best way to obtain sparse recurrent neural networks; we simply highlight that developing more efficient hardware and software to benefit from sparse neural networks is an important future research direction.

1. What are the main contributions of the paper regarding sparsity exploration in Recurrent Neural Networks (RNNs) training?
2. What are the strengths and weaknesses of the proposed Selfish-RNN training algorithm and SNT-ASGD optimizer?
3. Do you have any concerns or questions about the paper's claims, particularly regarding the novelty of the approach and the choice of optimizers?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any specific aspects of the paper that the reviewer would like the authors to provide more information or explanation on?
Review
In this paper, the authors studied the possibility of sparsity exploration in Recurrent Neural Network (RNN) training. The main contributions include two parts: (1) the Selfish-RNN training algorithm in Section 3.1 and (2) the SNT-ASGD optimizer in Section 3.2. The key idea of the Selfish-RNN training algorithm is a non-uniform redistribution across cell weights for better regularization. The authors mentioned that previous sparse training techniques mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs) rather than RNNs. This claim seems doubtful because one-time SVD + fine-tuning usually works very well for most RNN training applications in industry.
Overall, this paper is carefully written and provides some interesting empirical results. However, due to the lack of some important information, it is hard to evaluate the contribution of this paper.
Here are some of my questions.
SNT-ASGD needs to save the weights w_{i,t} from iteration T_i to iteration K; will that cost additional memory?
The authors mentioned that they picked Adam optimizer for SET, DSR, SNFS, and RigL. Is Adam the best optimizer to build a strong baseline? I suspect Adam may not be the best optimizer for each of them.
The authors need to give more information on the hyper-parameters, such as the learning rate. The selection of hyper-parameters usually significantly affects the convergence/generalization performance of an RNN model. For example, the learning rate decay schedule has a big impact on performance when training on the Penn Treebank dataset.
Can the authors report the training epochs and wall-clock time (e.g., in Table 2)? Sparsity typically makes modern hardware like GPUs perform poorly, which may be a concern; that is why researchers are studying structured sparsity. For future work, an analysis of the computation (FLOPs) to communication (memory access frequency) ratio seems necessary.
ICLR | Title
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
Abstract
Syntax is a powerful abstraction for language understanding. Many downstream tasks require segmenting input text into meaningful constituent chunks (e.g., noun phrases or entities); more generally, models for learning semantic representations of text benefit from integrating syntax in the form of parse trees (e.g., treeLSTMs). Supervised parsers have traditionally been used to obtain these trees, but lately interest has increased in unsupervised methods that induce syntactic representations directly from unlabeled text. To this end, we propose the deep insideoutside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree. Unlike many prior approaches, DIORA does not rely on supervision from auxiliary downstream tasks and is thus not constrained to particular domains. Furthermore, competing approaches do not learn explicit phrase representations along with tree structures, which limits their applicability to phrase-based tasks. Extensive experiments on unsupervised parsing, segmentation, and phrase clustering demonstrate the efficacy of our method. DIORA achieves the state of the art in unsupervised parsing (46.9 F1) on the benchmark WSJ dataset.
1 INTRODUCTION
Syntax in the form of parse trees is an essential component of many natural language processing tasks. Constituent spans taken from a parse tree are useful for tasks such as relation extraction (Verga et al., 2016) and semantic role labeling (Strubell et al., 2018), while the full parse itself can be used to build higher-quality systems for machine translation (Aharoni and Goldberg, 2017) and text classification (Tai et al., 2015). Supervised parsers trained on datasets such as the Penn Treebank (Marcus et al., 1994) are traditionally used to obtain these trees; however, these datasets are generally small and restricted to the newswire domain. For out-of-domain applications, it is generally infeasible to create new treebanks, as syntactic annotation is expensive and time-consuming.
Motivated by these limitations, we propose a method that extracts both shallow parses (i.e., noun phrases or entities) and full syntactic trees from any domain or language automatically without any training data. In addition to just producing the parse, we want our model to build representations for internal constituents that obey syntactic and semantic regularities, as we can then easily inject these representations into downstream tasks. Our model extends existing work on latent tree chart parsers (Le and Zuidema, 2015; Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018), which build up representations for all internal nodes in the tree (cells in the chart) generated by a soft weighting over all possible sub-trees (Section 2).
In previous work, the representation at the root node is used as a sentence encoding and trained to optimize some downstream task, typically natural language inference. Unfortunately, this method requires sentence-level annotations to train the model. Worse still, analysis of the trees learned by these models shows that they are actually quite poor at capturing syntax that in any way resembles linguistic theory (Williams et al., 2018a). To address these issues, we incorporate the inside-outside algorithm (Baker, 1979; Lari and Young, 1990) into a latent tree chart parser. The bottom-up inside step is equivalent to the forward pass of previous latent tree chart parsers (Maillard et al., 2017). However, these inside representations are encoded by looking only within the current subtree, completely ignoring outside context. Thus, we perform an additional top-down outside calculation for each node in the tree, incorporating external context into sub-tree representations. Finally, we train
the outside representations of leaves to reconstruct the initial input, which results in a completely unsupervised autoencoder-like objective.
Recently, Shen et al. (2018) proposed Parsing-Reading-Predict Networks (PRPN), an RNN based language model with an additional module for inferring syntactic distance. After training, this syntax module can be decomposed to recover a parse (Htut et al., 2018) via a complex mechanism that involves modeling a distribution over possible syntactic structures with a stick-breaking process. Like DIORA, this model can be trained in a completely unsupervised manner. However, it has no mechanism of explicitly modeling phrases, and span representations can only be generated by post-hoc heuristics. Additionally, finding the most probable tree in DIORA is much simpler than in PRPN, as we can just run the CKY algorithm.
To probe different properties of our model, we run experiments on unsupervised parsing, segmentation, and phrase representations. DIORA sets the state of the art for unsupervised parsing on the WSJ dataset, has greater recall on more constituent types than PRPN, and demonstrates strong clustering of phrase representations.
2 DIORA: DEEP INSIDE-OUTSIDE RECURSIVE AUTO-ENCODER
Our goal is to build an unsupervised model which can automatically discover syntactic structure from raw text. The hypothesis that our model follows is that the most efficient compression of a sentence will be derived from following the true syntactic structure of the underlying input. Our model is an extension of latent tree chart parsers augmented with the inside-outside algorithm (Baker, 1979; Lari and Young, 1990) and trained as an auto-encoder. Based on our hypothesis, the auto-encoder will best reconstruct the input by discovering and exploiting syntactic regularities of the text.
The inside phase of our method recursively compresses the input sequence into a single vector representing the sentence (Section 2.1.1). This is analogous to the compression step of an autoencoder and equivalent to the forward pass of existing latent tree chart parsers. Following this, we initiate the outside phase of our algorithm with a generic sentence (root) representation, which is trained as part of the model parameters. As in the outside step of the inside-outside algorithm (Section 2.1.2), we expand outward until finally producing reconstructed representations of the leaf nodes. These reconstructed leaves are then optimized to reconstruct the input sentence, as done in an auto-encoder based deep neural network (Section 2.2).
2.1 FILLING THE CHART WITH INSIDE-OUTSIDE
Each inside representation of a given sub-tree is built considering only the children constituents of that sub-tree, independent of any outside context. After the inside representations are calculated, we do a top-down outside pass to compute outside representations. The outside representations are encoded by looking at the context of a given sub-tree. In the end, each cell in the chart will contain an inside vector, inside compatibility score, outside vector, and outside compatibility score.
Once the chart is filled, each constituent k (cell in the chart) is associated with an inside vector $\alpha^{vec}_k$, an outside vector $\beta^{vec}_k$, an inside score $\alpha^{score}_k$, and an outside score $\beta^{score}_k$.
Assuming that the input to our model is a sentence X made up of T tokens, $x_0, x_1, \dots, x_T$, we describe the inside and outside phases of our algorithm in the following Sections 2.1.1 and 2.1.2. Each token $x_i$ has a corresponding pre-trained d-dimensional embedding vector $v_i$.
2.1.1 INSIDE PHASE
For each pair of neighboring constituents i and j, we compute a compatibility score $\alpha^{score}_k$ and a composition vector $\alpha^{vec}_k$. The score and vector that represent a particular span k are computed using a soft weighting over all possible pairs of constituents that together cover the span entirely (we refer to this set of constituent pairs as $\{k\}$). Vectors for spans of length 1 are initialized using a linear transformation of the embedded input $v_i$, and the scores associated with these spans are set to 0:
$$\alpha^{vec}_k = W_{in} v_k^{\top} \quad (1) \qquad\qquad \alpha^{score}_k = 0 \quad (2)$$
For higher levels of the chart we use:
$$\alpha^{vec}_k = \sum_{i,j \in \{k\}} e^{\mathrm{compat}(i,j)} \, \mathrm{compose}(i, j) \quad (3)$$
$$\alpha^{score}_k = \sum_{i,j \in \{k\}} e^{\mathrm{compat}(i,j)} \, \mathrm{compat}(i, j) \quad (4)$$
The compatibility function compat is a bilinear function of the vectors from neighboring spans, adding their scores:
$$\mathrm{compat}(i, j) = \alpha^{vec}_i \, S^{in\top} \alpha^{vec\top}_j + \alpha^{score}_i + \alpha^{score}_j \quad (5)$$
And the composition function compose is a TreeLSTM (Tai et al., 2015), which produces a hidden state vector h and a cell state vector c:
$$\mathrm{compose}(i, j) = \mathrm{TreeLSTM}^{in}(\alpha^{vec}_i, \alpha^{vec}_j) = \begin{bmatrix} h \\ c \end{bmatrix} \quad (6)$$
Where the TreeLSTM is defined as follows:
$$\begin{bmatrix} i \\ f_i \\ f_j \\ u \\ o \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \\ \sigma \end{bmatrix} \left( U \begin{bmatrix} h_i & h_j \end{bmatrix}^{\top} + b + \begin{bmatrix} 0 \\ \omega \\ \omega \\ 0 \\ 0 \end{bmatrix} \right) \quad (7)$$
$$c = c_i \odot \sigma(f_i) + c_j \odot \sigma(f_j) + \tanh(u) \odot \sigma(i) \quad (8)$$
$$h = \sigma(o) \odot \tanh(c) \quad (9)$$
The constant ω is set to 1 for the inside phase and 0 for the outside phase. The parameters U and b are not shared between the inside phase and outside phase.
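A minimal sketch of the inside computation for one span is shown below (following Equations 3-5); for brevity it replaces the TreeLSTM composition with a simple elementwise stand-in and normalizes the e^{compat} weights with a softmax for numerical stability, so it is an illustration rather than the exact model.

import torch

def inside_step(pairs, S_in):
    # pairs: list of (a_vec_i, a_score_i, a_vec_j, a_score_j) for every split of
    # span k into two children. Returns the inside vector and score of span k.
    compat_scores, composed = [], []
    for a_i, s_i, a_j, s_j in pairs:
        compat = a_i @ S_in @ a_j + s_i + s_j            # Eq. 5: bilinear term plus child scores
        compat_scores.append(compat)
        composed.append(torch.tanh(a_i + a_j))           # stand-in for the TreeLSTM compose(i, j)
    # e^{compat(i,j)} weighting of Eqs. 3-4, normalized over the candidate splits.
    weights = torch.softmax(torch.stack(compat_scores), dim=0)
    vec = sum(w * c for w, c in zip(weights, composed))             # Eq. 3
    score = sum(w * s for w, s in zip(weights, compat_scores))      # Eq. 4
    return vec, score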
2.1.2 OUTSIDE PHASE
The outside computation is similar to the inside.
The root node of the outside chart is learned as a bias. Descendant cells are predicted using a disambiguation over the possible outside contexts. Each component of the context consists of a sibling cell from the inside chart and a parent cell from the outside chart.
$$\beta^{vec}_k = \sum_{i,j \in \{k\}} e^{\mathrm{disamb}(i,j)} \, \mathrm{predict}(i, j) \quad (10)$$
$$\beta^{score}_k = \sum_{i,j \in \{k\}} e^{\mathrm{disamb}(i,j)} \, \mathrm{disamb}(i, j) \quad (11)$$
$$\mathrm{disamb}(i, j) = \beta^{vec}_i \, S^{out\top} \alpha^{vec\top}_j + \beta^{score}_i + \alpha^{score}_j \quad (12)$$
$$\mathrm{predict}(i, j) = \mathrm{TreeLSTM}^{out}(\alpha^{vec}_i, \beta^{vec}_j) \quad (13)$$
2.2 TRAINING OBJECTIVE
To train our model we use an auto-encoder-like language modeling objective. In a standard autoencoder, the input X is compressed into a single lower dimensional representation Y . Y is then decompressed and trained to predict X . In our model, we never condition the reconstruction of X on a single Y because the root’s outside representation is initialized with a bias rather than the root’s own inside vector. Instead, we reconstruct X conditioned on the many sub-tree roots, none of which is a single compression of the entire X , but rather a subset.
Each generated outside vector $\beta^{vec}_i$ for constituents of length 1 is trained to predict its original input $v_i$. We approximate a reconstruction loss with a max-margin objective over N negative samples.
For each $x_i$, we sample N negatives $x^n_i$ uniformly at random from the vocabulary. The training objective of our model over a batch $\mathcal{B} = \{X_i^{T_i}\}_{i=1,\dots,B}$ is computed identically for all tokens (which are also all spans of length 1) within the batch and averaged to get the overall loss for the entire batch. Precisely, the loss function for each token (span k) is described in Equation 14.
$$L(\mathcal{B}) = \sum_{n=1}^{N} \max\!\left(0,\ 1 - \beta^{vec}_k \cdot \alpha^{vec}_k + \beta^{vec}_k \cdot \alpha^{vec}_{k_n}\right) \quad (14)$$
In Equation 14, $\alpha^{vec}_{k_n}$ are the representations of negative samples from the vocabulary. Similar to the input transformation, $\alpha^{vec}_{k_n}$ are also computed by applying a linear transformation to the input embeddings. As mentioned before, $\alpha^{vec}_k$ and $\beta^{vec}_k$ are the inside and outside representations for span k, respectively.
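For concreteness, Equation 14 for a single token can be written as the following sketch (the variable names are ours):

import torch

def reconstruction_loss(beta_leaf, alpha_leaf, alpha_negatives):
    # beta_leaf: outside vector of a length-1 span; alpha_leaf: its inside vector;
    # alpha_negatives: (N, d) inside vectors of N uniformly sampled negative tokens.
    pos = torch.dot(beta_leaf, alpha_leaf)
    neg = alpha_negatives @ beta_leaf              # score of each negative sample
    return torch.clamp(1.0 - pos + neg, min=0.0).sum()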
Algorithm 1 Parsing with DIORA
1: procedure CKY(chart)
2:   for each k ∈ chart such that size(k) = 1 do    ▷ Initialize terminal values.
3:     x_k ← 0
4:   for each k ∈ chart do
5:     x_k ← max_{i,j ∈ {k}} [x_i + x_j + compat(i, j)]      ▷ Calculate a maximum score for each span.
6:     b_k ← argmax_{i,j ∈ {k}} [x_i + x_j + compat(i, j)]    ▷ Record a backpointer.
7: procedure FOLLOW-BACKPOINTERS(k)
8:   if size(k) = 1 then
9:     return k
10:   i ← FOLLOW-BACKPOINTERS(b_k^i)
11:   j ← FOLLOW-BACKPOINTERS(b_k^j)
12:   return (i, j)
13: return FOLLOW-BACKPOINTERS(k = root)    ▷ Backtrack to get the maximal tree.
2.3 DIORA CKY PARSING
To obtain a parse with DIORA, we populate an inside and outside chart using the input sentence. Then, we can extract the most likely parse based on our single grammar rule using the CKY procedure (Kasami, 1966; Younger, 1967).
It is true that CKY produces the most likely parse given a set of grammar rules, although in the case of DIORA the single grammar rule is only a weak abstraction of a PCFG. For this reason, including context during CKY might inform our parser to make different decisions. We include context by adding the scalar value of the outside cell to each inside cell.
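A compact Python version of Algorithm 1 is sketched below; it assumes the pairwise compatibility scores have already been read off the filled chart, and the dictionary-based chart layout is an illustrative choice rather than the released implementation.

def cky_parse(n, compat_score):
    # n: sentence length. compat_score[(i, k, j)]: score of combining children
    # (i, k) and (k, j) into the span (i, j), with i inclusive and j exclusive.
    best, back = {}, {}
    for i in range(n):
        best[(i, i + 1)] = 0.0                     # terminal spans get score 0
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            candidates = [(best[(i, k)] + best[(k, j)] + compat_score[(i, k, j)], k)
                          for k in range(i + 1, j)]
            best[(i, j)], back[(i, j)] = max(candidates)   # max score and backpointer
    def follow(i, j):                              # backtrack to recover the binary tree
        if j - i == 1:
            return i
        k = back[(i, j)]
        return (follow(i, k), follow(k, j))
    return follow(0, n)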
3 EXPERIMENTS
To evaluate the effectiveness of DIORA, we run experiments on unsupervised parsing, unsupervised segmentation, and phrase similarity. The model has been implemented in PyTorch (Team, 2018) and the code is published online1. For implementation details, see Appendix A.1.
Our main baseline is the current state-of-the-art unsupervised parser PRPN (Shen et al., 2018). We compare our model against two size variants of this model, which were used in Htut et al. (2018). A comparison of the number of parameters and maximum training sentence length is shown in Table 1.
3.1 UNSUPERVISED PARSING
We first evaluate how well our model is able to predict a full unlabeled syntactic parse. We look at two datasets which have been used in prior work (Htut et al., 2018): the Wall Street Journal (WSJ) section of the Penn Treebank (Marcus et al., 1994) and the automatic parses from MultiNLI (Williams et al., 2018b). WSJ has gold human-annotated parses, while MultiNLI contains automatic parses derived from the Stanford CoreNLP parser (Manning et al., 2014).
1https://github.com/anonymous/submission
We compare our model to left/right branching and balanced trees which are deterministically constructed. RL-SPINN (Yogatama et al., 2016) and ST-Gumbel (Choi et al., 2018) are chart parsing models trained to predict the downstream task of NLI.
3.1.1 RESULTS AND DISCUSSION
Latent tree models have been shown to perform particularly poorly on attachments at the beginning and end of the sequence (Williams et al., 2018a). To address this, we incorporate a post-processing heuristic (+PP in Table 2). We see that PRPN-UP and DIORA benefit much more than PRPN-LM from this heuristic. This is consistent with qualitative analysis showing that DIORA and PRPN-UP incorrectly attach trailing punctuation much more often than PRPN-LM. The heuristic simply attaches trailing punctuation to the root of the tree, regardless of its predicted attachment. We find this to be extremely effective, increasing our state-of-the-art WSJ parsing results by over 3 absolute F1 points.
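The +PP heuristic itself is a small tree rewrite; one possible implementation on binary trees of token indices (the nested-tuple representation is an assumption, matching the CKY sketch above) is:

def attach_trailing_punct(tree, punct_idx):
    # Detach the trailing punctuation leaf (token index punct_idx) from wherever
    # it was predicted and re-attach it directly at the root of the binary tree.
    def remove(node):
        if isinstance(node, int):
            return None if node == punct_idx else node
        left, right = remove(node[0]), remove(node[1])
        if left is None:
            return right
        if right is None:
            return left
        return (left, right)
    return (remove(tree), punct_idx)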
On the MultiNLI dataset, PRPN-LM is the top-performing model without the PP heuristic, and DIORA outperforms PRPN-UP. With the heuristic, PRPN-UP surpasses DIORA. However, it is worth noting that this is not actually a gold-standard evaluation and instead evaluates the ability to replicate the output of a trained parser (Manning et al., 2014).
3.2 UNSUPERVISED PHRASE SEGMENTATION
In many scenarios, rather than a full parse, one is only concerned with extracting particular constituent phrases, such as entities, to be used for downstream analysis. In order to get an idea of how well our model can perform on phrase segmentation, we consider the maximum recall of spans in our predicted parse tree. We leave methods for cutting the tree to future work and instead consider the maximum recall of our model, which serves as an upper bound on its performance. We calculate recall as the percentage of labeled constituents that appear in our predicted tree relative to the total number of constituents in the gold tree. We separate these scores by type, as presented in Table 3.
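Per-type recall can be computed by intersecting the spans of the predicted tree with the gold labeled spans; the sketch below assumes simple (start, end) span tuples and is only illustrative.

from collections import defaultdict

def constituent_recall_by_type(pred_spans, gold_spans):
    # pred_spans: set of (start, end) spans extracted from the predicted tree.
    # gold_spans: list of (start, end, label) constituents from the gold parse.
    hit, total = defaultdict(int), defaultdict(int)
    for start, end, label in gold_spans:
        total[label] += 1
        if (start, end) in pred_spans:
            hit[label] += 1
    return {label: hit[label] / total[label] for label in total}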
3.2.1 RESULTS AND DISCUSSION
In Table 3 we see the breakdown of constituent recall across the 10 most common types. We see that PRPN-UP has the highest recall for the most common type, noun phrases, but drops in every other category. DIORA achieves the highest recall across the most types and is the only model to perform effectively on verb phrases. However, DIORA performs poorly relative to PRPN on prepositional phrases.
3.3 PHRASE SIMILARITY
One of the goals of DIORA is to learn meaningful representations for spans of text. Most language modeling methods focus only on explicitly modeling token representations and rely on ad-hoc post-processing to generate representations for longer spans, typically relying on simple arithmetic functions of the individual tokens.
To evaluate our model's learned phrase representations, we look at the similarity between spans of the same type within labeled phrase datasets. We look at two datasets: CoNLL 2000, a shallow parsing dataset containing spans of noun phrases, verb phrases, etc., and CoNLL 2012, a named entity dataset containing 19 different entity types.
For each of the labeled spans (greater than length 1) in the datasets, we generate a phrase representation, and similarities are based on cosine distance. We report three numerical evaluations for both datasets: precision@K, mean average precision (MAP), and dendrogram purity (DP). We run a hierarchical clustering algorithm over the representations and compute DP (Kobren et al., 2017). Given any two points with the same gold label, the clustering tree is cut to form the minimal cluster containing both points; DP then calculates how pure that cluster is.
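As an example of how one of these metrics can be computed, the following sketch evaluates precision@K over cosine similarities of L2-normalized phrase vectors (an illustration only; MAP and dendrogram purity follow Kobren et al. (2017)).

import numpy as np

def precision_at_k(phrase_vecs, labels, k):
    # phrase_vecs: (n, d) L2-normalized phrase representations; labels: length-n
    # array of gold span types. For every phrase, measure the fraction of its k
    # nearest neighbours (by cosine similarity) that share its label.
    labels = np.asarray(labels)
    sims = phrase_vecs @ phrase_vecs.T
    np.fill_diagonal(sims, -np.inf)                # never retrieve the query itself
    precisions = []
    for i in range(len(labels)):
        neighbours = np.argsort(-sims[i])[:k]
        precisions.append(np.mean(labels[neighbours] == labels[i]))
    return float(np.mean(precisions))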
The first baseline we compare against produces phrase representations by averaging the GloVe vectors of the individual tokens within the span. The second uses ELMo (Peters et al., 2018a), a method for obtaining powerful, context-dependent word embeddings that has led to many recent state-of-the-art results in NLP. We obtain phrases following the procedure described in Peters et al. (2018b) and represent each phrase as a function of its first and last hidden states. We look at two variants of ELMo3. ELMo-L1 produces token hidden states by taking only the bottom LSTM layer outputs; ELMo-Avg takes a flat average over all of the LSTM hidden state layers4.
3.3.1 RESULTS
On the CoNLL 2000 dataset, we find that our model outperforms GloVe and is competitive with ELMo. For CoNLL 2012, a named entity dataset, we find GloVe to actually be the top performer under some metrics, while our model is far behind. These results indicate that DIORA is capturing syntax quite well, but is currently missing semantics.

3We use the publicly available code and pretrained model from https://allennlp.org/elmo
4Note that we do not run this evaluation for PRPN because the authors released model parameters and complete parses but not a complete version of the code in order to run on new data.
3.4 QUALITATIVE RESULTS
We show example trees from PRPN-LM and DIORA in Figure 3.
4 RELATED WORK
Latent Tree Learning. A brief survey of neural latent tree learning models was given in Williams et al. (2018a). The first positive result for latent tree learning was shown in Htut et al. (2018), which used a language modeling objective. The model in Liu et al. (2018) uses an inside chart and an outside procedure to calculate marginal probabilities used to align spans between sentences in entailment.
Neural Inside-Outside Parsers. The Inside-Outside Recursive Neural Network (IORNN) of Le and Zuidema (2014) is closest to ours; it is a graph-based dependency parser that produces a k-best list of parses, whereas DIORA produces the most likely parse given the learned potential functions of the constituents. The Neural CRF Parser (Durrett and Klein, 2015), similar to DIORA, performs exact inference on the structure of a sentence, although it requires a set of grammar rules and labeled parse trees during training. DIORA, like Liu et al. (2018), has a single grammar rule that applies to any pair of constituents and does not use structural supervision.
Unsupervised Parsing and Segmentation. Unsupervised segmentation (also called chunking) from raw text dates back to Ponvert et al. (2011). An earlier paper by the same authors (Ponvert et al., 2010) only looked at parsing certain low-level constituents. Earlier grammar induction models were evaluated against a subset of the WSJ treebank filtered to sentences of length 10 after removing punctuation (Klein and Manning, 2002; 2004), while DIORA is evaluated against two much larger datasets for unsupervised parsing, including the full WSJ treebank. Unsupervised segmentation across parallel corpora was performed in Das and Petrov (2011): the source language had segment labels, the target language did not, but mapped translations exist between the two languages. Cohen et al. (2011) achieved unsupervised segmentation for parallel corpora without using mapped translations.
5 CONCLUSION
In this work we presented DIORA, a completely unsupervised method for inducing syntactic trees and segmentations over text. We showed that an auto-encoder language modeling objective on top of the inside-outside representations of latent tree chart parsers allows us to effectively learn the syntactic structure of language. In experiments on unsupervised parsing, chunking, and phrase representations, we show our model is comparable to or outperforms current baselines, achieving state-of-the-art performance on unsupervised parsing for the WSJ dataset.
Future work can improve the current method by training larger models over much larger corpora including other domains and languages. While the current model seems to focus primarily on syntax, extra unsupervised objectives or light supervision could be injected into the learning procedure to encourage a more thorough capturing of semantics.
A APPENDIX
A.1 TRAINING DETAILS
All DIORA experiments are trained with these settings unless otherwise specified: we use the AllNLI corpus including only sentences with length less than 20, stochastic gradient descent with a batch size of 256, and the model dimension set to 200. The input sentences are embedded using the 300D 840B GloVe embeddings (Pennington et al., 2014), which are not updated during training. Sentences are grouped into batches of uniform sentence length. Each cell in the chart has its L2 norm set to 1. Early stopping is done using the reconstruction objective evaluated on a held-out set. When depending on Noise Contrastive Estimation (as is the case in the reconstruction objective), we sample 3 negative examples per positive example.

1. What is the main contribution of the paper, and what are the claimed novel aspects of the proposed model?
2. What are the concerns regarding the technical content of the paper, particularly in the training objective and the use of CKY parsing?
3. What are the questions regarding the experimental results, such as the mixed results in Section 3.2 and the choice of tasks in Section 3.3?
4. Are there any issues with the explanations provided in the paper, such as the lack of discussion on certain variables and equations?
5. How convincing are the results in supporting the claims made by the authors, especially in capturing syntactic and semantic regularities?
Review
This paper proposes an unsupervised model for grammar induction by drawing an analogy between a tree-like auto-encoder and the computation of the inside-outside algorithm. The claimed novelty of the proposed model is two-fold: (1) it can extract syntactic information without any supervision or downstream task; (2) it can build representations for internal constituents that obey syntactic and semantic regularities. Although this is an interesting work, the experimental results are not convincing enough to support the claims, especially the second one.
Before giving my comments on the experiments, here are the ones for the technical content
- It was a little surprising to me that the inside vectors in the training objective are only from the word level. In this case, the whole training signal seems to be biased towards the outside part. Therefore, I am wondering whether the inside part (the compat and compose functions) can get sufficiently trained.
- It was a little surprising to see that this work directly employs CKY parsing without any justification, since the way of computing scores in the proposed model is not "context-free". Besides, it is not clear to me what the last sentence in Section 2.3 means.
About experiments
- In section 3.1, there are two depth columns in table 2, which are not discussed in this paper.
- In section 3.2, is there any explanation for the mixed results? For example, why does DIORA perform better than PRPN-UP on VP but worse on NP?
- In section 3.3, (1) why choose the tasks of CoNLL 2000 and CoNLL 2012? (2) how to obtain phrase representations? From inside vectors, outside vectors or their mixtures? (3) results in table 4 seems not convincing about DIORA can capture some syntactic and semantic regularities. Or how should we understand the results?
Additional comments:
- W_in in equation (1), S^in in equation (5), and S^out in equation (12) are left unexplained
- Eq. 6 has a typo |
ICLR | Title
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
Abstract
Syntax is a powerful abstraction for language understanding. Many downstream tasks require segmenting input text into meaningful constituent chunks (e.g., noun phrases or entities); more generally, models for learning semantic representations of text benefit from integrating syntax in the form of parse trees (e.g., treeLSTMs). Supervised parsers have traditionally been used to obtain these trees, but lately interest has increased in unsupervised methods that induce syntactic representations directly from unlabeled text. To this end, we propose the deep insideoutside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree. Unlike many prior approaches, DIORA does not rely on supervision from auxiliary downstream tasks and is thus not constrained to particular domains. Furthermore, competing approaches do not learn explicit phrase representations along with tree structures, which limits their applicability to phrase-based tasks. Extensive experiments on unsupervised parsing, segmentation, and phrase clustering demonstrate the efficacy of our method. DIORA achieves the state of the art in unsupervised parsing (46.9 F1) on the benchmark WSJ dataset.
1 INTRODUCTION
Syntax in the form of parse trees is an essential component of many natural language processing tasks. Constituent spans taken from a parse tree are useful for tasks such as relation extraction Verga et al. (2016) and semantic role labeling (Strubell et al., 2018), while the full parse itself can be used to build higher-quality systems for machine translation (Aharoni and Goldberg, 2017) and text classification (Tai et al., 2015). Supervised parsers trained on datasets such as the Penn Treebank (Marcus et al., 1994) are traditionally used to obtain these trees; however, these datasets are generally small and restricted to the newswire domain. For out-of-domain applications, it is generally infeasible to create new treebanks, as syntactic annotation is expensive and time-consuming.
Motivated by these limitations, we propose a method that extracts both shallow parses (i.e., noun phrases or entities) and full syntactic trees from any domain or language automatically without any training data. In addition to just producing the parse, we want our model to build representations for internal constituents that obey syntactic and semantic regularities, as we can then easily inject these representations into downstream tasks. Our model extends existing work on latent tree chart parsers (Le and Zuidema, 2015; Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018), which build up representations for all internal nodes in the tree (cells in the chart) generated by a soft weighting over all possible sub-trees (Section 2).
In previous work, the representation at the root node is used as a sentence encoding and trained to optimize some downstream task, typically natural language inference. Unfortunately, this method requires sentence level annotations to train the model. Worse still, analysis on the trees learned by these models show that they are actually quite poor at capturing syntax that in any way resembles linguistic theory (Williams et al., 2018a). To address these issues, we incorporate the inside-outside algorithm (Baker, 1979; Lari and Young, 1990) into a latent tree chart parser. The bottom-up inside step is equivalent to the forward-pass of previous latent tree chart parsers (Maillard et al., 2017). However, these inside representations are encoded by looking only within the current subtree, completely ignoring outside context. Thus, we perform an additional top-down outside calculation for each node in the tree incorporating external context into sub-tree representations. Finally, we train
the outside representations of leaves to reconstruct the initial input, which results in a completely unsupervised autoencoder-like objective.
Recently, Shen et al. (2018) proposed Parsing-Reading-Predict Networks (PRPN), an RNN based language model with an additional module for inferring syntactic distance. After training, this syntax module can be decomposed to recover a parse (Htut et al., 2018) via a complex mechanism that involves modeling a distribution over possible syntactic structures with a stick-breaking process. Like DIORA, this model can be trained in a completely unsupervised manner. However, it has no mechanism of explicitly modeling phrases, and span representations can only be generated by post-hoc heuristics. Additionally, finding the most probable tree in DIORA is much simpler than in PRPN, as we can just run the CKY algorithm.
To probe different properties of our model, we run experiments on unsupervised parsing, segmentation, and phrase representations. DIORA sets the state-of-the-art for unsupervised parsing on the WSJ dataset, has a greater recall on a more constituent types than PRPN, and demonstrates strong clustering of phrase representations.
2 DIORA: DEEP INSIDE-OUTSIDE RECURSIVE AUTO-ENCODER
Our goal is to build an unsupervised model which can automatically discover syntactic structure from raw text. The hypothesis that our model follows is that the most efficient compression of a sentence will be derived from following the true syntactic structure of the underlying input. Our model is an extension of latent tree chart parsers augmented with the inside-outside algorithm (Baker, 1979; Lari and Young, 1990) and trained as an auto-encoder. Based on our hypothesis, the auto-encoder will best reconstruct the input by discovering and exploiting syntactic regularities of the text.
The inside phase of our method recursively compresses the input sequence into a single vector representing the sentence (Section 2.1.1). This is analogous to the compression step of an autoencoder and equivalent to existing latent tree chart parsers forward pass. Following this, we initiate the outside phase of our algorithm there a generic sentence (root) representation which is trained as a part of the model parameter. As an outside step of the inside-outside algorithm (Section 2.1.2), we expand outward until finally producing reconstructed representations of the leaf nodes. These reconstructed leaves are then optimized to reconstruct the input sentence as done in an auto-encoder based deep neural network (Section 2.2).
2.1 FILLING THE CHART WITH INSIDE-OUTSIDE
Each inside representation of a given sub-tree is built considering only the children constituents of that sub-tree, independent of any outside context. After the inside representations are calculated, we do a top-down outside pass to compute outside representations. The outside representations are encoded by looking at the context of a given sub-tree. In the end, each cell in the chart will contain an inside vector, inside compatibility score, outside vector, and outside compatibility score.
Once the chart is filled, each constituent k (cell in the chart) is associated with an inside vector αveck , an outside vector βveck , inside score α score k and outside score β score k .
Assuming that the input to our model is a sentence $X$ made up of $T$ tokens, $x_0, x_1, \ldots, x_T$, we describe the inside and outside phases of our algorithm in the following Sections 2.1.1 and 2.1.2. Also, each token $x_i$ has a corresponding pre-trained $d$-dimensional embedding vector $v_i$.
2.1.1 INSIDE PHASE
For each pair of neighboring constituents $i$ and $j$, we compute a compatibility score $\alpha^{score}_k$ and a composition vector $\alpha^{vec}_k$. The score and vector that represent a particular span $k$ are computed using a soft weighting over all possible pairs of constituents that together cover the span entirely (we refer to this set of constituent pairs as $\{k\}$). Vectors for spans of length 1 are initialized using a linear transformation of the embedded input $v_i$. Scores associated with these spans are set to 0.
$$\alpha^{vec}_k = W^{in} v_k^{\top} \qquad (1) \qquad\qquad \alpha^{score}_k = 0 \qquad (2)$$
For higher levels of the chart we use:
$$\alpha^{vec}_k = \sum_{i,j \in \{k\}} e^{compat(i,j)} \, compose(i, j) \qquad (3)$$
$$\alpha^{score}_k = \sum_{i,j \in \{k\}} e^{compat(i,j)} \, compat(i, j) \qquad (4)$$
The compatibility function compat is a bilinear function of the vectors from neighboring spans, adding their scores:
$$compat(i, j) = \alpha^{vec}_i \, S^{in} \, (\alpha^{vec}_j)^{\top} + \alpha^{score}_i + \alpha^{score}_j \qquad (5)$$
And the composition function $compose$ is a TreeLSTM (Tai et al., 2015), which produces a hidden state vector $h$ and a cell state vector $c$:
$$compose(i, j) = \mathrm{TreeLSTM}^{in}(\alpha^{vec}_i, \alpha^{vec}_j) = \begin{bmatrix} h \\ c \end{bmatrix} \qquad (6)$$
Where the TreeLSTM is defined as follows:
$$\begin{bmatrix} i \\ f_i \\ f_j \\ u \\ o \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \left( U \begin{bmatrix} h_i \\ h_j \end{bmatrix}^{\top} + b + \begin{bmatrix} 0 \\ \omega \\ \omega \\ 0 \\ 0 \end{bmatrix} \right) \qquad (7)$$
$$c = c_i \odot \sigma(f_i) + c_j \odot \sigma(f_j) + \tanh(u) \odot \sigma(i) \qquad (8)$$
$$h = \sigma(o) \odot \tanh(c) \qquad (9)$$
The constant ω is set to 1 for the inside phase and 0 for the outside phase. The parameters U and b are not shared between the inside phase and outside phase.
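To make the inside recursion concrete, the following sketch fills the inside chart for a single sentence. It is an illustrative PyTorch-style snippet rather than the authors' released code: the function and variable names, the softmax-normalized soft weighting, and the simple one-layer composition used in place of the TreeLSTM are all our own assumptions.

import torch
import torch.nn.functional as F

def inside_pass(leaf_vecs, S_in, compose):
    # leaf_vecs: (T, d) tensor of projected token embeddings (Eq. 1); leaf scores start at 0 (Eq. 2).
    T, d = leaf_vecs.shape
    vec = {(i, i + 1): leaf_vecs[i] for i in range(T)}
    score = {(i, i + 1): leaf_vecs.new_zeros(()) for i in range(T)}
    for length in range(2, T + 1):
        for i in range(T - length + 1):
            j = i + length
            pair_vecs, pair_scores = [], []
            for k in range(i + 1, j):  # every way of splitting span (i, j) into two children
                left, right = vec[(i, k)], vec[(k, j)]
                # bilinear compatibility of the children plus their own scores (Eq. 5)
                compat = left @ S_in @ right + score[(i, k)] + score[(k, j)]
                pair_scores.append(compat)
                pair_vecs.append(compose(left, right))  # Eq. 6 (a TreeLSTM in the paper)
            pair_scores = torch.stack(pair_scores)
            weights = F.softmax(pair_scores, dim=0)  # soft weighting over all splits (Eqs. 3-4)
            vec[(i, j)] = (weights.unsqueeze(1) * torch.stack(pair_vecs)).sum(0)
            score[(i, j)] = (weights * pair_scores).sum()
    return vec, score

# Toy usage with a single-layer composition standing in for the TreeLSTM:
d = 8
W = torch.randn(2 * d, d) * 0.1
vec, score = inside_pass(torch.randn(5, d), torch.randn(d, d) * 0.1,
                         lambda a, b: torch.tanh(torch.cat([a, b]) @ W))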
2.1.2 OUTSIDE PHASE
The outside computation is similar to the inside computation. The root node of the outside chart is learned as a bias. Descendant cells are predicted using a disambiguation over the possible outside contexts. Each component of the context consists of a sibling cell from the inside chart and a parent cell from the outside chart.
$$\beta^{vec}_k = \sum_{i,j \in \{k\}} e^{disamb(i,j)} \, predict(i, j) \qquad (10)$$
$$\beta^{score}_k = \sum_{i,j \in \{k\}} e^{disamb(i,j)} \, disamb(i, j) \qquad (11)$$
$$disamb(i, j) = \beta^{vec}_i \, S^{out} \, (\alpha^{vec}_j)^{\top} + \beta^{score}_i + \alpha^{score}_j \qquad (12)$$
$$predict(i, j) = \mathrm{TreeLSTM}^{out}(\alpha^{vec}_i, \beta^{vec}_j) \qquad (13)$$
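The outside update for a single chart cell can be sketched in the same style. This is again illustrative: contexts is our own name for the list of ways the cell can be extended, each pairing the parent's outside vector/score with the sibling's inside vector/score, and predict stands in for the outside TreeLSTM of Eq. 13.

import torch
import torch.nn.functional as F

def outside_cell(contexts, S_out, predict):
    # contexts: list of (parent_out_vec, parent_out_score, sibling_in_vec, sibling_in_score)
    scores, vecs = [], []
    for p_vec, p_score, s_vec, s_score in contexts:
        scores.append(p_vec @ S_out @ s_vec + p_score + s_score)  # Eq. 12
        vecs.append(predict(s_vec, p_vec))                        # Eq. 13
    scores = torch.stack(scores)
    weights = F.softmax(scores, dim=0)                            # soft weighting over contexts
    out_vec = (weights.unsqueeze(1) * torch.stack(vecs)).sum(0)   # Eq. 10
    out_score = (weights * scores).sum()                          # Eq. 11
    return out_vec, out_score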
2.2 TRAINING OBJECTIVE
To train our model we use an auto-encoder-like language modeling objective. In a standard autoencoder, the input X is compressed into a single lower dimensional representation Y . Y is then decompressed and trained to predict X . In our model, we never condition the reconstruction of X on a single Y because the root’s outside representation is initialized with a bias rather than the root’s own inside vector. Instead, we reconstruct X conditioned on the many sub-tree roots, none of which is a single compression of the entire X , but rather a subset.
Each generated outside vector $\beta^{vec}_i$ for constituents of length 1 is trained to predict its original input $v_i$. We approximate a reconstruction loss with a max-margin across $N$ negative samples.
For each $x_i$, we sample $N$ negative samples $x_i^n$ uniformly at random from the vocabulary. The training objective of our model over a batch $B = \{X_i^{T_i}, i = 1, \ldots, B\}$ is computed identically for all tokens (which are also all spans of length 1) within the batch and averaged to get the overall loss for the entire batch. Precisely, the loss function for each token (span $k$) is described in Equation 14.
$$L(B) = \sum_{n=1}^{N} \max\left(0,\ 1 - \beta^{vec}_k \cdot \alpha^{vec}_k + \beta^{vec}_k \cdot \alpha^{vec}_{k_n}\right) \qquad (14)$$
In Equation 14, $\alpha^{vec}_{k_n}$ are representations of the negative samples from the vocabulary. As with the input transformation, $\alpha^{vec}_{k_n}$ are also computed by applying a linear transformation over the input embeddings. As mentioned before, $\alpha^{vec}_k$ and $\beta^{vec}_k$ are the inside and outside representations for span $k$, respectively.
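The loss of Equation 14 is a standard max-margin objective over dot products, for example as in the hedged sketch below; leaf_outside, leaf_inside, and negative_inside are our own names for the leaves' outside vectors, their positive inside targets, and the N sampled negative representations.

import torch

def reconstruction_loss(leaf_outside, leaf_inside, negative_inside):
    # leaf_outside, leaf_inside: (T, d); negative_inside: (T, N, d)
    pos = (leaf_outside * leaf_inside).sum(-1, keepdim=True)         # beta_k . alpha_k
    neg = torch.einsum('td,tnd->tn', leaf_outside, negative_inside)  # beta_k . alpha_{k_n}
    margins = torch.clamp(1.0 - pos + neg, min=0.0)                  # one term per negative sample
    return margins.sum(dim=1).mean()                                 # sum over N, average over tokens

loss = reconstruction_loss(torch.randn(6, 8), torch.randn(6, 8), torch.randn(6, 3, 8))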
Algorithm 1 Parsing with DIORA
1: procedure CKY(chart)
2:   for each k ∈ chart | size(k) = 1 do   ▷ Initialize terminal values.
3:     x_k ← 0
4:   for each k ∈ chart do
5:     x_k ← max_{i,j ∈ {k}} [x_i + x_j + compat(i, j)]   ▷ Calculate a maximum score for each span.
6:     b_k ← argmax_{i,j ∈ {k}} [x_i + x_j + compat(i, j)]   ▷ Record a backpointer.
7: procedure FOLLOW-BACKPOINTERS(k)
8:   if size(k) = 1 then
9:     return k
10:  i ← FOLLOW-BACKPOINTERS(b_k^i)
11:  j ← FOLLOW-BACKPOINTERS(b_k^j)
12:  return (i, j)
13: return FOLLOW-BACKPOINTERS(k = root)   ▷ Backtrack to get the maximal tree.
2.3 DIORA CKY PARSING
To obtain a parse with DIORA, we populate an inside and outside chart using the input sentence. Then, we can extract the most likely parse based on our single grammar rule using the CKY procedure (Kasami, 1966; Younger, 1967).
Using CKY produces the most likely parse given a set of grammar rules, although in the case of DIORA the single grammar rule is only a weak abstraction of a PCFG. For this reason, including context during CKY might inform our parser to make different decisions. We include context by adding the scalar value of the outside cell to each inside cell.
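Algorithm 1 can be written compactly as follows. This is illustrative only; compat(i, k, j) is assumed to be the trained compatibility score for combining spans (i, k) and (k, j), possibly already augmented with the outside score as just described.

def cky_parse(T, compat):
    # best[(i, j)] is the best score for span (i, j); back[(i, j)] records the chosen split point.
    best = {(i, i + 1): 0.0 for i in range(T)}
    back = {}
    for length in range(2, T + 1):
        for i in range(T - length + 1):
            j = i + length
            candidates = [(best[(i, k)] + best[(k, j)] + compat(i, k, j), k)
                          for k in range(i + 1, j)]
            best[(i, j)], back[(i, j)] = max(candidates)
    def follow(i, j):  # backtrack through the backpointers to recover the maximal tree
        if j - i == 1:
            return i
        k = back[(i, j)]
        return (follow(i, k), follow(k, j))
    return follow(0, T)

# e.g. a toy compatibility function that prefers right-branching trees:
tree = cky_parse(4, lambda i, k, j: 1.0 if k == i + 1 else 0.0)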
3 EXPERIMENTS
To evaluate the effectiveness of DIORA, we run experiments on unsupervised parsing, unsupervised segmentation, and phrase similarity. The model has been implemented in PyTorch (Team, 2018) and the code is published online.1 For implementation details, see Appendix A.1.
Our main baseline is the current state-of-the-art unsupervised parser PRPN (Shen et al., 2018). We compare our model against two size variants of this model which were used in Htut et al. (2018). A comparison of the number of parameters and maximum training sentence length is shown in Table 1.
3.1 UNSUPERVISED PARSING
We first evaluate how well our model is able to predict a full unlabeled syntactic parse. We look at two datasets which have been used in prior work (Htut et al., 2018): the Wall Street Journal (WSJ) section of the Penn Treebank (Marcus et al., 1994), and the automatic parses from MultiNLI (Williams et al., 2018b). WSJ has gold human-annotated parses and MultiNLI contains automatic parses derived from the Stanford CoreNLP parser (Manning et al., 2014).
1https://github.com/anonymous/submission
We compare our model to left/right branching and balanced trees which are deterministically constructed. RL-SPINN (Yogatama et al., 2016) and ST-Gumbel (Choi et al., 2018) are chart parsing models trained to predict the downstream task of NLI.
3.1.1 RESULTS AND DISCUSSION
Latent tree models have been shown to perform particularly poorly on attachments at the beginning and end of the sequence (Williams et al., 2018a). To address this, we incorporate a post-processing heuristic (+PP in Table 2). We see that PRPN-UP and DIORA benefit much more than PRPN-LM from this heuristic. This is consistent with qualitative analysis showing that DIORA and PRPN-UP incorrectly attach trailing punctuation much more often than PRPN-LM. This heuristic simply attaches trailing punctuation to the root of the tree, regardless of its predicted attachment. We find this to be extremely effective, increasing our state-of-the-art WSJ parsing results by over 3 absolute F1 points.
On the MultiNLI dataset, PRPN-LM is the top performing model without the PP heuristic, and DIORA outperforms PRPN-UP. With the heuristic, PRPN-UP surpasses DIORA. However, it is worth noting that this is not actually a gold-standard evaluation and instead evaluates the ability to replicate the output of a trained parser (Manning et al., 2014).
3.2 UNSUPERVISED PHRASE SEGMENTATION
In many scenarios, rather than a full parse, one is only concerned with extracting particular constituent phrases, such as entities, to be used for downstream analysis. In order to get an idea of how well our model can perform on phrase segmentation, we consider the maximum recall of spans in our predicted parse tree. We leave methods for cutting the tree to future work and instead consider the maximum recall of our model, which serves as an upper bound on its performance. We calculate recall as the percentage of labeled constituents that appear in our predicted tree relative to the total number of constituents in the gold tree. We separate these scores by type, which are presented in Table 3.
3.2.1 RESULTS AND DISCUSSION
In Table 2 we see the breakdown of constituent recall across the 10 most common types. We see that PRPN-UP has the highest recall for the most common type noun-phrase, but drops in every other category. DIORA achieves the highest recall across the most types and is the only model to perform effectively on verb-phrases. However, DIORA performs poorly relative to PRPN at prepositional phrases.
3.3 PHRASE SIMILARITY
One of the goals of DIORA is to learn meaningful representations for spans of text. Most language modeling methods focus only on explicitly modeling token representations and rely on ad-hoc post-processing to generate representations for longer spans, typically relying on simple arithmetic functions of the individual tokens.
To evaluate our model’s learned phrase representations, we look at the similarity between spans of the same type within labeled phrase datasets. We look at two datasets: CoNLL 2000, a shallow parsing dataset containing spans of noun phrases, verb phrases, etc.; and CoNLL 2012, a named entity dataset containing 19 different entity types.
For each of the labeled spans (greater than length 1) in the datasets, we generate a phrase representation, and similarities are based on cosine distance. We report three numerical evaluations for both datasets: precision@K, mean average precision (MAP), and dendrogram purity (DP). We run a hierarchical clustering algorithm over the representations and compute DP (Kobren et al., 2017). Given any two points with the same gold label, the clustering tree is cut to form the minimal cluster containing both points. DP then calculates how pure that cluster is.
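As an illustration of the retrieval-style metrics, precision@K over phrase vectors can be computed as below. This is a sketch under our own assumptions about the input format: vectors is an (M, d) array of phrase representations and labels holds their gold types; it is not the exact evaluation script behind the reported numbers.

import numpy as np

def precision_at_k(vectors, labels, k=10):
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T                   # cosine similarity between all phrase pairs
    np.fill_diagonal(sims, -np.inf)            # never retrieve a phrase as its own neighbour
    labels = np.asarray(labels)
    precisions = []
    for m in range(len(labels)):
        topk = np.argsort(-sims[m])[:k]        # indices of the K most similar phrases
        precisions.append(np.mean(labels[topk] == labels[m]))
    return float(np.mean(precisions))

p_at_10 = precision_at_k(np.random.randn(100, 16), np.random.randint(0, 5, size=100), k=10)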
The first baseline we compare against produces phrase representations by averaging GloVe vectors of the individual tokens within the span. The second uses ELMo (Peters et al., 2018a), a method for obtaining powerful, context-dependent word embeddings that has led to many recent state-of-the-art results in NLP. We obtain phrases following the procedure described in Peters et al. (2018b) and represent phrases as a function of their first and last hidden states. We look at two variants of ELMo.3 ELMo-L1 produces token hidden states by taking only the bottom LSTM layer outputs, and ELMo-Avg takes a flat average over all of the LSTM hidden state layers.4
3.3.1 RESULTS
On the CoNLL 2000 dataset, we find that our model outperforms GloVe and is competitive with ELMo. For CoNLL 2012, a named entity dataset, we find GloVe to actually be the top performer
3We use the publicly available code and pretrained model from https://allennlp.org/elmo 4Note that we do not run this evaluation for PRPN because the authors released model parameters and
complete parses but not a complete version of the code in order to run on new data.
under some metrics while our model is far behind. These results indicate that DIORA is capturing syntax quite well, but is currently missing semantics.
3.4 QUALITATIVE RESULTS
We show example trees from PRPN-LM and DIORA in Figure 3.
4 RELATED WORK
Latent Tree Learning A brief survey of neural latent tree learning models was covered in Williams et al. (2018a). The first positive result for latent tree learning was shown in Htut et al. (2018), which used a language modeling objective. The model in Liue et al. (2018) uses an inside chart and an outside procedure to calculate marginal probabilities used to align spans between sentences in entailment.
Neural Inside-Outside Parsers The Inside-Outside Recursive Neural Network (IORNN) in Le and Zuidema (2014) is closest to ours; it is a graph-based dependency parser that produces a k-best list of parses, whereas DIORA produces the most likely parse given the learned potential functions of the constituents. The Neural CRF Parser (Durrett and Klein, 2015), similar to DIORA, performs exact inference on the structure of a sentence, although it requires a set of grammar rules and labeled parse trees during training. DIORA, like Liue et al. (2018), has a single grammar rule that applies to any pair of constituents and does not use structural supervision.
Unsupervised Parsing and Segmentation Unsupervised segmentation (also called chunking) from raw text dates back to Ponvert et al. (2011). Another paper by the same authors (Ponvert et al., 2010) only looked at parsing certain low-level constituents. Earlier grammar induction models were evaluated against a subset of the WSJ treebank filtered to sentences of length 10 after removing punctuation (Klein and Manning, 2002; 2004), while DIORA is evaluated against two much larger datasets for unsupervised parsing, including the full WSJ treebank. Unsupervised segmentation across parallel corpora was performed in Das and Petrov (2011). The source language had segment labels and the target language did not, but there are mapped translations between the two languages. Cohen et al. (2011) achieved unsupervised segmentation for parallel corpora without using mapped translations.
5 CONCLUSION
In this work we presented DIORA, a completely unsupervised method for inducing syntactic trees and segmentations over text. We showed that an auto-encoder language modeling objective on top of inside-outside representations of latent tree chart parsers allows us to effectively learn syntactic structure of language. In experiments on unsupervised parsing, chunking, and phrase representations, we show our model is comparable to or outperforms current baselines, achieving state-of-the-art performance on unsupervised parsing for the WSJ dataset.
Future work can improve the current method by training larger models over much larger corpora including other domains and languages. While the current model seems to focus primarily on syntax, extra unsupervised objectives or light supervision could be injected into the learning procedure to encourage a more thorough capturing of semantics.
A APPENDIX
A.1 TRAINING DETAILS
All DIORA experiments are trained with these settings unless otherwise specified: we use the ALLNLI corpus including only sentences with length less than 20, stochastic gradient descent with a batch size of 256, and the model dimension set to 200. The input sentences are embedded using the 300D 480B GloVe embeddings (Pennington et al., 2014) and are not updated during training. Sentences are grouped into batches with uniform sentence length. Each cell in the chart has its L2norm set to 1. Early stopping is done using the reconstruction objective evaluated on a held-out set. When depending on Noise Contrastive Estimation (as is the case in the reconstruction objective), we sample 3 negative examples per positive example. | 1. What is the focus of the paper, and what are the contributions of the proposed neural latent tree model?
2. What are the strengths of the paper, particularly regarding its novel approach to constituency parsing?
3. What are the weaknesses of the paper, such as the lack of application tasks or comparisons with other models?
4. Are there any concerns about the training and evaluation setup, such as the maximum sentence length, that may impact the performance of the model?
5. How does the reviewer assess the clarity and quality of the paper's content, including the tables, figures, and examples provided?
6. Are there any suggestions for additional discussions or analyses that could enhance the paper's value, such as comparing the model's performance on different sentence lengths or providing more examples of errors made by the model? | Review | Review
This paper describes a neural latent tree model (DIORA) trained with an auto-encoding objective. The proposed model performs an inside-outside pass to construct vector representations for all possible tree nodes. Full constituency trees can be extracted from the model by doing a CKY pass with the internal node-pair scores. It achieves the state of the art on unsupervised constituency parsing. On the other tasks/datasets (unsupervised segmentation, phrase similarity), the model is either on par with or a bit worse than the previous best systems.
Strengths:
- The proposed model is quite interesting. Judging from the empirical results, it captures syntactic structure better than the other latent tree models.
- The paper is well-written and easy to understand.
Weaknesses:
- It would be nice to see at least one task that involves the application of the DIORA model.
Other comments and questions below:
- Table 1 shows that DIORA is trained on sentences with a maximum length of 20, while Section 4 says that the model is evaluated on the full WSJ test set. Is this setup affecting the performance? It would be nice to see some accuracy breakdown by sentence lengths.
- Is it possible to compare to other unsupervised parsing/grammar induction models, despite the fact that they often evaluate on shorter sentences (e.g. 20 or 40 words)?
- This paper reports parsing performance on the MultiNLI dataset against the automatic parser, is there a plan for getting the final accuracy on MultiNLI as well?
- It would be nice to have more discussions in Section 3.4 on the qualitative examples. Moreover, is it possible to show examples where DIORA makes mistakes as well?
- In 3.2.1, what are the possible reasons behind DIORA performing poorly on prepositional phrases?
ICLR | Title
Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders
Abstract
Syntax is a powerful abstraction for language understanding. Many downstream tasks require segmenting input text into meaningful constituent chunks (e.g., noun phrases or entities); more generally, models for learning semantic representations of text benefit from integrating syntax in the form of parse trees (e.g., treeLSTMs). Supervised parsers have traditionally been used to obtain these trees, but lately interest has increased in unsupervised methods that induce syntactic representations directly from unlabeled text. To this end, we propose the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree. Unlike many prior approaches, DIORA does not rely on supervision from auxiliary downstream tasks and is thus not constrained to particular domains. Furthermore, competing approaches do not learn explicit phrase representations along with tree structures, which limits their applicability to phrase-based tasks. Extensive experiments on unsupervised parsing, segmentation, and phrase clustering demonstrate the efficacy of our method. DIORA achieves the state of the art in unsupervised parsing (46.9 F1) on the benchmark WSJ dataset.
1 INTRODUCTION
Syntax in the form of parse trees is an essential component of many natural language processing tasks. Constituent spans taken from a parse tree are useful for tasks such as relation extraction (Verga et al., 2016) and semantic role labeling (Strubell et al., 2018), while the full parse itself can be used to build higher-quality systems for machine translation (Aharoni and Goldberg, 2017) and text classification (Tai et al., 2015). Supervised parsers trained on datasets such as the Penn Treebank (Marcus et al., 1994) are traditionally used to obtain these trees; however, these datasets are generally small and restricted to the newswire domain. For out-of-domain applications, it is generally infeasible to create new treebanks, as syntactic annotation is expensive and time-consuming.
Motivated by these limitations, we propose a method that extracts both shallow parses (i.e., noun phrases or entities) and full syntactic trees from any domain or language automatically without any training data. In addition to just producing the parse, we want our model to build representations for internal constituents that obey syntactic and semantic regularities, as we can then easily inject these representations into downstream tasks. Our model extends existing work on latent tree chart parsers (Le and Zuidema, 2015; Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018), which build up representations for all internal nodes in the tree (cells in the chart) generated by a soft weighting over all possible sub-trees (Section 2).
| 1. What is the main contribution of the paper regarding unsupervised dependency parsing?
2. What are the strengths and weaknesses of the proposed model, particularly in its novelty and relevance to previous works?
3. How does the reviewer assess the significance of the paper's claims, especially regarding its comparison with other works in the field?
4. What are some of the specific works that the reviewer suggests the authors should have considered or cited in their paper?
5. Are there any questions or concerns regarding the paper's content, such as its clarity, quality, reproducibility, or limitations? | Review | Review
This paper proposes a model for unsupervised dependency parsing (latent tree induction) that is based on a combination of the inside-outside algorithm with neural modeling (recursive auto-encoders). The paper is clearly written and the model seems interesting and, as far as I can tell, also novel.
Yet, the paper suffers from a major limitation that warrants its rejection. Unfortunately, the authors seem to be totally unaware of previous work on unsupervised parsing. It may sound surprising but people have worked on this problem even before 2014 and there are some strong results that do not involve deep learning. I am sorry for the sarcasm, but it was really frustrating to see not only the lack of citation, but also the statement (already in the abstract and throughout the paper) that an F-score of 46.9 sets a SOTA for unsupervised parsing with the WSJ Penn Treebank.
Particularly, the authors ignore the works of Cohen and Smith, Spitkovsky et al., Seginer and many others. A quick look at Spitkovsky et al., 2013 (EMNLP) would reveal that the reported result is not SOTA (although Spitkovsky et al. report results for sentences no longer than 40 words, given the numbers they report for several models - an F1 score of 54 and more - the full WSJ number is likely to be higher than 46.9). But all the papers cited in Spitkovsky et al. 2013 are not even cited here.
I would also like to refer the authors to work by Cohen and Collins (2012-2013) on latent variable PCFG, which presents a provably consistent parameter estimation for the problem. The presented techniques may also be an interesting point of comparison to this work and phrase representation may also be extracted from that algorithm. |
ICLR | Title
Policy Optimization In the Face of Uncertainty
Abstract
Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertainty-aware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can be represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.
1 INTRODUCTION
Popular reinforcement learning (RL) algorithms are divided into two main paradigms: model-free (MFRL) and model-based (MBRL) types. While achieving good asymptotic performance in many high-dimensional problems (Mnih et al., 2015; Silver et al., 2017; Schulman et al., 2017; Hessel et al., 2018; Espeholt et al., 2018), MFRL methods suffer from high sample complexity since they learn state/state-action values only from rewards and do not explicitly exploit the rich information underlying the transition dynamics data. On the contrary, MBRL approaches, by trying to model the transition dynamics, which are in turn used for planning without having to frequently interact with real systems, are known to be sample efficient and thus more practical (Deisenroth et al., 2013; Finn et al., 2016; Ebert et al., 2018; Sutton & Barto, 2018; Kaiser et al., 2019).
Current MBRL methods, however, still have limitations because the accuracy of the learned dynamics model is usually unsatisfactory, especially in complex environments (Zhang et al., 2018; Lowrey et al., 2018). The model error and its compounding effect when planning, i.e. a small bias in the model can lead to a highly erroneous value function estimate and a strongly biased suboptimal policy, make MBRL less competitive in terms of asymptotic performance than MFRL for many non-trivial tasks. Numerous attempts have been made to tackle this model bias problem, such as using Gaussian Processes (GPs) (Deisenroth & Rasmussen, 2011; Gal & Ghahramani, 2016), Bayesian Neural Networks (Gal et al., 2016; Depeweg et al., 2016a; Kamthe & Deisenroth, 2017), and Ensembling (Kurutach et al., 2018; Clavera et al., 2018), but none of them has been fully successful.
Another limitation of many existing MBRL methods is that they rely on the model predictive control (MPC) framework (Garcia et al., 1989). While being commonly used, MPC has several drawbacks (Atkeson & Schaal, 1997; Thananjeyan et al., 2019). First, each step requires solving a high-dimensional optimization problem and thus is computationally prohibitive for applications requiring either real-time or low-latency reaction such as autonomous driving. Second, the policy is only implicit via solving the mentioned optimization problem. Not being able to explicitly represent the policy makes it hard to transfer the learned policy to other tasks or to initialize agents with an existing better-than-random policy.
Contributions. To address those challenges of MBRL, we propose a new framework called Policy Optimization with Uncertainty-aware Model (POUM) that is able to optimize in the face of uncertainty. Our policy optimization is based on the policy gradient, which has been widely adopted in MFRL (Lillicrap et al., 2015; Schulman et al., 2017; Haarnoja et al., 2018). However, in POUM, the objective function, a utility function, is formulated around the uncertainty-aware dynamics model. This utility function takes into account both the mean and the variance of the value function estimate. This helps reduce model bias while effectively approximating the true objective, which is the value function of the policy. For experiments, we demonstrate the advantages of POUM over state-of-the-art (SoTA) methods on various RL tasks when training from scratch with unaltered environments, and we also investigate how much risk is tolerable in those tasks. Finally, POUM can be represented end-to-end in a single computational graph, which greatly facilitates training.
2 RELATED WORK
Traditional MBRL. Initial successes of MBRL in continuous control achieved promising results by learning control policies trained on models of local dynamics using linear parametric approximators (Abbeel et al., 2007; Levine & Koltun, 2013). Alternative methods such as Deisenroth & Rasmussen (2011); Levine & Koltun (2013) incorporated non-parametric probabilistic GPs to capture model uncertainty during policy planning and evaluation. While these methods enhance data efficiency in low-dimensional tasks, their applications in more challenging domains such as environments involving non-contact dynamics and high-dimensional control remain limited by the inflexibility of their temporally local structure and intractable inference time. Our approach, on the contrary, pushes the uncertainty modeling to the objective function and not anywhere else in the architecture. Plus, the fact that this objective is designed to propagate all the way to the value function makes it versatile in capturing uncertainty. What is more, the fact that all core components are constructed with neural networks gives our solution more power in dealing with high-dimensional tasks, thus acquiring asymptotically high performance compared to MFRL methods and, at the same time, retaining data efficiency in those complex domains.
Deep Neural Networks (DNNs). Recently, there has been a revived interest in using DNNs to learn predictive models of environments from data, drawing inspiration from ideas in the early literature of the MBRL field, mainly because their large representational capacity makes them suitable function approximators for complex environments, especially those involving images or videos (Ebert et al., 2018; Kaiser et al., 2019). However, additional care usually has to be taken to avoid model bias, a situation where the DNNs overfit in the early stages of learning, resulting in inaccurate models. For example, Depeweg et al. (2016b) modeled a Bayesian type of DNNs to capture uncertainty in transition dynamics. In another approach, Nagabandi et al. (2017) combined a learned dynamics network with MPC to initialize the policy network to accelerate learning in model-free deep RL. Chua et al. (2018) extended this idea by introducing a bootstrapped ensemble of probabilistic DNNs to model predictive uncertainty of the learned networks and demonstrating that a pure model-based approach can attain the asymptotic performance of MFRL counterparts. However, the use of MPC to define a policy leads to poor run-time execution and makes it hard to transfer the policy across tasks. On the contrary, our framework is much simpler in that we do not employ any extra method to model the dynamics uncertainty inside DNNs that are already complicated in themselves, with numerous architectures and hyperparameters, but instead formulate a single, new uncertainty-aware objective for end-to-end optimization.
Ensemble. Another group of work leveraged a learned ensemble of dynamics models to train a policy network. Kurutach et al. (2018) learned a stochastic policy via trust-region policy optimization, and Clavera et al. (2018) cast the policy gradient as a meta-learning adaptation step with respect to each member of the ensemble. Buckman et al. (2018) proposed an algorithm to learn a weighted combination of roll-outs of different horizon lengths, which dynamically interpolates between model-based and model-free learning based on the uncertainty in the model predictions. To our knowledge, this is the work closest to ours, which learns a reward function in addition to the dynamics function. But none of the aforementioned works propagates the uncertainty all the way to the value function or uses the concept of a utility function to balance risk and return as in our model.
Finally, ensembles of DNNs also provide a straightforward technique to obtain reliable estimates of predictive uncertainty (Lakshminarayanan et al., 2017) and have been integrated with bootstrapping to guide exploration in MFRL (Osband et al., 2016; Janner et al., 2019). While many of the approaches mentioned in this section employ bootstrapping to train an ensemble of models, we note that their implementations reconstruct the bootstrap datasets at every training iteration, which effectively trains on every single data sample and thus diminishes the advantage in uncertainty quantification achieved through bootstrapping. Besides the novel objective formulation, our model is different in that, to maintain online bootstrapped datasets across the ensemble, it adds each incoming data sample to a dataset according to a Poisson probability distribution (Park et al., 2007; Qin et al., 2013), thereby guaranteeing that those datasets are asymptotically consistent.
3 UNCERTAINTY-AWARE MODEL-BASED POLICY OPTIMIZATION
3.1 BACKGROUND
Consider a discrete-time Markov Decision Process (MDP) defined by a tuple M = {S, A, f, r, γ}, in which S is a state space, A is an action space, f : S × A → S is a deterministic (or probabilistic) transition function, r : S × A → R is a deterministic reward function, and γ ∈ (0, 1) is a discount factor. We define the return as the sum of the rewards r(s_t, a_t) = r(s_t, π(s_t)) for t = 0, . . . , T over the whole trajectory (s_0, a_0, ..., s_T, a_T) induced by a policy π : S → A and discounted by γ. Here T ∈ Z+ is the task horizon, which may take a value of ∞ for non-episodic environments. The goal of RL is to find an optimal policy π* that maximizes the expected return
$$J(\pi) = \mathbb{E}_{s_0 \sim S}\left[ V^{\pi}(s_0) \right] \qquad (1)$$
where the value function is defined as $V^{\pi}(s_0) = \sum_{t=0}^{T-1} \gamma^t r(s_t, \pi(s_t))$, and the state transition is $s_{t+1} = f(s_t, \pi(s_t))$, with $s_0$ being randomly chosen from the distribution of $s \in S$. Then, if the dynamics function $f$ and the reward function $r$ are given, solving Equation 1 can be done using the Calculus of Variations (Young, 2000) or using Policy Gradient (Sutton et al., 2000) when the control function is parameterized or finite dimensional.
In RL, however, f and r are often unknown and hence Equation 1 becomes a blackbox optimization problem with an unknown objective function. Following the Bayesian approach commonly used in the blackbox optimization literature (Shahriari et al., 2015), we propose to solve this problem by iteratively learning a probabilistic estimate V̂ of V from data and optimizing the policy according to this approximate model, as detailed in the next section.
3.2 FORMULATION OF UNCERTAINTY-AWARE OPTIMIZATION OBJECTIVE
It is worth noting that any unbiased method would model V̂ (π) as a probabilistic estimate, i.e. V̂ (π) would be a distribution (as opposed to a point estimate) for a given policy π. Optimizing a stochastic objective is, however, not well-defined. Our solution is to transform V̂ into a deterministic utility function that reflects a subjective measure balancing the risk and return. Following Markowitz (1952); Sato et al. (2001); Garcıa & Fernández (2015), we propose a risk-sensitive objective criterion using a linear combination of the mean and the standard deviation of V̂ (π). Formally stated, our objective criterion, which we also call the utility function, now becomes
$$U(\pi)(s_0) = \mathbb{E}_{s_0 \sim S}\left[ \mu\left( \hat{V}(\pi)(s_0) \right) + c \times \sigma\left( \hat{V}(\pi)(s_0) \right) \right], \qquad (2)$$
where $\mu$ and $\sigma$ are respectively the mean and the standard deviation of $\hat{V}(\pi)(s_0)$, and $c$ is a constant that represents the subjective risk preference of the learning agent. A positive risk preference implies that the agent is adventurous, while a negative risk preference indicates that the agent has a safe exploration strategy. To our best knowledge, this uncertainty-aware model-based objective function has not been used in the RL literature.
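Given per-member value estimates from an ensemble, the utility is a one-line combination of their statistics, as in the sketch below; value_estimates is assumed to be a tensor of shape (B, batch) with one estimate per bootstrapped model.

import torch

def utility(value_estimates, c):
    # value_estimates: (B, batch) -- one rollout-based estimate of V per ensemble member.
    return value_estimates.mean(dim=0) + c * value_estimates.std(dim=0)

u = utility(torch.randn(5, 32), c=-0.5)  # a negative c encodes a risk-averse preference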
3.3 EMPIRICAL ESTIMATE OF VALUE FUNCTION
Section 3.2 provides a general framework for policy optimization under uncertainty, assuming the availability of the estimation model V̂ (π) of the true value function V (π). In this section, we
describe how to estimate V̂ (π) with a model-based approach. The main idea is to approximate the functions {f, r} with probabilistic parametric models {f̂ , r̂} and fully propagate the estimated uncertainty when planning under each policy π from an initial state s0. The value function estimate V̂ can be formulated as
$$\hat{V}(\pi)(s_0) = \sum_{t=0}^{T-1} \gamma^t \hat{r}\left( \hat{s}_t, \pi(\hat{s}_t) \right), \qquad (3)$$
where $\hat{s}_0 = s_0$ and $\hat{s}_{t+1} = \hat{f}(\hat{s}_t, \pi(\hat{s}_t))$ for $t = 0, \ldots, T - 1$. Next, we describe how to efficiently model $\{f, r\}$ with well-calibrated uncertainty and a rollout technique that allows the uncertainty to be faithfully propagated into $\hat{V}(\pi)$.
3.3.1 BOOTSTRAP SETUP FOR MODEL LEARNING
Following the traditional bootstrap methodology, the empirical model function $\hat{f}$ is represented as $\{\hat{f}_{\phi_k}(s_t, a_t) \to s_{t+1}\}_{k=1}^{B}$. For simplicity of implementation, we model each bootstrap replica as deterministic and rely on the ensemble as the sole mechanism for quantifying and propagating uncertainty. Each bootstrapped model $\hat{f}_{\phi_k}$, which is parameterized by $\phi_k$, learns to minimize the L2 one-step prediction loss over the respective bootstrapped dataset $D_k$:
$$\min_{\phi_k,\ k = 1, \ldots, B} \ \mathbb{E}_{(s_t, a_t, s_{t+1}) \sim D_k} \left\| s_{t+1} - \hat{f}_{\phi_k}(s_t, a_t) \right\|_2^2. \qquad (4)$$
The training dataset $D$, from which the bootstrapped datasets $\{D_k\}_{k=1}^{B}$ are sampled, stores the transitions the agent has experienced. Since each model observes its own subset of the real data samples, the predictions across the ensemble remain sufficiently diverse in the early stages of learning and will then converge to their true values as the error of the individual networks decreases.
In addition to model estimation and unlike many other model-based approaches, we also learn the reward function, following the same design as classical MBRL algorithms (Sutton, 1991). But in POUM, we use a deterministic model (also parameterized by a DNN) for the reward function to simplify the policy evaluation.
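A minimal training step for the bootstrapped dynamics ensemble and the reward model could look as follows. This is an illustrative PyTorch sketch; the two-layer networks, optimizer choices, and all names are our own assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

def train_models(dyn_models, dyn_opts, reward_model, reward_opt, boot_batches, full_batch):
    # Each dynamics model k minimises the L2 one-step loss (Eq. 4) on its own bootstrapped batch.
    for f_k, opt_k, (s, a, s_next) in zip(dyn_models, dyn_opts, boot_batches):
        loss = ((s_next - f_k(torch.cat([s, a], dim=-1))) ** 2).sum(dim=-1).mean()
        opt_k.zero_grad(); loss.backward(); opt_k.step()
    # The deterministic reward model is fit on the shared dataset D.
    s, a, r = full_batch
    r_loss = ((reward_model(torch.cat([s, a], dim=-1)).squeeze(-1) - r) ** 2).mean()
    reward_opt.zero_grad(); r_loss.backward(); reward_opt.step()

state_dim, act_dim, B = 3, 1, 5
dyn_models = [make_mlp(state_dim + act_dim, state_dim) for _ in range(B)]
dyn_opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in dyn_models]
reward_model = make_mlp(state_dim + act_dim, 1)
reward_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
boot_batches = [(torch.randn(32, 3), torch.randn(32, 1), torch.randn(32, 3)) for _ in range(B)]
train_models(dyn_models, dyn_opts, reward_model, reward_opt, boot_batches,
             (torch.randn(32, 3), torch.randn(32, 1), torch.randn(32)))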
3.3.2 BOOTSTRAP ROLLOUT
In this section, we describe how to propagate the uncertainty estimates from the dynamics model when evaluating a policy π. We represent our policy πθ : S → A as a neural network parameterized by θ. Note that we choose to represent our policy as deterministic. We argue that while all estimation models, including those of the dynamics and of the value function, need to be stochastic (i.e. uncertainty-aware), the policy does not need to be. The policy is not an estimator, and a deterministic policy simply means that the agent is consistent when taking an action, no matter how uncertain it may be about the world.
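For concreteness, a deterministic policy network of this kind could look as follows; the hidden width and the tanh squashing to a bounded action range are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Deterministic policy pi_theta: S -> A with actions bounded by max_action."""
    def __init__(self, state_dim: int, action_dim: int, max_action: float = 1.0, hidden: int = 256):
        super().__init__()
        self.max_action = max_action
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.max_action * self.net(state)
```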
Given a deterministic policy πθ and an initial state s0 ∈ D, we can estimate the distribution of V(π)(s0) by simulating πθ through each bootstrapped dynamics model. Since each bootstrap model is an independent approximator of the dynamics function, expanding the value function via these dynamics approximators yields independent estimates of that value function. Those separate and independent trajectories collectively form an ensemble estimator of V.
In practice, we sample these trajectories with a finite horizon H < T. Expanding the value function estimate over a very long horizon remains challenging for a few reasons. First, training DNNs becomes harder as the depth of the unrolled computation increases. Second, despite our best effort to control the uncertainty, we have no guarantee that our uncertainty modeling is perfectly calibrated, which may become problematic if the planning horizon is too large. Finally, policy learning time is proportional to the rollout horizon.
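The rollout can be sketched as follows: each ensemble member produces its own H-step imagined trajectory under the current policy, and the resulting discounted returns form the empirical distribution of V̂(π)(s0). The function signature below is a sketch built around the hypothetical modules above; in particular, the reward model is assumed to map a state-action pair to a scalar tensor.

```python
import torch

def rollout_values(models, reward_model, policy, s0, horizon: int, gamma: float) -> torch.Tensor:
    """Return one H-step discounted return per bootstrapped dynamics model (shape [B])."""
    values = []
    for f_hat in models:                       # each member yields an independent estimate
        s, value = s0, torch.zeros(())
        for t in range(horizon):
            a = policy(s)
            value = value + (gamma ** t) * reward_model(s, a).squeeze()
            s = f_hat(s, a)                    # imagined next state under this member
        values.append(value)
    return torch.stack(values)
```

The mean and standard deviation of the returned vector give µ(V̂(π)(s0)) and σ(V̂(π)(s0)) in Equation 2.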
3.4 POLICY GRADIENT
Based on Equation 2, the objective to be optimized with the policy gradient method is:
argmax_θ J(θ) = E_{s ∼ S}[U_θ(s)],    (5)
where Uθ(s) = µ(V̂θ(s)) + c × σ(V̂θ(s)). Using the ensemble method and the rollout technique described above, we can naturally compute µ(V̂θ(s)) and σ(V̂θ(s)) for a given policy πθ and a given state s. Therefore, the policy πθ can be updated using SGD or a variant of it.
In terms of implementation, it is worth noting that the aforementioned rollout method also allows U(θ) in Equation 5 to be expressed as a single computational graph in θ. This makes it straightforward to compute the policy gradient ∇θUθ(s) using automatic differentiation, a feature provided by default in most popular deep learning toolkits.
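A sketch of the resulting policy update is shown below, reusing the hypothetical rollout_values helper from the previous section. Minimizing the negative utility with a standard optimizer backpropagates the gradient through the learned dynamics and reward models into the policy parameters; the batch of start states and the optimizer are assumptions of this sketch, not details taken from the paper.

```python
import torch

def policy_update(policy, policy_optimizer, models, reward_model, start_states,
                  horizon: int, gamma: float, c: float) -> float:
    """One gradient step on J(theta) = E_s[ mean(V_hat) + c * std(V_hat) ] (Equation 5)."""
    utilities = []
    for s0 in start_states:
        v = rollout_values(models, reward_model, policy, s0, horizon, gamma)
        utilities.append(v.mean() + c * v.std())
    loss = -torch.stack(utilities).mean()      # gradient ascent on the utility
    policy_optimizer.zero_grad()
    loss.backward()                            # backpropagation through the learned models
    policy_optimizer.step()
    return -loss.item()
```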
4 ALGORITHM SUMMARY
Algorithm 1 Policy Optimization with Uncertainty-aware Model (POUM)
1: Initialize a training dataset D, bootstrapped datasets {D_i}_{i=1}^B, parameterized bootstrapped models {f̂_i}_{i=1}^B, a parameterized reward model r̂_φ, and a parameterized deterministic policy π_θ.
2: while not done do
3:     • Step in the environment, collect the new data point (s, a, s′, r) and push it into D,
4:     • Sample from D and push data into the bootstrapped replay buffers: for the i-th member of the ensemble, add z_i ∼ Poisson(1) copies of the data point to D_i,
5:     • Update {f̂_i}_{i=1}^B on {D_i}_{i=1}^B and r̂_φ on D using SGD,
6:     • Evaluate V̂_θ(s) and U_θ(s) by simulating through the learned models {f̂_i}_{i=1}^B and r̂_φ,
7:     • Update π_θ using SGD, with the policy gradient backpropagated from E_s[U_θ(s)] through the learned models.
8: end while
We summarize our framework POUM in Algorithm 1, and in the remainder of this section we highlight some important details of our implementation.
4.1 DYNAMICS MODEL LEARNING WITH ONLINE BOOTSTRAP
As discussed in Section 1, there have been several prior attempts to learn uncertainty-aware dynamics models, such as GPs, Bayesian neural networks (NNs), dropout NNs and ensembles of NNs. In this work, however, we employ an ensemble of bootstrapped DNNs. Bootstrapping is a generic, principled statistical approach to uncertainty quantification. Furthermore, as explained in Section 3.4, this ensembling approach also gives rise to easy gradient computation.
4.1.1 ONLINE BOOTSTRAP FOR TRAINING DATA
Bootstrap learning is often studied in the context of batch learning. However, since our agent updates its empirical model F̂ after each physical step for the best possible sample efficiency, we follow an online bootstrapping method based on sampling from a Poisson distribution (Oza, 2005; Qin et al., 2013). This is a very effective online approximation to batch bootstrapping and follows from a simple observation: bootstrapping a dataset D with n examples means sampling n examples from D with replacement. In detail, each example i will appear zi times in the bootstrapped sample, where zi is a random variable with distribution Binom(n, 1/n), because during resampling the i-th example has n chances to be picked, each with probability 1/n. This Binom(n, 1/n) distribution converges to Poisson(1) as n → ∞. Therefore, for each new data point, this method adds zk copies of that data point to the bootstrapped dataset Dk, where zk is sampled from Poisson(1).
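A minimal sketch of this online bootstrap rule, assuming each bootstrapped buffer is a plain Python list, is given below.

```python
import numpy as np

rng = np.random.default_rng()

def push_online_bootstrap(transition, buffers) -> None:
    """Add z_k ~ Poisson(1) copies of the new transition to each bootstrapped buffer D_k."""
    for buffer in buffers:
        for _ in range(rng.poisson(1.0)):
            buffer.append(transition)

buffers = [[] for _ in range(5)]                       # B = 5 bootstrapped datasets
push_online_bootstrap(("s", "a", "s_next", 1.0), buffers)
```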
Online off-policy learning. Except for the initialization step (we may initialize the models with batch training from off-policy data), our model learning is an online learning process. For each time
step, the learning cost stays constant and does not grow over time, which is required for lifelong learning. Despite being online, the learning is off-policy because we maintain a bootstrapped replay buffer for each model in the ensemble. For each model update, we sample a minibatch of training data from the respective replay buffer. In addition, as mentioned, the models can also be initialized from existing data even before the policy optimization starts.
4.1.2 LINEARLY-WEIGHTED SAMPLING FROM BOOTSTRAPPED TRAINING DATA
Since our replay buffers are accumulated online, a naive uniform sampling strategy would lead to early data being sampled more frequently than later data. We thus propose a linearly weighted random sampling scheme to mitigate this early-data bias. In this scheme, the i-th example is sampled with weight i, i.e. fresher examples receive higher weights in each online update step. Despite its simplicity, this scheme plays an important role in removing the data bias, as shown in Appendix A.1.
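The linearly weighted sampling scheme can be implemented in a few lines; the minibatch size in the sketch below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng()

def sample_linearly_weighted(buffer, batch_size: int):
    """Sample transitions with probability proportional to their (1-based) position in the buffer."""
    n = len(buffer)
    weights = np.arange(1, n + 1, dtype=np.float64)    # weight i for the i-th (oldest to newest) example
    idx = rng.choice(n, size=batch_size, replace=True, p=weights / weights.sum())
    return [buffer[i] for i in idx]
```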
5 EXPERIMENT
Our experiments are designed to 1) compare our POUM framework with other SoTA approaches and 2) investigate the impact of the risk factor in our utility function on guiding agents.
5.1 COMPARISON TO BASELINE ALGORITHMS
Experimental Design. We evaluate the performance of our POUM algorithm on four continuous control tasks: one classic control task (Pendulum-v0) and three tasks in the MuJoCo simulator (Todorov et al., 2012) from OpenAI Gym (Brockman et al., 2016). It is important to note that we keep the default configurations provided by OpenAI Gym (see Appendix A.2.1) and do not assume access to the true reward function, unlike some recent works in model-based reinforcement learning (Chua et al., 2018; Clavera et al., 2018; Kurutach et al., 2018).
For the baselines, we compare POUM to the following SoTA algorithms designed for continuous control: MBPO (Janner et al., 2019), DDPG (Lillicrap et al., 2015), SAC (Haarnoja et al., 2018), STEVE (Buckman et al., 2018). For each one of them, we evaluate the learned policy after every episode. The evaluation is done by running the current policy on 20 random episodes and then computing the average return over them.
Results. Figure 5.1 shows that POUM is more sample efficient than the baseline algorithms across a wide range of environments. Furthermore, its asymptotic performance is competitive with, or even better than, that of the model-free counterparts. Note that there are flat segments at the beginning of the evaluation curves for some algorithms and environments; this is because these algorithms perform random exploration at the beginning of training (as their default configuration) to initialize the dynamics models. For the simpler environments (Pendulum-v0, Reacher-v2, Pusher-v2), our POUM achieves good performance without such dynamics initialization.¹
However, Figure 5.1 also shows that the performance of POUM in more complex environments like HalfCheetah-v2 is sensitive to random seeds. We hypothesize that this is due to the impact of the risk-preference value on the policy optimization framework and to our strategy of aggressive online learning with linearly weighted batch sampling. The ablation study below examines these hypotheses.
5.2 ABLATION STUDY
To obtain a better understanding of the role of the subjective risk preference in the utility function, we conduct an ablation study on the parameter c that controls this risk factor (Equation 2) and make the following observations.
As illustrated in Figure 5.2, the first observation is that a risk preference of exactly zero is not a good choice. This behavior is expected because with zero risk no uncertainty is quantified at all.
¹MBPO failed to attain good performance on Reacher-v2 and Pusher-v2, despite our best effort to reproduce the results based on the authors' official repository.
As a result, the agents cannot learn to model the dynamics well, nor optimize an efficient policy. The second observation is that POUM performs best with risk c = −1, followed by c = −2 and c = −3, and degrades as the risk factor moves further in either direction. This phenomenon arises because the risk factor controls the scale of the standard deviation term, and hence how much the variance influences the objective. A risk factor that is too large in magnitude therefore implies too much influence of the variance, which is unfavorable in many cases because it induces agents to explore more aggressively and hence suffer more potential failures, while not exploiting the current, safer experiences. Finally and interestingly, the two directions do not behave the same as the scale of the risk changes. In particular, POUM obtains poor results with positive risk-preference values and cannot learn at all with large positive values. This is because, in the current work, we use a fixed subjective risk-preference value; at the beginning of the learning process the dynamics models are unstable and have high variance, so with large positive risk-preference values the policy learns erratic behaviors. In contrast, with a negative risk preference our utility function acts as a lower confidence bound that keeps the policy in a safe region. The figure indicates that with a more negative risk-preference value the learning curve is more stable. However, a lower risk-preference value also means less exploration, and results in a lower final reward.
6 DISCUSSION AND CONCLUSION
In summary, this paper proposed a new approach to MBRL in which we developed a novel objective function that balances the mean and the variance of the model-induced estimate of the value function. Our experiments suggest that our POUM algorithm not only achieves the asymptotic performance of model-free methods on challenging continuous control tasks and is competitive with other SoTA approaches, but also does so with far fewer samples. We further demonstrate that the model bias issue in model-based RL can be dealt with effectively through principled and careful uncertainty quantification, by guiding agents with a subjective risk factor. Unlike other methods, POUM quantifies and controls uncertainty with a single novel uncertainty-aware objective function and without any complex designs for other components, making it simple yet efficient.
Nonetheless, we acknowledge that our current implementation of POUM still has several limitations, such as high variance in the empirical performance, which depends on many hyper-parameters (planning horizon, risk sensitivity, and all hyper-parameters associated with neural network training) and even on random seeds. It is, however, worth noting that these traits are not unique to our method. In spite of this limitation, the results indicate that, if implemented properly, MBRL methods can be both sample efficient and achieve better asymptotic performance than their MFRL counterparts on challenging tasks. In addition, by explicitly representing both the dynamics model and the policy, POUM enables transfer learning, not just for the world (dynamics) model but also for the policy.
To sum up, we identify that sample efficiency, off-policy learning, and transferability are three necessary, albeit not sufficient, properties for real-world reinforcement learning. We claim that our method meets these criteria and hence is a step towards real-world reinforcement learning.
A APPENDIX
A.1 WHY LINEARLY WEIGHTED RANDOM SAMPLING IS A FAIRER SAMPLING SCHEME
Consider the following online learning process: at each time step, we randomly sample an example from the accumulating dataset. Suppose that at time t, the i-th example is sampled with weight w(t, i). Note that at each time t, we have a total of t examples in the dataset. Then the probability of that example being sampled is
w(t, i) / Σ_{k=1}^{t} w(t, k).
If we use uniform random sampling, then the expected number of times the i-th example gets selected up to time t is
C_i^t = Σ_{k=i}^{t} 1/k.
Hence, for all t and for i < j, C_i^t is larger than C_j^t by Σ_{k=i}^{j−1} 1/k. Now, if we use a linearly weighted random sampling scheme, in which w(t, i) = i, then the expected number of times the i-th example gets selected up to time t is
C_i^t = Σ_{k=i}^{t} 2i / (k(k+1)) = 2 Σ_{k=i}^{t} (i/k − i/(k+1)) = 2 − 2i/(t+1).
We can see that at time t, C_i^t is still larger than C_j^t for i < j, but by weighting recent examples more in each online update step, we reduce the overall early-data bias.
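This derivation can be checked numerically with a short simulation (a sanity check only; the horizon t and the number of trials below are arbitrary choices).

```python
import numpy as np

def expected_counts(t: int, weighted: bool, trials: int = 10000, seed: int = 0) -> np.ndarray:
    """Monte Carlo estimate of how often each example i = 1..t is selected up to time t."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(t)
    for _ in range(trials):
        for k in range(1, t + 1):                      # at time k the dataset holds k examples
            w = np.arange(1, k + 1, dtype=float) if weighted else np.ones(k)
            counts[rng.choice(k, p=w / w.sum())] += 1
    return counts / trials

print(expected_counts(10, weighted=False))  # close to sum_{k=i}^{t} 1/k for each i
print(expected_counts(10, weighted=True))   # close to 2 - 2i/(t+1) for each i
```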
A.2 EXPERIMENTAL SETTINGS
A.2.1 ENVIRONMENTS
Table 1: Description of the environment used for testing
Environment      State dimension   Action dimension   Task horizon
Reacher-v2       11                2                  50
Pusher-v2        23                27                 100
Pendulum-v0      3                 1                  200
HalfCheetah-v2   23                6                  1000
A.2.2 NETWORK ARCHITECTURE
A.2.3 HYPER-PARAMETER SETTINGS | 1. What is the focus of the paper, and how does it contribute to the field of Model Based Reinforcement Learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its experimental results and comparison to other methods in the literature?
3. How does the reviewer assess the clarity and quality of the paper's content, including the discussion of the environment model, the convergence analysis, and the algorithm updates?
4. Are there any concerns or questions regarding the paper's methodology, such as the dependence on random seeds, the deterministic policy, and the use of SGD?
5. Are there any suggestions for future work or potential improvements to the proposed approach, such as combining it with other methods or addressing specific limitations? | Review | Review
I enjoyed this paper overall, and I think the idea is a good one. However there remain significant issues with the paper that preclude me giving a good score. Firstly, there is almost no discussion of the environment model. Anyone who has worked on Model Based RL will tell you that the details here are crucial. This deserves a full discussion, and a comparison to other methods in the literature.
Next, the experimental results really aren't convincing. The dependence on random seeds is worrying, and isn't as common in model free algorithms as you claim, which are mostly robust to seeds (the good ones at least). The fact that the best policy is risk *averse* is very strange, since these are estimates combining both the epistemic and aleatoric uncertainty (which is somewhat unfortunate), which means being risk averse would lead the agent to not explore. That is very worrying and makes me think that something very strange is going on with the models. In fact since the policy is deterministic and the environment / rewards are practically speaking deterministic, the uncertainty here is actually mostly epistemic and so a c < 0 means the agent is disincentivized from exploring.
There should be more discussion about the fact that the policy is deterministic. Is this merely to make estimating V^pi easier?
"Next, we provide a convergence convergence analysis and show that maximizing this utility function
U(π) is equivalent to maximizing the unknown value function V (π)."
Word convergence appears twice in a row, but more importantly this is totally missing! Where is the analysis?
In the algorithm you write:
"Update {fb} and rˆφ using SGD"
But on what data? Presumably sampled from D but this isn't mentioned.
Is it the case that the policy is updated *only* using the model based rollouts? I.e., the reward signal is never used in the policy gradient but only used to train the models? If so, this seems quite fragile and I would like to see a comparison of different approaches here.
Table 2 is unreadable and needs to be explained.
It would appear that you are missing a reference to the very relevant UBE paper, which also deals explicitly with the uncertainty of the value function estimates: https://arxiv.org/abs/1709.05380
In fact I would be curious to see any way that these two approaches could be combined (though that would be follow up work). |
ICLR | Title
Policy Optimization In the Face of Uncertainty
Abstract
Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertainty-aware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can be represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.
1 INTRODUCTION
Popular reinforcement learning (RL) algorithms are divided into two main paradigms: model-free (MFRL) and model-based (MBRL). While achieving good asymptotic performance in many high-dimensional problems (Mnih et al., 2015; Silver et al., 2017; Schulman et al., 2017; Hessel et al., 2018; Espeholt et al., 2018), MFRL methods suffer from high sample complexity since they learn state/state-action values only from rewards and do not explicitly exploit the rich information underlying the transition dynamics data. On the contrary, MBRL approaches, by modeling the transition dynamics that are in turn used for planning without having to interact frequently with real systems, are known to be sample efficient and thus more practical (Deisenroth et al., 2013; Finn et al., 2016; Ebert et al., 2018; Sutton & Barto, 2018; Kaiser et al., 2019).
Current MBRL methods, however, still have limitations because the accuracy of the learned dynamics model is usually not satisfactory, especially in complex environments (Zhang et al., 2018; Lowrey et al., 2018). The model error and its compounding effect during planning, i.e. a small bias in the model can lead to a highly erroneous value function estimate and a strongly biased suboptimal policy, make MBRL less competitive than MFRL in terms of asymptotic performance on many non-trivial tasks. Numerous attempts have been made to tackle this model bias problem, such as using Gaussian Processes (GPs) (Deisenroth & Rasmussen, 2011; Gal & Ghahramani, 2016), Bayesian neural networks (Gal et al., 2016; Depeweg et al., 2016a; Kamthe & Deisenroth, 2017), and ensembling (Kurutach et al., 2018; Clavera et al., 2018), but none of them has been entirely successful.
Another limitation of many existing MBRL methods is that they rely on the model predictive control (MPC) framework (Garcia et al., 1989). While commonly used, MPC has several drawbacks (Atkeson & Schaal, 1997; Thananjeyan et al., 2019). First, each step requires solving a high-dimensional optimization problem and is thus computationally prohibitive for applications requiring real-time or low-latency reaction, such as autonomous driving. Second, the policy is only defined implicitly through this optimization problem. Not being able to represent the policy explicitly makes it hard to transfer the learned policy to other tasks or to initialize agents with an existing better-than-random policy.
Contributions. To address those challenges of MBRL, we propose a new framework called Policy Optimization with Uncertainty-aware Model (POUM) that is able to optimize in the face of uncertainty. Our policy optimization is based on policy gradients, which have been widely adopted in MFRL (Lillicrap et al., 2015; Schulman et al., 2017; Haarnoja et al., 2018). In POUM, however, the objective is a utility function formulated around the uncertainty-aware dynamics model. This utility function takes into account both the mean and the variance of the value function estimate, which helps reduce model bias while effectively approximating the true objective, i.e. the value function of the policy. In the experiments, we demonstrate the advantages of POUM over state-of-the-art (SoTA) methods on various RL tasks, with training from scratch and all environments unaltered, and we investigate how much risk is tolerable in those tasks. Finally, POUM can be represented end-to-end as a single computation graph, which greatly facilitates training.
2 RELATED WORK
Traditional MBRL. Initial successes of MBRL in continuous control achieved promising results by learning control policies trained on models of local dynamics using linear parametric approximators (Abbeel et al., 2007; Levine & Koltun, 2013). Alternative methods such as Deisenroth & Rasmussen (2011); Levine & Koltun (2013) incorporated non-parametric probabilistic GPs to capture model uncertainty during policy planning and evaluation. While these methods enhance data efficiency in low-dimensional tasks, their applications in more challenging domains, such as environments involving non-contact dynamics and high-dimensional control, remain limited by the inflexibility of their temporally local structure and intractable inference time. Our approach, on the contrary, pushes the uncertainty modeling into the objective function and not anywhere else in the architecture. Moreover, because this objective is designed to propagate uncertainty all the way to the value function, it is versatile in capturing uncertainty. Finally, the fact that all core components are constructed with neural networks gives our solution more power in dealing with high-dimensional tasks, thus acquiring asymptotically high performance compared to MFRL methods while, at the same time, retaining data efficiency in those complex domains.
Deep Neural Networks (DNNs). Recently, there has been a revived interest in using DNNs to learn predictive models of environments from data, drawing inspiration from ideas in the early literature on MBRL, mainly because their large representational capacity makes them suitable function approximators for complex environments, especially those that involve images or videos (Ebert et al., 2018; Kaiser et al., 2019). However, additional care usually has to be taken to avoid model bias, a situation where the DNNs overfit in the early stages of learning, resulting in inaccurate models. For example, Depeweg et al. (2016b) used Bayesian DNNs to capture uncertainty in transition dynamics. In another approach, Nagabandi et al. (2017) combined a learned dynamics network with MPC to initialize the policy network and accelerate learning in model-free deep RL. Chua et al. (2018) extended this idea by introducing a bootstrapped ensemble of probabilistic DNNs to model the predictive uncertainty of the learned networks, demonstrating that a pure model-based approach can attain the asymptotic performance of MFRL counterparts. However, using MPC to define a policy leads to poor run-time execution and makes it hard to transfer the policy across tasks. On the contrary, our framework is much simpler in that we do not employ any extra mechanism to model the dynamics uncertainty inside the DNNs, which are already complicated with numerous architectural choices and hyperparameters; instead, we formulate a single new uncertainty-aware objective for end-to-end optimization.
Ensemble. Another group of works leveraged a learned ensemble of dynamics models to train a policy network. Kurutach et al. (2018) learned a stochastic policy via trust-region policy optimization, and Clavera et al. (2018) cast the policy gradient as a meta-learning adaptation step with respect to each member of the ensemble. Buckman et al. (2018) proposed an algorithm to learn a weighted combination of roll-outs of different horizon lengths, which dynamically interpolates between model-based and model-free learning based on the uncertainty in the model predictions. To our knowledge, this is the work closest to ours, as it also learns a reward function in addition to the dynamics function. However, none of the aforementioned works propagates the uncertainty all the way to the value function or uses the concept of a utility function to balance risk and return as in our model.
Finally, ensembles of DNNs also provide a straightforward technique to obtain reliable estimates of predictive uncertainty (Lakshminarayanan et al., 2017) and have been integrated with bootstrapping to guide exploration in MFRL (Osband et al., 2016; Janner et al., 2019). While many of the approaches mentioned in this section employ bootstrapping to train an ensemble of models, we note that their implementations reconstruct the bootstrap datasets at every training iteration, which effectively trains on every single data sample and thus diminishes the advantage in uncertainty quantification achieved through bootstrapping. Besides the novel objective formulation, our model is different in that, to maintain online bootstrapped datasets across the ensemble, it adds each incoming data sample to a dataset according to a Poisson probability distribution (Park et al., 2007; Qin et al., 2013), thereby keeping those datasets asymptotically consistent.
3 UNCERTAINTY-AWARE MODEL-BASED POLICY OPTIMIZATION
3.1 BACKGROUND
Consider a discrete-time Markov Decision Process (MDP) defined by a tuple M = {S, A, f, r, γ}, in which S is a state space, A is an action space, f : S × A → S is a deterministic (or probabilistic) transition function, r : S × A → R is a deterministic reward function, and γ ∈ (0, 1) is a discount factor. We define the return as the sum of the rewards r(s_t, a_t) = r(s_t, π(s_t)) for t = 0, . . . , T over the whole trajectory (s_0, a_0, ..., s_T, a_T) induced by a policy π : S → A and discounted by γ. Here T ∈ Z+ is the task horizon, which may take the value ∞ for non-episodic environments. The goal of RL is to find an optimal policy π* that maximizes the expected return
J(π) = E_{s_0 ∼ S}[V^π(s_0)],    (1)

where the value function is defined as V^π(s_0) = Σ_{t=0}^{T−1} γ^t r(s_t, π(s_t)), and the state transition is s_{t+1} = f(s_t, π(s_t)), with s_0 being randomly chosen from the distribution of s ∈ S. If the dynamics function f and the reward function r are given, solving Equation 1 can be done using the calculus of variations (Young, 2000) or policy gradients (Sutton et al., 2000) when the control function is parameterized or finite-dimensional.
1. What is the main contribution of the paper in model-based reinforcement learning?
2. How does the proposed method capture uncertainty in learning state value functions?
3. Can you explain the conversion from the objective function in Equation 2 to Equation 5?
4. How does the proposed method alleviate uncertainty in model predictions?
5. What is the objective function for learning the reward function r_phi?
6. Are there any concerns regarding the experimental results, such as the sufficiency of the experiments or the comparison with other methods?
7. Are there any writing issues in the paper, such as typos or unclear presentation? | Review | Review
Summary:
The main contribution of this work is introducing the uncertainty-aware value function prediction into model-based RL, which can be used to balance the risk and return empirically.
Methodology
This work uses a linear combination of the mean and standard deviation of value function to capture the uncertainty in learning state value function.
It is not clear how to convert the objective function from Eq 2 (expectation over the initial state) to Eq 5 (expectation over all states). Those two objectives are not equal.
It is not clear how the uncertainty in model prediction (dynamics and reward function) can be alleviated through the proposed method, as claimed in the introduction.
It seems the novelty part lies in considering the uncertainty of value function estimation.
How does this relate to solving the limitation of model predictive control?
What is the objective function for learning reward function r_\phi?
Experimental results:
The experiments are not sufficient to demonstrate the effectiveness of the proposed method.
It would be more convincing to compare the proposed method with a few more model-based approaches on more tasks. The results of MBPO are better than the proposed POUM in one of the two tasks. The performance of MBPO on Reacher-v2 and Pusher-v2 is missing?
Writing:
This paper has many typos and the presentation is not very clear.
- Section 4.1 "in Section 3.4,"
- Last paragraph in P3: convergence convergence analysis
- "shows that POUM has a sample efficiency compared" |
ICLR | Title
Policy Optimization In the Face of Uncertainty
Abstract
Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertaintyaware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.
N/A
Model-based reinforcement learning has the potential to be more sample efficient than model-free approaches. However, existing model-based methods are vulnerable to model bias, which leads to poor generalization and asymptotic performance compared to model-free counterparts. In this paper, we propose a novel policy optimization framework using an uncertainty-aware objective function to handle those issues. In this framework, the agent simultaneously learns an uncertaintyaware dynamics model and optimizes the policy according to these learned models. Under this framework, the objective function can represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models. In addition to being theoretically sound, our approach shows promising results on challenging continuous control benchmarks with competitive asymptotic performance and sample complexity compared to state-of-the-art baselines.
1 INTRODUCTION
Popular reinforcement learning (RL) algorithms are divided into two main paradigms: model-free (MFRL) and model-based (MBRL) types. While achieving good asymtotic performances in many high dimensional problems (Mnih et al., 2015; Silver et al., 2017; Schulman et al., 2017; Hessel et al., 2018; Espeholt et al., 2018), MFRL methods suffer from high sample complexity since they learn state/state-action values only from rewards and do not explicitly exploit the rich information underlying the transition dynamics data. On the contrary, MBRL approaches, by trying model the transition dynamics that are in turn used for planning without having to frequently interacting with real systems, are known to have sample efficiency and thus possess more practicability (Deisenroth et al., 2013; Finn et al., 2016; Ebert et al., 2018; Sutton & Barto, 2018; Kaiser et al., 2019).
Current MBRL methods, however, still have limitations because the accuracy of the learned dynamics model is usually not satisfied, especially in complex environments (Zhang et al., 2018; Lowrey et al., 2018). The model error and its compounding effect when planning, i.e. a small bias in the model can lead to a highly erroneous value function estimate and a strongly-biased suboptimal policy, make MBRL less competitive in terms of asymptotic performance than MFRL for many non-trivial tasks. Numerous attempts have been made to tackle with this model bias problem but none of them have been really successful, such as using Gaussian Process (GP) (Deisenroth & Rasmussen, 2011; Gal & Ghahramani, 2016), Bayesian Neural Networks (Gal et al., 2016; Depeweg et al., 2016a; Kamthe & Deisenroth, 2017), and Emsembling (Kurutach et al., 2018; Clavera et al., 2018).
Another limitation of many existing MBRL methods is that they rely on the model predictive control (MPC) framework (Garcia et al., 1989). While being commonly used, MPC has serveral drawbacks (Atkeson & Schaal, 1997; Thananjeyan et al., 2019). First, each step requires solving a highdimensional optimization problem and thus is computationally prohibitive for applications requiring either real-time or low-latency reaction such as autonomous driving. Second, the policy is only implicit via solving the mentioned optimization problem. Not being able to explicitly represent the policy makes it hard to transfer the learned policy to other tasks or to initialize agents with an existing better-than-random policy.
Contributions. To address those challenges of MBRL, we propose a new framework called Policy Optimization with Uncertainty-aware Model (POUM) that is able to optimize in the face of uncertainty. Our policy optimization is based on Policy Gradient, which has been widely adopted in MFRL (Lillicrap et al., 2015; Schulman et al., 2017; Haarnoja et al., 2018). However, in POUM, the objective function, a utility function, is formulated around the uncertainty-aware dynamics model. This utility function takes into account both the mean and the variance of the value function estimate. This helps reducing the model bias while effectively approximating true objective, which is the value function of the policy. For experiments, we demonstrate the advantages of POUM over state-of-the-art (SoTA) methods on various RL tasks given training from scratch and all the environments are unaltered, and also investigate on how much risk is tolerable in those tasks. And last, POUM can be represented end-to-end in a single computation graph, which greatly facilitates the training.
2 RELATED WORK
Traditional MBRL. Initial successes of MBRL in continuous control achieved promising results by learning control policies trained on models of local dynamics using linear parametric approximators (Abbeel et al., 2007; Levine & Koltun, 2013). Alternative methods such as Deisenroth & Rasmussen (2011); Levine & Koltun (2013) incorporated non-parametric probabilistic GPs to capture model uncertainty during policy planning and evaluation. While these methods enhance data efficiency in low-dimensional tasks, their applications in more challenging domains such as environments involving non-contact dynamics and high-dimensional control remain limited by the inflexibility of their temporally local structure and intractable inference time. Our approach, on the contrary, pushes the uncertainty modeling to the objective function and not anywhere else in the architecture. Plus, the fact that this objective is designed to propagate all the way to the value function makes it versatile in capturing uncertainty. What is more, all core components are constructed by neural networks gives our solution more power in dealing with high-dimensional tasks, thus acquiring asymptotically high performance compared to MFRL methods and, at the same time, retaining data efficiency in those complex domains.
Deep Neural Networks (DNNs). Recently, there has been a revived interest in using DNNs to learn predictive models of environments from data, drawing inspiration from ideas in the early literature on this MBRL field, mainly because the large representational capacity enables them as suitable function approximators for complex environments, especially that involve images or videos (Ebert et al., 2018; Kaiser et al., 2019). However, additional care has to be usually taken to avoid model bias, a situation where the DNNs overfit in the early stages of learning, resulting in inaccurate models. For example, Depeweg et al. (2016b) modeled a Bayesian type of DNNs to capture uncertainty in transition dynamics. In another approach, Nagabandi et al. (2017) combined a learned dynamics network with MPC to initialize the policy network to accelerate learning in model-free deep RL. Chua et al. (2018) extended this idea by introducing a bootstrapped ensemble of probabilistic DNNs to model predictive uncertainty of the learned networks and demonstrating that a pure model-based approach can attain the asymptotic performance of MFRL counterparts. However, the use of MPC to define a policy leads to poor run-time execution and hard to transfer policy across tasks. On the contrary, our framework is much simpler in that we do not employ any extra method to model the dynamics uncertainty into DNNs that are already complicated itself with numerous architectures and hyperparameters, but instead formulate a single, new uncertainty-aware objective for end-to-end optimization.
Ensemble. Another group of work leveraged the learned ensemble of dynamics models to train a policy network. Kurutach et al. (2018) learned a stochastic policy via trust-region policy optimization, and Clavera et al. (2018) casted the policy gradient as a meta-learning adaptation step with respect to each member of the ensemble. Buckman et al. (2018) proposed an algorithm to learn a weighted combination of roll-outs of different horizon lengths, which dynamically interpolates between model-based and model-free learning based on the uncertainty in the model predictions. To our knowledge, this is the closest work in aside from ours, which learns a reward function in addition to the dynamics function. But none of the aforementioned work propagates the uncertainty all the way to the value function and uses the concept of utility function to balance risk and return as in our model.
Finally, ensembles of DNNs also provide a straightforward technique to obtain reliable estimates of predictive uncertainty (Lakshminarayanan et al., 2017) and have been integrated with bootstrapping to guide exploration in MFRL (Osband et al., 2016; Janner et al., 2019). While many of the approaches mentioned in this section employ the bootstrap to train an ensemble of models, we note that their implementations reconstruct bootstrap datasets at every training iteration, which effectively trains on every single data sample and thus diminishes the advantage in uncertainty quantification achieved through bootstrapping. Beyond the novel objective formulation, our model differs in that, to maintain online bootstrapped datasets across the ensemble, it adds each incoming data sample to a dataset according to a Poisson probability distribution (Park et al., 2007; Qin et al., 2013), thereby guaranteeing that those datasets are asymptotically consistent.
3 UNCERTAINTY-AWARE MODEL-BASED POLICY OPTIMIZATION
3.1 BACKGROUND
Consider a discrete-time Markov Decision Process (MDP) defined by a tuple M = {S, A, f, r, γ}, in which S is a state space, A is an action space, f : S × A → S is a deterministic (or probabilistic) transition function, r : S × A → R is a deterministic reward function, and γ ∈ (0, 1) is a discount factor. We define the return as the sum of the rewards r(s_t, a_t) = r(s_t, π(s_t)) for t = 0, . . . , T along the whole trajectory (s_0, a_0, . . . , s_T, a_T) induced by a policy π : S → A, discounted by γ. Here T ∈ Z+ is the task horizon, which may take a value of ∞ for non-episodic environments. The goal of RL is to find an optimal policy π⋆ that maximizes the expected return
$$J(\pi) = \mathbb{E}_{s_0 \sim S}\left[ V^{\pi}(s_0) \right] \qquad (1)$$
where the value function is defined as $V^{\pi}(s_0) = \sum_{t=0}^{T-1} \gamma^{t}\, r(s_t, \pi(s_t))$, and the state transition is $s_{t+1} = f(s_t, \pi(s_t))$, with $s_0$ being randomly chosen from the distribution of $s \in S$. Then, if the dynamics function $f$ and the reward function $r$ are given, solving Equation 1 can be done using the Calculus of Variations (Young, 2000) or using Policy Gradient (Sutton et al., 2000) when the control function is parameterized or is finite dimensional.
In RL, however, f and r are often unknown and hence Equation 1 becomes a black-box optimization problem with an unknown objective function. Following the Bayesian approach commonly used in the black-box optimization literature (Shahriari et al., 2015), we propose to solve this problem by iteratively learning a probabilistic estimate V̂ of V from data and optimizing the policy according to this approximate model, as detailed in the next section.
3.2 FORMULATION OF UNCERTAINTY-AWARE OPTIMIZATION OBJECTIVE
It is worth noting that any unbiased method would model V̂ (π) as a probabilistic estimate, i.e. V̂ (π) would be a distribution (as opposed to a point estimate) for a given policy π. Optimizing a stochastic objective is, however, not well-defined. Our solution is to transform V̂ into a deterministic utility function that reflects a subjective measure balancing the risk and return. Following Markowitz (1952); Sato et al. (2001); Garcıa & Fernández (2015), we propose a risk-sensitive objective criterion using a linear combination of the mean and the standard deviation of V̂ (π). Formally stated, our objective criterion, which we also call the utility function, now becomes
$$U(\pi) = \mathbb{E}_{s_0 \sim S}\left[ \mu\!\left( \hat{V}(\pi)(s_0) \right) + c \cdot \sigma\!\left( \hat{V}(\pi)(s_0) \right) \right], \qquad (2)$$
where µ and σ are respectively the mean and the standard deviation of V̂(π)(s0), and c is a constant that represents the subjective risk preference of the learning agent. A positive risk preference implies that the agent is adventurous, while a negative risk preference indicates that the agent follows a safe exploration strategy. To the best of our knowledge, this uncertainty-aware model-based objective function has not been used in the RL literature.
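As a concrete illustration of Equation 2, the utility is simply a statistic over the B per-ensemble value estimates. The sketch below uses hypothetical names (`value_estimates`, `risk_c`) and is not the authors' implementation:

```python
import numpy as np

def utility(value_estimates: np.ndarray, risk_c: float) -> float:
    """Risk-sensitive utility U = mean + c * std over an ensemble of value estimates.

    value_estimates: shape (B,), one Monte-Carlo value estimate per bootstrapped model.
    risk_c: subjective risk preference; c < 0 penalizes uncertainty (a lower confidence
            bound), c > 0 rewards it (optimism in the face of uncertainty).
    """
    return float(value_estimates.mean() + risk_c * value_estimates.std())

# Example: with c = -1 the agent prefers the policy whose value estimates agree more.
print(utility(np.array([10.0, 11.0, 9.0]), risk_c=-1.0))   # ~9.18
print(utility(np.array([14.0, 2.0, 14.0]), risk_c=-1.0))   # ~4.34
```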
3.3 EMPIRICAL ESTIMATE OF VALUE FUNCTION
Section 3.2 provides a general framework for policy optimization under uncertainty, assuming the availability of the estimation model V̂ (π) of the true value function V (π). In this section, we
describe how to estimate V̂ (π) with a model-based approach. The main idea is to approximate the functions {f, r} with probabilistic parametric models {f̂ , r̂} and fully propagate the estimated uncertainty when planning under each policy π from an initial state s0. The value function estimate V̂ can be formulated as
$$\hat{V}(\pi)(s_0) = \sum_{t=0}^{T-1} \gamma^{t}\, \hat{r}\left( \hat{s}_t, \pi(\hat{s}_t) \right), \qquad (3)$$
where ŝ0 = s0 and ŝt+1 = f̂(ŝt, π(ŝt)) for t = 0, . . . , T − 1. Next, we describe how to efficiently model {f, r} with well-calibrated uncertainty and a rollout technique that allows the uncertainty to be faithfully propagated into V̂ (π).
3.3.1 BOOTSTRAP SETUP FOR MODEL LEARNING
Following the traditional bootstrap methodology, the empirical model function $\hat{f}$ is represented as an ensemble $\{\hat{f}_{\phi_k}(s_t, a_t) \rightarrow s_{t+1}\}_{k=1}^{B}$. For simplicity of implementation, we model each bootstrap replica as deterministic and rely on the ensemble as the sole mechanism for quantifying and propagating uncertainty. Each bootstrapped model $\hat{f}_{\phi_k}$, which is parameterized by $\phi_k$, learns to minimize the L2 one-step prediction loss over the respective bootstrapped dataset $D_k$:
$$\min_{\phi_k,\; k=1,\dots,B} \; \mathbb{E}_{(s_t, a_t, s_{t+1}) \sim D_k} \left\| s_{t+1} - \hat{f}_{\phi_k}(s_t, a_t) \right\|_2^2. \qquad (4)$$
The training dataset D, from which the bootstrapped datasets $\{D_k\}_{k=1}^{B}$ are sampled, stores the transitions the agent has experienced. Since each model observes its own subset of the real data samples, the predictions across the ensemble remain sufficiently diverse in the early stages of learning and then converge to the true values as the error of the individual networks decreases.
In addition to model estimation, and unlike many other model-based approaches, we also learn the reward function, following the design of classical MBRL algorithms (Sutton, 1991). In POUM, we use a deterministic model (also parameterized by a DNN) for the reward function to simplify policy evaluation.
3.3.2 BOOTSTRAP ROLLOUT
In this section, we describe how to propagate the uncertainty-aware estimates from the dynamics model to evaluate a policy π. We represent our policy πθ : S → A as a neural network parameterized by θ. Note that we choose to represent our policy as deterministic. We argue that while all estimation models, including those of the dynamics and of the value function, need to be stochastic (i.e. uncertainty-aware), the policy does not need to be. The policy is not an estimator, and a deterministic policy simply means that the agent is consistent when taking an action, no matter how uncertain its knowledge of the world may be.
Given a deterministic policy πθ and an initial state s0 ∈ D, we can estimate the distribution of V(π)(s0) by simulating πθ through each bootstrapped dynamics model. Since each bootstrap model is an independent approximator of the dynamics function, expanding the value function via these dynamics approximators yields independent estimates of that value function. Those separate and independent trajectories collectively form an ensemble estimator of V.
In practice, we sample these trajectories with a finite horizon H < T. Expanding the value function estimate over a very long horizon remains challenging for a few reasons. First, DNN training becomes harder as the effective depth increases. Second, despite our best effort to control the uncertainty, we have no guarantee that our uncertainty modeling is perfectly calibrated, which may be problematic if the planning horizon is too large. Finally, policy learning time is proportional to the rollout horizon.
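The rollout described above can be summarized in a few lines. The sketch below is illustrative only (the `policy`, `reward_model`, and `dynamics_models` callables stand in for the learned networks); each bootstrap member contributes one discounted-return estimate, and the collection forms the empirical distribution of V̂(π)(s0):

```python
import numpy as np

def ensemble_value_estimates(s0, policy, dynamics_models, reward_model,
                             horizon=20, gamma=0.99):
    """One discounted-return estimate per bootstrapped dynamics model.

    dynamics_models: list of callables f_k(s, a) -> next state (the ensemble).
    reward_model:    callable r(s, a) -> scalar reward (single deterministic model).
    policy:          callable pi(s) -> action (deterministic).
    """
    estimates = []
    for f_k in dynamics_models:
        s, value = s0, 0.0
        for t in range(horizon):
            a = policy(s)
            value += (gamma ** t) * reward_model(s, a)
            s = f_k(s, a)            # propagate this member's belief about the dynamics
        estimates.append(value)
    return np.array(estimates)        # feed into the utility: mean + c * std
```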
3.4 POLICY GRADIENT
Based on Equation 2, the objective optimized by the policy gradient method is:
$$\arg\max_{\theta} \; J(\theta) = \mathbb{E}_{s \sim S}\left[ U_{\theta}(s) \right], \qquad (5)$$
where $U_{\theta}(s) = \mu(\hat{V}_{\theta}(s)) + c \cdot \sigma(\hat{V}_{\theta}(s))$. Using the ensemble method and the rollout technique described above, we can naturally compute $\mu(\hat{V}_{\theta}(s))$ and $\sigma(\hat{V}_{\theta}(s))$ for a given policy πθ and a given state s. The policy πθ can therefore be updated using SGD or one of its variants.
Importantly, in terms of implementation, the aforementioned rollout method also allows U(θ) in Equation 5 to be expressed as a single computational graph in θ. This makes it straightforward to compute the policy gradient ∇θUθ(s) using automatic differentiation, a feature provided by default in most popular deep learning toolkits.
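Because the bootstrap rollout is a composition of differentiable operations in θ, the utility of Equation 5 can indeed be written as one computational graph and differentiated directly. Below is a minimal JAX sketch with toy linear-tanh stand-ins for the learned policy, reward, and dynamics models (all names and model forms are hypothetical, not the authors' implementation):

```python
import jax
import jax.numpy as jnp

def utility_of_policy(theta, s0, dynamics_params, reward_params, c=-1.0,
                      horizon=10, gamma=0.99):
    # Toy stand-ins for the learned networks: a linear-tanh policy, a quadratic
    # reward model, and one linear-tanh dynamics model per ensemble member.
    policy = lambda s: jnp.tanh(theta @ s)
    reward = lambda s, a: -(s @ reward_params @ s) - 0.1 * jnp.sum(a ** 2)
    values = []
    for W_k in dynamics_params:          # one rollout per bootstrapped model
        s, v = s0, 0.0
        for t in range(horizon):
            a = policy(s)
            v = v + (gamma ** t) * reward(s, a)
            s = jnp.tanh(W_k @ jnp.concatenate([s, a]))
        values.append(v)
    values = jnp.stack(values)
    return jnp.mean(values) + c * jnp.std(values)

# The policy gradient of the utility is obtained via automatic differentiation.
grad_fn = jax.grad(utility_of_policy)

# Usage with state dim 3, action dim 2, and an ensemble of 4 dynamics models:
theta = jnp.zeros((2, 3))
s0 = jnp.ones(3)
dyn = [0.1 * jax.random.normal(jax.random.PRNGKey(k), (3, 5)) for k in range(4)]
g = grad_fn(theta, s0, dyn, jnp.eye(3))
```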
4 ALGORITHM SUMMARY
Algorithm 1 Policy Optimization with Uncertainty-aware Model (POUM)
1: Initialize a training dataset D, bootstrapped datasets {D_i}_{i=1..B}, parameterized bootstrapped models {f̂_i}_{i=1..B}, a parameterized reward model r̂_φ, and a parameterized deterministic policy π_θ.
2: while not done do
3:   • Step in the environment, collect a new data point (s, a, s′, r) and push it into D.
4:   • Sample from D and push data into the bootstrapped replay buffers: for the i-th member of the ensemble, add z_i ∼ Poisson(1) copies of that data point to D_i.
5:   • Update {f̂_i}_{i=1..B} on D_i and r̂_φ on D using SGD.
6:   • Evaluate V̂_θ(s) and U_θ(s) by simulating through the learned models {f̂_i}_{i=1..B} and r̂_φ.
7:   • Update π_θ using SGD, with the policy gradient backpropagated on E_s[U_θ(s)] through the learned models.
8: end while
We summarize our framework POUM in Algorithm 1 and later in this section, we will also highlight some important details in our implementation.
4.1 DYNAMICS MODEL LEARNING WITH ONLINE BOOTSTRAP
As discussed in Section 1, there have been several prior attempts to learn uncertainty-aware dynamics models, such as GPs, Bayesian neural networks (NNs), dropout NNs, and ensembles of NNs. In this work, however, we employ an ensemble of bootstrapped DNNs. The bootstrap is a generic, principled statistical approach for uncertainty quantification. Furthermore, as explained in Section 3.4, this ensembling approach also allows for easy gradient computation.
4.1.1 ONLINE BOOTSTRAP FOR TRAINING DATA
Bootstrap learning is often studied in the context of batch learning. However, since our agent updates its empirical model f̂ after each physical step for the best possible sample efficiency, we follow an online bootstrapping method based on sampling from a Poisson distribution (Oza, 2005; Qin et al., 2013). This is a very effective online approximation to batch bootstrapping. Bootstrapping a dataset D with n examples means sampling n examples from D with replacement; in detail, each example i appears z_i times in the bootstrapped sample, where z_i is a random variable distributed as Binom(n, 1/n), because during resampling the i-th example has n chances to be picked, each with probability 1/n. This Binom(n, 1/n) distribution converges to Poisson(1) as n → ∞. Therefore, for each new data point, the online method adds z_k copies of that data point to the bootstrapped dataset D_k, where z_k is sampled from Poisson(1).
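A minimal sketch of this online bootstrap rule (hypothetical buffer structure; the key point is that each incoming transition is copied into the k-th replay buffer z_k ∼ Poisson(1) times):

```python
import numpy as np

class OnlineBootstrapBuffers:
    """Maintains B bootstrapped replay buffers with the Poisson(1) online rule."""

    def __init__(self, num_models: int, seed: int = 0):
        self.buffers = [[] for _ in range(num_models)]
        self.rng = np.random.default_rng(seed)

    def add(self, transition):
        # Online approximation of sampling-with-replacement: each buffer
        # receives z ~ Poisson(1) copies of the new transition.
        for buf in self.buffers:
            for _ in range(self.rng.poisson(1.0)):
                buf.append(transition)

buffers = OnlineBootstrapBuffers(num_models=5)
buffers.add((np.zeros(3), 0.1, np.ones(3), 1.0))   # (s, a, s', r)
```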
Online off-policy learning. Except for the initialization step (we may initialize the models with batch training from off-policy data), our model learning is an online learning process. For each time
step, the learning cost stays constant and does not grow over time, which is required for lifelong learning. Despite being online, the learning is off-policy because we maintain a bootstrapped replay buffer for each model in the ensemble. For each model update, we sample a minibatch of training data from the respective replay buffer. In addition, as mentioned, the models can also be initialized from existing data even before the policy optimization starts.
4.1.2 LINEARLY-WEIGHTED SAMPLING FROM BOOTSTRAPPED TRAINING DATA
Since our replay buffers are accumulated online, a naive uniform sampling strategy would lead to early data being sampled more frequently than later data. We therefore propose a linearly weighted random sampling scheme to mitigate this early-data bias. In this scheme, the i-th example is randomly sampled with weight i, i.e. fresher examples receive higher weights at each online update step. Despite its simplicity, this scheme plays an important role in removing the data bias, as shown in Appendix A.1.
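A sketch of the linearly weighted sampling (hypothetical helper; the i-th, i.e. more recent, example is drawn with probability proportional to i):

```python
import numpy as np

def linearly_weighted_minibatch(buffer, batch_size, rng):
    """Sample indices with probability proportional to recency (weight i for the i-th example)."""
    n = len(buffer)
    weights = np.arange(1, n + 1, dtype=np.float64)
    probs = weights / weights.sum()
    idx = rng.choice(n, size=batch_size, p=probs)
    return [buffer[i] for i in idx]

rng = np.random.default_rng(0)
batch = linearly_weighted_minibatch(list(range(1000)), batch_size=32, rng=rng)
```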
5 EXPERIMENT
Our experiments are designed to 1) compare our POUM framework with other SoTA approaches and 2) investigate the impact of the risk factor in our utility function on guiding agents.
5.1 COMPARISON TO BASELINE ALGORITHMS
Experimental Design. We evaluate the performance of our POUM algorithm on four continuous control tasks: one classic control task (Pendulum-v0) and three tasks in the MuJoCo simulator (Todorov et al., 2012) from OpenAI Gym (Brockman et al., 2016). It is important to note that we keep the default configurations provided by OpenAI Gym (see Appendix A.2.1) and do not assume access to the reward function, unlike some recent works in model-based reinforcement learning (Chua et al., 2018; Clavera et al., 2018; Kurutach et al., 2018).
For the baselines, we compare POUM to the following SoTA algorithms designed for continuous control: MBPO (Janner et al., 2019), DDPG (Lillicrap et al., 2015), SAC (Haarnoja et al., 2018), STEVE (Buckman et al., 2018). For each one of them, we evaluate the learned policy after every episode. The evaluation is done by running the current policy on 20 random episodes and then computing the average return over them.
Results. Figure 5.1 shows that POUM is more sample efficient than the baseline algorithms across a wide range of environments. Furthermore, its asymptotic performance is competitive with, or even better than, that of the model-free counterparts. Note that there are flat segments at the beginning of the evaluation curves for some algorithms and environments; this is because these algorithms perform random exploration at the beginning of training (as in their default configuration) to initialize the dynamics models. For the simple environments Pendulum-v0, Reacher-v2, and Pusher-v2, POUM achieves good performance without initialized dynamics1.
However, Figure 5.1 also shows that the performance of POUM in more complex environments like HalfCheetah-v2 is sensitive to random seeds. We hypothesize that this is due to the impact of the risk-preference value on the policy optimization framework and to our strategy of aggressive online learning and linearly weighted batch sampling. The ablation study below examines these hypotheses.
5.2 ABLATION STUDY
To obtain a better understanding of the role of the subjective risk preference in the utility function, we conduct an ablation study on the parameter c that controls this risk factor (Equation 2) and make the following observations.
As illustrated in Figure 5.2, our first observation is that a risk factor of exactly zero is not a good choice. This behavior is expected because with zero risk no uncertainty is quantified at all, so the agent
1MBPO failed to attain good performance on Reacher-v2 and Pusher-v2, despite our best effort to reproduce the results using the authors' official repository.
cannot learn to model the dynamics well, nor optimize an efficient policy. Our second observation is that POUM performs best with risk c = −1, followed by c = −2 and c = −3, while performance degrades as the risk factor moves further in either direction. This is because the risk factor controls the scale of the standard deviation, and hence the variance; too low or too high a risk consequently implies too much variance, which is unfavorable in many cases because it induces the agent to explore more aggressively and hence suffer more potential failures, while not exploiting current, safer experience. Finally, and interestingly, the two directions do not behave the same as the scale of the risk changes. In particular, POUM obtains poor results with positive risk-preference values and cannot learn at all with high positive values. This is because in the current work we use a fixed subjective risk preference; at the beginning of the learning process the dynamics models are unstable and high-variance, so with high positive risk-preference values the policy learns strange decisions. In contrast, with a negative risk preference our utility function acts as a lower confidence bound that keeps the policy in a safe region. The figure indicates that with more negative risk-preference values the learning curve is more stable. However, a lower risk-preference value also means less exploration and results in a lower final reward.
6 DISCUSSION AND CONCLUSION
In summary, this paper proposed a new approach to MBRL in which we developed a novel objective function that balances the mean and variance of the model-induced estimate of the value function. Our experiments suggest that our POUM algorithm not only achieves the asymptotic performance of model-free methods on challenging continuous control tasks but, compared to other SoTA approaches, does so with many fewer samples. We further demonstrate that the model bias issue in model-based RL can be dealt with effectively through principled and careful uncertainty quantification, by guiding agents with a subjective risk factor. Unlike other methods, we quantify and control the uncertainty with a single novel uncertainty-aware objective function, without any complex designs for the other components; being simple yet efficient is an advantage of our approach compared with others.
Nonetheless, we acknowledge that our current implementation of POUM still has several limitations, such as high variance in empirical performance, which depends on many hyper-parameters (planning horizon, risk sensitivity, and all hyper-parameters associated with neural network training) and even on random seeds. It is, however, worth noting that these traits are not unique to our method. In spite of this limitation, the results indicate that, if implemented properly, MBRL methods can be both sample efficient and achieve better asymptotic performance than their MFRL counterparts on challenging tasks. In addition, by explicitly representing both the dynamics model and the policy, POUM enables transfer learning, not just of the world (dynamics) model but also of the policy.
To sum up, we identify that sample efficiency, off-policy learning, and transferability are three necessary, albeit not sufficient, properties for real-world reinforcement learning. We claim that our method meets these criteria and hence is a step towards real-world reinforcement learning.
A APPENDIX
A.1 WHY LINEARLY WEIGHTED RANDOM SAMPLING IS A FAIRER SAMPLING SCHEME
Consider the following online learning process: at each time step, we need to randomly sample an example from the accumulating dataset. Suppose that at time t, the i-th example is randomly sampled with weight w(t, i). Note that at time t, we have a total of t examples in the dataset. Then the probability of that example being sampled is
$$\frac{w(t, i)}{\sum_{k=1}^{t} w(t, k)}.$$
If we use uniformly random sampling, then the expected number of times the i-th example is selected up to time t is
$$C_i^t = \sum_{k=i}^{t} \frac{1}{k}.$$
Hence, for all t and for i < j, $C_i^t$ is larger than $C_j^t$ by $\sum_{k=i}^{j-1} \frac{1}{k}$. Now, if we instead use a linearly weighted random sampling scheme, in which w(t, i) = i, then the expected number of times the i-th example is selected up to time t is
$$C_i^t = \sum_{k=i}^{t} \frac{2i}{k(k+1)} = 2 \sum_{k=i}^{t} \left( \frac{i}{k} - \frac{i}{k+1} \right) = 2 - \frac{2i}{t+1}.$$
We can see that at time t, $C_i^t$ is still larger than $C_j^t$ for i < j, but by weighting recent examples more at each online update step, we reduce the overall early-data bias.
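As a quick, purely illustrative numeric check of the two expressions above, one can sum the per-step selection probabilities directly and compare the weighted case against its closed form:

```python
# Expected selection counts up to time t for the i-th example,
# under uniform and linearly weighted sampling.
t = 1000
for i in (1, 100, 900):
    uniform = sum(1.0 / k for k in range(i, t + 1))
    weighted = sum(2.0 * i / (k * (k + 1)) for k in range(i, t + 1))
    closed_form = 2 - 2 * i / (t + 1)
    print(i, round(uniform, 3), round(weighted, 3), round(closed_form, 3))
```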
A.2 EXPERIMENTAL SETTINGS
A.2.1 ENVIRONMENTS
Table 1: Description of the environment used for testing
Environment State dimension Action dimension Task horizon
Reacher-v2      11   2    50
Pusher-v2       23   27   100
Pendulum-v0     3    1    200
HalfCheetah-v2  23   6    1000
A.2.2 NETWORK ARCHITECTURE
A.2.3 HYPER-PARAMETER SETTINGS | 1. What are the main concerns of the reviewer regarding the paper's content?
2. What is the novel contribution of the paper in terms of uncertainty-aware model-based policy optimization?
3. How does the reviewer assess the related work section, particularly on risk-sensitive reinforcement learning?
4. Are there any questions or suggestions regarding the algorithm summary and experiment section?
5. What are some minor typos and suggestions mentioned by the reviewer? | Review | Review
# Introduction
The motivation of this paper relies on the dynamics model not being accurate enough, which leads to compounding errors; hence, a proper characterization of the uncertainty is needed. However, the model-based methods that suffer the most from this problem are the ones built on top of policy gradients (since they need to predict the entire trajectory) (e.g., [6]). The methods that learn a Q-value function from the model do not suffer as much from this problem since they just predict over shorter horizons. Current model-based RL methods that learn a Q or value function take the uncertainty into account (i.e., STEVE, MBPO). Those methods are not “less competitive in terms of asymptotic performance.”
There has been work on learning a parametric policy from MPC. Therefore, you can extract a parametric policy from the optimization that MPC performs. The statement “Not being able to explicitly represent the policy makes it hard to transfer the learned policy to other tasks or to initialize agents with an existing better-than-random policy” is not true. The MPC will transfer better to other tasks that have the same dynamics, since it is not task specific. Given the learned dynamics model and the reward function you can act optimally in any new task as long as the learned dynamics are valid.
# Related work
My main concern with the related work section is that there is a lot of literature on risk sensitivity and optimism in the face of uncertainty (which is a subset of your method when c>0) in control, bandits, and some in *reinforcement learning* that has been neglected.
# Uncertainty-Aware Model-Based Policy Optimization
As said before, risk-sensitive in reinforcement learning has been done before and there’s even more work on control and bandits. For instance in [1] (page 5, paragraph (b)) has the same equation and they discuss the effect of the constant being negative or positive. More recent work has also used similar formulations [2].
This section mostly contains previous work, e.g., bootstrap rollout, policy gradient, using a deterministic policy ([3, 4, 5]). One thing that is still not clear from reading the paper: are you backpropagating through the dynamics model, or are you using a policy gradient method (REINFORCE, TRPO, PPO, …)?
# Algorithm Summary
One of the novelties introduced is that the data used for each model comes from sampling a Poisson variable. However, this is not ablated in the results section. Is it necessary? [6] claims that there is no need to use different data for the learned models.
# Experiment
The experiment section lacks more complex environments; in this case the most complex is HalfCheetah. Furthermore, given that 3 of the tasks are short-horizon tasks, you should probably also compare against model-based methods that build on top of policy gradients (e.g., [6]).
It seems that some choices in the algorithm are not ablated: 1) use of Poisson, 2) use of a deterministic vs. stochastic policy, 3) Is there a single risk that works across environments? Which environments are risk prone/averse? 4) How about having c ~ N(0, 1), effectively modelling V as a Gaussian?
-----------------------------------
Overall, the paper is not mature enough to be accepted: there is not enough novelty, and the results lack a sufficient delta in performance over prior work and have high variance.
------------------------------------
Minor/Typos:
First paragraph: “trying model the transition”
What does it mean that the accuracy is not satisfied?
Why the related work on deep model-based reinforcement learning is called Deep Neural Networks?
3.2 third paragraph: “Next we provide a convergence convergence …”
3.3.2, first paragraph: “no matter how uncertain it may know about the world”
Why the axis in the results section mean different things?
[1] Risk-sensitive Reinforcement Learning. Yun Shen, Michael J. Tobia, Tobias Sommer, Klaus Obermayer.
[2] Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control. Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
[3] Continuous control with deep reinforcement learning. Timothy P. Lillicrap et. al.
[4] Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning. Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I. Jordan, Joseph E. Gonzalez, Sergey Levine
[5] Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion. Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee.
[6] Model-Ensemble Trust-Region Policy Optimization. Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel. |
ICLR | Title
Meta-Learning General-Purpose Learning Algorithms with Transformers
Abstract
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
1 INTRODUCTION
Meta-learning is the process of automatically discovering new learning algorithms instead of designing them manually (Schmidhuber, 1987). An important quality of human-engineered learning algorithms is their applicability to a wide range of tasks or environments. For learning-to-learn to exceed those capabilities, the meta-learned learning algorithms must be similarly general-purpose. Recently, there has been significant progress toward this goal (Kirsch et al., 2019; Oh et al., 2020). The improved generality of the discovered learning algorithms has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, which encourages learning over memorization. Methods include restricting learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing or symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021).
While enabling generalization, these inductive biases come at the cost of increasing the effort needed to design these systems and potentially restrict the space of discoverable learning algorithms. Instead, we seek to explore general-purpose meta-learning systems with minimal inductive bias. Good candidates for this are black-box sequence models as meta-learners, such as LSTMs (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016) or Transformers (Vaswani et al., 2017). These memory-based or in-context learners take in training data and produce test-set predictions without any explicit definition of an inference model, training loss, or optimization algorithm. This has led to strong few-shot learning ability within the context of, for example, language modeling (Brown et al., 2020).
In this work, we investigate how such black-box meta-learners can be trained to (meta-)generalize and learn on significantly different datasets than used during meta-training. For this we propose a Transformer-based General-Purpose In-Context Learner (GPICL) which is described with an associated meta-training task distribution in Section 3. In Section 4.1 we characterize transitions— induced by scaling the number of tasks or the model size used for meta-training—between memorization, learning, and generalization. We further show in Section 4.2 that the capabilities of metatrained algorithms are bottlenecked by their accessible state size determining the next prediction
(such as the hidden state size in a recurrent network), unlike standard models which are thought to be bottlenecked by parameter count. Finally, in Section 4.3, we propose practical interventions that improve the meta-training of general purpose learning algorithms.
2 BACKGROUND
What is a (supervised) learning algorithm? In this paper, we focus on the setting of meta-learning supervised learning algorithms. Consider a mapping
$$\left( \{x_i, y_i\}_{i=1}^{N_D},\; x' \right) \mapsto y' \qquad (1)$$
from the training (support) set $D = \{x_i, y_i\}_{i=1}^{N_D}$ and a query input $x'$ to the query's prediction $y'$, where $x_i, x' \in \mathbb{R}^{N_x}$, $y_i, y' \in \mathbb{R}^{N_y}$, and $N_D, N_x, N_y \in \mathbb{N}^+$. The subset of these functions that qualify as learning algorithms are those that improve their predictions $y'$ given an increasingly larger training set $D$. Meta-learning then corresponds to finding these functions via meta-optimization. As in other black-box meta-learning models, we use a neural network to represent such functions.
What is a general-purpose learning algorithm? A learning algorithm can be considered generalpurpose if it learns on a wide range of possible tasks D and their respective related queries x′, y′. For example, gradient-descent on a suitable loss function can be considered a very general-purpose human-engineered learning algorithm (where the gradient is obtained via backpropagation or other means).
3 GENERAL-PURPOSE IN-CONTEXT LEARNING WITH TRANSFORMERS
Due to the small number of inductive biases in black-box models, we can only expect (meta)generalization when meta-training with an appropriately broad data distribution. Thus, changes in the data distribution affect whether and how a model meta-learns and meta-generalizes. We classify algorithms along two different dimensions: To what extent it learns (improving predictions given increasingly larger training sets), and to what extent it generalizes (performs well on instances, tasks, or datasets not seen before). Algorithms can then be categorized as follows:
Learning (Seen Tasks)   Generalization (Unseen Tasks)   Algorithm Description
No                      No                              Instance memorization
Yes                     No                              System identification / Task memorization
No                      Yes                             Zero-shot generalization
Yes                     Yes                             General-purpose learning algorithm
(Inset figure: performance as a function of examples seen.)
We demonstrate that sharp phase transitions occur between these learning modalities, and empirically investigate these transitions.
3.1 GENERATING TASKS FOR LEARNING-TO-LEARN
Neural networks are known to require datasets of significant size to effectively generalize. While in standard supervised learning large quantities of data are common, meta-learning algorithms may require a similar number of distinct tasks in order to learn and generalize. Unfortunately, the number of commonly available tasks is orders of magnitude smaller than the number of datapoints in each task.
Previous work has side-stepped this issue by building-in architectural or algorithmic structure into the learning algorithm, in effect drastically reducing the number of tasks required. For example, in Kirsch & Schmidhuber (2020); Kirsch et al. (2021), the authors included symmetries into the
black-box model in the form of input and output permutation invariances. An alternative to this is the generation of new tasks (Schmidhuber, 2013; Clune, 2019; Such et al., 2020; Parker-Holder et al., 2022). Unfortunately, it is not easy to generate a wide range of tasks that are both diverse and contain structure as it can be found in the real world.
In this work, we take an intermediate step by augmenting existing datasets, in effect increasing the breadth of the task distribution based on existing task regularities. We generate a large number of tasks by taking existing supervised learning datasets, randomly projecting their inputs
and permuting their classification labels. While the random projection removes spatial structure from the inputs, this structure is not believed to be central to the task (for instance, the performance of SGD-trained fully connected networks is invariant to projection by a random orthogonal matrix (Wadia et al., 2021)). Task augmentation allows us to investigate fundamental questions about learning-to-learn in the regime of many tasks without relying on huge amounts of existing tasks or elaborate schemes to generate those.
A task or dataset $D$ is then defined by its corresponding base dataset $\bar{D} = \{\bar{x}_i, \bar{y}_i\}$, a (linear) projection $A \in \mathbb{R}^{N_x \times N_x}$ with $A_{ij} \sim \mathcal{N}\!\left(0, \tfrac{1}{N_x}\right)$, and an output permutation $\rho$, giving $D = \{A\bar{x}_i, \rho(\bar{y}_i)\}$. Unless noted otherwise, the distribution over output permutations $p(\rho)$ is uniform.
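A minimal sketch of this task-generation step (illustrative only; `base_x` holds flattened inputs and `base_y` integer class labels, both hypothetical names):

```python
import numpy as np

def make_task(base_x, base_y, num_classes, rng):
    """Random linear input projection + random label permutation of a base dataset."""
    n_x = base_x.shape[-1]
    A = rng.normal(0.0, np.sqrt(1.0 / n_x), size=(n_x, n_x))   # A_ij ~ N(0, 1/N_x)
    perm = rng.permutation(num_classes)                         # output permutation rho
    return base_x @ A.T, perm[base_y]

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 784))             # stand-in for flattened MNIST images
y = rng.integers(0, 10, size=128)
task_x, task_y = make_task(x, y, num_classes=10, rng=rng)
```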
3.2 META-LEARNING
Given those generated tasks, we then meta-train jointly on a mini-batch sampled from the whole distribution. We minimize J(θ), the sum of losses on the query prediction after observing any prefix of a dataset D sampled from the augmented task distribution p(D)
$$J(\theta) = \mathbb{E}_{D \sim p(D)} \left[ \sum_{j=1}^{N_D} l\!\left( f_\theta(D_{1:j-1}, x_j), y_j \right) \right], \qquad (2)$$
where in the classification setting, $l$ is the cross entropy loss between the label $y_j$ and prediction $y' = f_\theta(D_{1:j-1}, x_j)$, and $f_\theta$ is a neural network mapping to predictions $y'$ as in Equation 1. During meta-training, we take gradient steps in $J(\theta)$ by backpropagation and Adam (Kingma & Ba, 2014). To investigate the effect of the data distribution, we train on various numbers of tasks (Algorithm 1). Finally, we need to choose a black-box model for the function $f_\theta$. We use a vanilla Transformer (Vaswani et al., 2017) with learned positional embeddings, visualized in Figure 1. We call it the General-Purpose In-Context Learner (GPICL). Each token corresponds to a concatenated and transformed input $x_i$ and one-hot encoded label $y_{i-1}$, predicting the corresponding logits $y' = y_i$ for the current input $x' = x_i$. When querying for the first $x_1$, no label for the previous input is available, so we feed a zero vector.
Algorithm 1 Meta-Training for General-Purpose In-Context Learning (GPICL)
Require: Base dataset $\bar{D} = \{\bar{x}_i, \bar{y}_i\}$, number of tasks $K \in \mathbb{N}^+$
  $\{A^{(k)}\}_{k=1}^{K}$ with $A^{(k)}_{ij} \sim \mathcal{N}(0, \tfrac{1}{N_x})$    ▷ Sample input projections
  $\{\rho^{(k)}\}_{k=1}^{K} \sim p(\rho)$    ▷ Sample output permutations
  $D^{(k)} = \{A^{(k)}\bar{x}_i, \rho^{(k)}(\bar{y}_i)\}$
  $p(D) := \mathrm{Uniform}[\{D^{(k)}\}_{k=1}^{K}]$
  while not converged do
    $\theta \leftarrow \theta - \alpha \nabla_\theta J(\theta)$    ▷ Meta-train across tasks $p(D)$ (Equation 2)
  end while
Meta-testing At meta-test time, no gradient-based learning is used. Instead, we simply obtain a prediction y′ by evaluating the neural network fθ on the training dataset D and query point x′.
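Concretely, the objective in Equation 2 is a next-label cross-entropy accumulated over all prefixes of the task sequence, and meta-testing is a single forward pass with no gradient updates. A schematic sketch (not the paper's code; `model_apply` is any sequence model returning per-position class logits):

```python
import jax
import jax.numpy as jnp

def sequence_loss(params, model_apply, xs, ys, num_classes):
    """Cross-entropy of predicting y_j from (x_1, y_1, ..., x_{j-1}, y_{j-1}, x_j)."""
    # Token j = [x_j ; one-hot(y_{j-1})]; the first token gets a zero label vector.
    prev_labels = jnp.concatenate(
        [jnp.zeros((1, num_classes)), jax.nn.one_hot(ys[:-1], num_classes)], axis=0)
    tokens = jnp.concatenate([xs, prev_labels], axis=-1)
    logits = model_apply(params, tokens)               # (sequence_len, num_classes)
    log_probs = jax.nn.log_softmax(logits)
    # Mean next-label cross-entropy over the sequence (Equation 2 up to a constant factor).
    return -jnp.mean(jnp.take_along_axis(log_probs, ys[:, None], axis=-1))

# Meta-testing: no gradient steps; run the trained model on the support set followed
# by the query input and read off the logits at the last position.
```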
4 EXPERIMENTS ON THE EMERGENCE OF GENERAL LEARNING-TO-LEARN
Multi-task training with standard classifiers Given a task distribution of many different classification tasks, we first ask under what conditions we expect “learning-to-learn” to emerge. We train a single model across many tasks where each task corresponds to a random transformation of the MNIST dataset, but where the MLP only receives a single datapoint instead of a whole sequence as input. This corresponds to ND = 1 in Equation 2. We would expect such a non-sequential classifier to be able to correctly predict for more tasks as its number of parameters increases. When plotting the network capacity against the number of tasks, we indeed observe a linear boundary where an increasing number of tasks can be fit the larger the network (Figure 2a). This is consistent with results in Collins et al. (2016), which found that a constant number of bits about the data distribution can be stored per model parameter, across a variety of model architectures and scales.
Learning-to-learn with large sequential models and data In contrast to the MLP classifier, a sequence model that observes multiple observations and their labels from the same task, could exceed that linear performance improvement by learning at inference time. Indeed, we observe that when switching to a Transformer that can observe a sequence of datapoints before making a prediction about the query, more tasks can be simultaneously fit (Figure 2b). At a certain model size and number of tasks, the model undergoes a phase transition, allowing to generalize to a seemingly unbounded number of tasks. We hypothesize that this is due to switching the prediction strategy from memorization to learning-to-learn. Further, when (meta-)testing the same trained models from the previous experiment on an unseen task (new random transformation of MNIST), they generalize only in the regime of large numbers of tasks and model size (Figure 2c). As an in-context learner, meta-testing does not involve any gradient updates but only running the model in forward mode.
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least $2^{13} = 8192$ tasks.
In the following, we study learning-to-learn from the perspective of the data distribution, the architecture, and the optimization dynamics. For the data distribution, we look at how the data diversity affects the emergence and phase transitions of learning-to-learn, generalization, and memorization. For architecture, we analyze the role of the model size and state size in various architectures. Finally, we observe challenges in meta-optimization and demonstrate how memorization followed by generalization is an important mechanism that can be facilitated explicitly biasing the data distribution.
4.1 THE LARGE DATA REGIME: GENERALIZATION AND PHASE TRANSITIONS
Simple invariances in data lead to the emergence of learning-to-learn To verify whether the observed generalizing solutions actually implement learning algorithms (as opposed to e.g. zero-shot
generalization), we analyze the meta-test time behavior. We plot the accuracy for a given query point given varying numbers of seen examples in Figure 3. As it is typical for learning algorithms, the performance improves given an increasingly large set of seen examples (inputs and labels).
Generalization Naturally, the question arises to what extent these learning algorithms are general. While we have seen generalization to unseen tasks consisting of novel projections of the same dataset, do the learned algorithms also generalize to unseen datasets? In Figure 3 we observe out-of-distribution performance on Fashion MNIST after having trained on MNIST (b, blue). In this direction, there is no generalization gap to directly training on Fashion MNIST (b, orange). Similarly, when meta-training on Fashion MNIST and meta-testing on MNIST (a, orange) we observe that the learning algorithm generalizes, albeit with a larger generalization gap.
Comparison to other methods Other datasets and baselines are shown in Table 1. In particular, rather than focusing on SOTA, we aim to validate whether methods with less inductive bias (such as our GPICL), can compete with methods that include more biases suitable to learning-to-learn. This includes stochastic gradient descent (SGD) that updates the parameters online after observing each datapoint. MAML (Finn et al., 2017) proceeds like SGD, but uses a meta-learned neural network initialization. Both methods that rely on backpropagation and gradient descent, learn more slowly compared to our Transformer. In the case of MAML, this may be due to the main mechanism being feature reuse (Raghu et al., 2020) which is less useful when training across our wider task distribution. Among methods that do not hard-code gradient descent at meta-test time, we test VSML (Kirsch & Schmidhuber, 2020) that discovered learning algorithms significantly generalizing between tasks. Our GPICL comes surprisingly close to VSML without requiring the associated inductive bias. Finally, we compare to a standard LSTM that is trained with the same inputs as our Transformer. We observe that it performs worse, which we investigate further.
Insight 2: Simple data augmentations are effective for learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Transitioning from memorization to learning to generalizing When do the found solutions correspond to memorizing, learning, and generalizing solutions? In Figure 4, we plot the accuracy difference between the last and first prediction for a seen task, an unseen task, and an unseen task with a different base dataset. We observe three phases: In the first phase, we memorize each instance, resulting in no within-sequence performance improvement. In the second phase,
we memorize tasks and learn to identify tasks, resulting in a within-sequence improvement confined to seen task instances. In the final and third phase, we observe a more general learning-to-learn, a performance improvement for unseen tasks, even different base datasets (here FashionMNIST). The last transition is very discrete with separate meta-training runs either finding a solution of the task memorization or general learning-to-learn type (see Appendix A.1).
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn.
4.2 ARCHITECTURE: A LARGE STATE IS CRUCIAL FOR LEARNING
In the previous experiments we observed that given sufficient task diversity and model size, Transformers can learn general-purpose learning algorithms. This raises the question how essential the Transformer architecture is and whether other black-box models could be used. We hypothesize that for learning-to-learn the size of the memory at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. Through self-attention, Transformers have a particularly large state. We test this by training several architectures with various state sizes in our meta-learning setting. In Figure 5a, we observe that when we vary the respective hyper-parameters which most influence the state size, we observe that for a specific state size we obtain similar performance of the discovered learning algorithm across architectures. In contrast, these architectures have markedly different numbers of parameters (Figure 5b).
Insight 4: Large state is more crucial than parameter count This suggests that the model size in terms of parameter count plays a smaller role in the setting of learning-to-learn and Transformers have benefited in particular from an increase in state size by self-attention. Beyond learning-to-learn, this likely applies to other tasks that rely on storing large amounts of sequence-specific information.
4.3 CHALLENGES IN META-OPTIMIZATION
Meta-optimization is known to be challenging. Meta gradients (Finn et al., 2017; Xu et al., 2018; Bechtle et al., 2021) and works with parameter sharing or weight updates in their architecture (Kirsch & Schmidhuber, 2020; Pedersen & Risi, 2021; Risi, 2021) observed various difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training (see Appendix Figure 18). We show that some of these problems also occur with black-box models and propose effective interventions.
Loss plateaus when meta-learning with black-box models By training across a large number of randomly transformed tasks, memorizing any task-specific information is difficult. Instead, the model is forced to find solutions that are directly learning. We observe that this results in (meta)loss plateaus during meta-training where the loss only decreases slightly for long periods of time (Figure 6a). Only after a large number of steps (here around 35 thousand) does a drop in loss occur. In the loss plateau, the generalization loss increases on unseen tasks from both the same and a different base dataset (Figure 6b). This suggests that being able to first memorize slightly enables the following learning-to-learn phase. Furthermore, we observe that all gradients have a very small norm with exception of the last layer (Appendix Figure 14).
Intervention 1: Increasing the batch size High variance gradients appear to be one reason training trajectories become trapped on the loss plateau (see Appendix Figures 12, 13). This suggests increasing the meta-batch size as a straightforward solution. When plotting various batch sizes against numbers of tasks we obtain three kinds of solutions at the end of meta-training (Figure 7a): (1) Solutions that generalize and learn, (2) Solutions that memorize, and (3) Solutions that are still in the loss plateau (due to maximum of 50 thousand optimization steps). The larger the batch size, the more tasks we can train on without getting stuck in a loss plateau. When plotting the length of the loss plateau against the task batch size (Figure 7b) we observe a power-law relationship with increasing batch sizes decreasing the plateau length. At the same time, the batch size also increases the number of total tasks seen in the plateau (Appendix Figure 15). Thus, this intervention relies on parallelizability. An increase in the number of tasks also increases the plateau length (Figure 7c). This may be due to a larger number of tasks making the initial memorization phase more difficult.
Intervention 2: Changes in the meta-optimizer Given that many gradients in the loss plateau have a very small norm, Adam would rescale those element-wise, potentially alleviating the issue. In practice, we observe that the gradients are so small that the ε in Adam's gradient-rescaling denominator (added for numerical stability) limits the up-scaling of small gradients. Using a smaller ε more than halves the plateau length. Alternatively, discarding the magnitude of the gradient entirely by applying the sign operator to an exponential moving average of the gradient (replacing Adam's approximate magnitude normalization with direct magnitude normalization) has a similar effect while also increasing the numerical stability over Adam with a small ε (Appendix Figure 16).
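The sign-of-momentum variant described above admits a very small sketch (illustrative only; names are hypothetical and this is not the exact optimizer used in the paper):

```python
import jax
import jax.numpy as jnp

def sign_ema_update(params, grads, ema, lr=1e-4, beta=0.9):
    """theta <- theta - lr * sign(EMA(grad)); robust to very small gradient norms."""
    new_ema = jax.tree_util.tree_map(lambda m, g: beta * m + (1 - beta) * g, ema, grads)
    new_params = jax.tree_util.tree_map(lambda p, m: p - lr * jnp.sign(m), params, new_ema)
    return new_params, new_ema
```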
Intervention 3: Biasing the data distribution / Curricula GPICL mainly relies on the data distribution for learning-to-learn. This enables a different kind of intervention: Biasing the data distribution. The approach is inspired by the observation that before leaving the loss plateau the model memorizes biases in the data. Instead of sampling label permutations uniformly at random, we bias towards a specific permutation by using a fixed permutation for a fraction of each batch. This completely eliminates the loss plateau, enabling a smooth path from memorizing to learning (Figure 8). Surprisingly, even when heavily biasing the distribution, memorization is followed by generalization. This biased data distribution can be viewed as a curriculum, solving an easier problem first that enables the subsequent harder learning-to-learn. Further investigation is re-
quired to understand how this transition occurs. This may be connected to grokking (Power et al., 2022) which we investigate in Appendix A.7. We hypothesize that many natural data distributions— including language—contain such sub-tasks that are easy to memorize followed by generalization.
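A small sketch of the biased permutation sampling used as a curriculum (hypothetical helper; here the fixed permutation is the identity, and a fraction `bias_fraction` of each meta-batch uses it while the rest are drawn uniformly):

```python
import numpy as np

def sample_permutations(batch_size, num_classes, bias_fraction, rng):
    """Sample one output permutation per batch element, with a biased fraction fixed."""
    fixed = np.arange(num_classes)                 # the single "biased" permutation
    perms = []
    for b in range(batch_size):
        if b < int(bias_fraction * batch_size):
            perms.append(fixed)
        else:
            perms.append(rng.permutation(num_classes))
    return np.stack(perms)

rng = np.random.default_rng(0)
perms = sample_permutations(batch_size=64, num_classes=10, bias_fraction=0.25, rng=rng)
```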
4.4 COMBINING DOMAIN-SPECIFIC AND GENERAL-PURPOSE LEARNING
We demonstrated the feasibility of meta-learning in-context learning algorithms that are general-purpose. An even more useful learning algorithm would be capable of both generalizing, as well as leveraging domain-specific information for learning when it is available. This would allow for considerably more efficient in-context learning, scaling to more difficult datasets without very long input sequences. Toward this goal, we investigate a simple scheme that leverages pre-trained neural networks as features to learn upon. This could be from an unsupervised learner or a frozen large language model (Radford et al., 2021; Tsimpoukelli et al., 2021). Here, we first project the inputs x̄i of a base-dataset D̄ into some latent space using a pre-trained network, and then proceed with meta-training and meta-testing as before, randomly projecting these alternative features. For the pre-trained network, we use a ResNet trained on ImageNet and remove its final layer. In Figure 9 we have meta-trained GPICL on MNIST either with the randomly transformed raw inputs or randomly transformed embedded features. At meta-test time the learning algorithm generalizes to a
wide range of datasets, measured by the meta-test accuracy of the 100th example. At the same time, the pre-trained ImageNet helps to accelerate learning on datasets that have a matching domain,
such as CIFAR10. We observe that with only 100 examples, the learning algorithm meta-trained on MNIST can achieve about 45% accuracy on CIFAR10.
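A sketch of the combined pipeline (hypothetical names; `pretrained_embed` stands in for a frozen ImageNet-trained network with its final layer removed), which simply swaps raw pixels for embedded features before the usual task augmentation:

```python
import numpy as np

def make_embedded_task(images, labels, pretrained_embed, num_classes, rng):
    """Embed inputs with a frozen network, then apply the usual random projection/permutation."""
    feats = pretrained_embed(images)                       # e.g. (N, feature_dim) features
    n_f = feats.shape[-1]
    A = rng.normal(0.0, np.sqrt(1.0 / n_f), size=(n_f, n_f))
    perm = rng.permutation(num_classes)
    return feats @ A.T, perm[labels]
```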
5 RELATED WORK
Inductive biases in meta-learning Meta-learning approaches exist with a wide range of inductive biases, usually inspired by existing human-engineered learning algorithms. Some methods prewire the entire learning algorithm (Finn et al., 2017), pre-wire backpropagation and the structure of a gradient-based optimizer (Andrychowicz et al., 2016; Metz et al., 2019; 2020a), or hard-code gradient-based optimization but learn the loss function (Houthooft et al., 2018; Kirsch et al., 2019; Bechtle et al., 2021). Many methods search over hyper-parameters that alter existing learning algorithms (Xu et al., 2018; Metz et al., 2020b; Chen et al., 2022). Fast weight programmers or hypernetworks update the weights of the same or another neural network (Schmidhuber, 1992; 1993a; Ha et al., 2017; Irie et al., 2021; Sandler et al., 2021; Kirsch & Schmidhuber, 2022; Zhmoginov et al., 2022). Our work aims to keep such inductive biases to a minimum.
General-purpose meta-learning There has been growing interest in meta-learning more generalpurpose learning algorithms. The improved generality of the discovered learning algorithm has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, encouraging learning over memorization. Methods include enforcing learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing and symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021). Parameter sharing and symmetries have also been discussed in the context of self-organization (Tang & Ha, 2021; Risi, 2021; Pedersen & Risi, 2022).
Black-box meta-learning: MetaRNNs, RL2, in-context learning In contrast to these inductive biases, neural networks can also learn-to-learn purely in their activations with few architectural and algorithmic biases (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016; Ortega et al., 2019). This requires a feedback signal in the inputs that allows for learning, such as the reward in reinforcement learning or the label in supervised learning (Schmidhuber, 1993b). While a frequently used architecture is the LSTM (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), this mechanism has also seen substantial recent attention in Transformer models (Brown et al., 2020; Chan et al., 2022) under the name of in-context learning. We refer to these networks simply as black-box meta-learners. Our method GPICL is in the class of these black-box meta-learners. In contrast to previous methods, GPICL implements general-purpose learning algorithms. Independently, Garg et al. (2022) recently studied generalization on synthetic functions compared to our augmented datasets. PFNs (Mikulik et al., 2020) demonstrated learning to learn on small tabular datasets when meta-training on synthetically generated problems. Experiments on more complex classification settings such as Omniglot relied on fine-tuning. In comparison, our method investigated generalization of learning algorithms directly to datasets such as MNIST, Fashion MNIST, and CIFAR10.
6 DISCUSSION AND CONCLUSION
By generating tasks from existing datasets, we demonstrated that black-box models such as Transformers can be used to meta-learn general-purpose in-context learning algorithms (GPICL). We observed that learning-to-learn arises in the regime of large models and large numbers of tasks with several phase transitions from instance memorization, to system identification, to general learning. The size of the memory or model state significantly determines how well any architecture can learn how to learn across various neural network architectures. We identified difficulties in metaoptimization and proposed interventions in terms of optimizers, hyper-parameters, and a biased data distribution acting as a curriculum. We believe our findings open up new possibilities of data-driven general-purpose meta-learning with minimal inductive bias.
A current limitation is that the discovered learning algorithms do not apply to arbitrary input and output sizes. Appropriate tokenization to unified representations may solve this (Chowdhery et al., 2022). Furthermore, learning algorithms often process millions of inputs before outputting the final model. In the current black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas Transformers' computational complexity grows quadratically in the sequence length.
A APPENDIX
A.1 SOLUTIONS ARE MEMORIZING OR GENERALIZING
When do the found solutions correspond to memorizing vs generalizing solutions? In Figure 2 we observe a fairly discrete transition between memorizing and generalizing solutions as a function of the number of tasks. To analyze this transition, we perform multiple training runs with varying seeds and numbers of tasks in Figure 10, reporting the final training loss. We find that the distribution is bi-modal. Solutions at the end of training are memorizing or generalizing. Memorization cluster: The larger the number of tasks, the more difficult it is to memorize all of them with a fixed model capacity. Generalization cluster: At a certain number of tasks (here 6 thousand), a transition point is reached where optimization sometimes discovers a lower training loss that corresponds to a generalizing solution. For larger numbers of tasks the solutions always settle in the generalizing cluster.
A.2 WHAT CORRESPONDS TO STATE (MEMORY) IN VARIOUS ARCHITECTURES?
We hypothesize that for learning-to-learn the size of the memory $N_S$ at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. We test this by training several architectures with various $N_S$ in our meta-learning setting. Memory in the context of recurrent neural networks corresponds to the hidden state or context vector of size $N_H$, thus $N_S \in O(N_H)$. More generally, we can describe the state as the information bottleneck that the sequence has to pass through before making predictions. In the context of learning-to-learn, this state has to hold information about everything that has been learned so far. Standard learning algorithms such as neural networks trained via SGD would have a state that corresponds to the neural network parameters, iteratively updated via SGD. In Transformers, self-attention allows for a particularly large state of $N_S \in O(N_K N_L N_T)$ where $N_K$ is the size of key, value, and query, $N_L$ is the number of layers, and $N_T$ is the length of the sequence.
A.3 SUMMARY OF INSIGHTS
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least $2^{13} = 8192$ tasks.
Insight 2: Simple data augmentations are effective for general learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn. The last transition is discrete, with two unique clusters.
Insight 4: Large state is more crucial than parameter count We conclude that the specific inductive biases of each architecture matter to a smaller degree. The driving factor behind their ability to learn how to learn is the size of their state. Furthermore, this suggests that the model size in terms of numbers of parameters plays a smaller role in the setting of learning-to-learn, and Transformers have benefited in particular from an increase in state size by self-attention. In non-meta-learning sequence tasks parameter count is thought to be the performance bottleneck (Collins et al., 2016). Beyond learning-to-learn, this likely applies to other tasks that rely on processing and storing large amounts of sequence-specific information.
A.4 LIMITATIONS
Varying input and output sizes Compared to some previous works in meta-learning (Andrychowicz et al., 2016; Finn et al., 2017; Kirsch & Schmidhuber, 2020), the discovered learning algorithms are not applicable to an arbitrary input and output size which makes it more difficult to apply the learning algorithm to a new, unseen problem. This problem also applies to Transformers applied to multiple tasks and modalities. Related work has solved this problem by tokenizing inputs to compatible, unified representations (Chowdhery et al., 2022). We expect these techniques or others to be useful in the learning-to-learn context too.
Processing large datasets Learning algorithms often process millions of inputs before outputting the final model. In the black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas the Transformer's computational complexity grows quadratically in the sequence length. Additional work is required to build models capable of processing and being trained on long sequences. Alternatively, parallel processing, similar to batching in learning algorithms, may be a useful building block.
A.5 ARCHITECTURAL DETAILS AND HYPER-PARAMETERS
Transformer details By default, all Transformers have a key, value, and query size of 32, 8 heads, 4 layers, and a model size of NM = 256. The model size defines the dimensionality of each token, and the MLP between layers scales this up to a hidden representation of 4 × NM.
Outer-product LSTM We slightly modify an LSTM by replacing the context state with an outer-product update and inner-product read-out.
    # Reconstructed core of the outer-product LSTM cell. This is a fragment of a Haiku
    # module: `inputs`, `prev_state`, `size`, `self.num_heads`, `batch_size`, and the
    # helpers `checkpoint_name` and `split_axis` are assumed to come from the surrounding code.
    import numpy as np
    import jax
    import jax.numpy as jnp
    import haiku as hk

    x_and_h = jnp.concatenate([inputs, prev_state.hidden], axis=-1)

    gated = hk.Linear(8 * size * self.num_heads)(x_and_h)
    gated = gated.reshape((batch_size, self.num_heads, 8 * size))
    gated = checkpoint_name(gated, 'gated')

    # i = input, g = cell gate, f = forget gate, q = query, o = output gate
    sizes = (3 * size, 3 * size, size, size)
    indices = np.cumsum(sizes[:-1])
    k1, k2, q, o = jnp.split(gated, indices, axis=-1)
    scale = jax.nn.softplus(
        hk.get_parameter('key_scale', shape=(), dtype=k1.dtype, init=jnp.zeros))

    # Outer-product update of the matrix-valued cell state.
    i, g, f = jnp.einsum(
        'bhki,bhkj->kbhij',
        jax.nn.tanh(split_axis(k1, (3, size))) * scale,
        jax.nn.tanh(split_axis(k2, (3, size))))
    f = jax.nn.sigmoid(f + 1)  # forget bias
    c = f * prev_state.cell + jax.nn.sigmoid(i) * g
    # Inner-product read-out from the cell state.
    read = jnp.einsum('bhij,bhi->bhj', c, q)
    h = hk.Flatten()(jax.nn.sigmoid(o) * jnp.tanh(read))
VSML We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. Each LSTM has a hidden size of 16. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 8. All images are scaled to a size of 32 × 32 × 3.
VSML without symmetries Before activations are fed to a standard instantiation of VSML, all inputs are projected using a learnable linear projection. Logits are generated using another linear projection, followed by a softmax. We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. The LSTMs are on a grid of k × k LSTMs, where k ∈ {1, 2, 4, 8, 16, 24}. Each LSTM has a hidden size of 64. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 14 × 14.
LSTM For the results in Table 1, we used a hidden size of 256 and 10^5 optimization steps. Larger hidden sizes were harder to optimize. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3.
A.6 EXPERIMENTAL DETAILS
Most experiments can be run on a single GPU, some require 16 GPUs due to sequence length and large batch sizes, with sufficient GPU memory (around 16 GB each). Some experiments, such as Figure 2, require up to 1000 runs of that kind to produce the final heat-map.
Input normalization Each dataset is z-normalized by its mean and standard deviation across all examples and pixels.
Number of seeds and shading If not noted otherwise, line plots use 8 seeds for meta-training and at least 512 seeds for meta-testing. Shading indicates 95% confidence intervals.
Figure 2 The MLP has two hidden layers of varying size with relu activations. The Transformer has the default parameters as defined above.
Figure 3 We use a Transformer model with a model size of 256. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3. Inputs are z-normalized across the dataset and all input dimensions.
Table 1 The SGD baseline was obtained by sweeping over learning rates from 10^-4 to 0.5, optimizers SGD, Adam, and Adam with weight decay, one or two layers, and hidden sizes of 32, 64, or 128 on MNIST. The best configuration (most sample efficient) corresponds to a learning rate of 10^-3, Adam, and no hidden layers. SGD performs updates online on each one of the 100 data points. MAML is equivalent to SGD up to the difference that we meta-train the weight initialization according to Equation 2, where θ are the initial parameters of the classifier that is then updated using SGD at meta-test time. All black-box approaches do not use gradient descent at meta-test time. All meta-learning approaches were meta-trained and tuned via grid search on MNIST.
Figure 10 We trained a Transformer with model size 64 and 32 seeds for each number-of-tasks configuration.
Figure 4 Input normalization is disabled.
Figure 5 The Transformer uses a task batch size of 512.
Figure 6 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 7 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 8 Trained on 2^16 tasks generated from FashionMNIST with label permutations varied.
A.7 ADDITIONAL EXPERIMENTS
Sequence length In all experiments of the main paper we have meta-trained on a sequence length (number of examples) of 100. This is a small training dataset compared to many human-engineered learning algorithms. In general, as long as the learning algorithm does not overfit the training data, more examples should increase the predictive performance. In Figure 11 we investigate how our model scales to longer sequence lengths. We observe that the final accuracy of the last query in the sequence consistently increases with longer sequences. The generalization to longer sequences than those seen during meta-training is another important direction for future work.
Gradient and update statistics To better understand the properties of the loss plateau, we visualize different statistics of the gradients, optimizer, and updates. In Figure 12, we track the exponential moving average statistics of Adam before the loss plateau and after (dashed vertical line). In Figure 13 we investigate how gradients differ between settings with a plateau and settings with a biased distribution where the plateau is avoided. We plot the cosine similarity between consecutive optimization steps, the gradient L2-norm, and the similarity and norm of the weight updates after normalization with Adam. The statistics are plotted cumulatively or smoothed with a Gaussian filter for better readability. The gradient and update cosine similarity differ only marginally between cases with a plateau and cases without. We observe that the gradient L2-norm in the plateau is half as big
as in the biased distribution case, although the updates that Adam applies are going towards zero. This also results in not moving far from parameter initialization when in the plateau. We hypothesize this has to do with varying gradient norms when looking at individual parameter tensors (Figure 14). We observe that the gradients have a small norm for most tensors, except for the last layer.
Batch size and number of tasks influence on plateau length Instead of looking at the plateau length in terms of the number of steps (Figure 7), we may also be concerned with the total number of tasks seen within the plateau. This is relevant in particular when the task batch is not processed fully in parallel but gradients are accumulated. Figure 15 shows the same figure but with the number of tasks in the plateau on the y-axis instead. It can be observed that larger batch-sizes actually increase the data requirement to leave the plateau, despite decreasing the plateau in terms of the number of optimization steps. Similarly, a larger task training distribution requires a larger number of tasks to be seen within the plateau.
Adjusting Adam's ε or changing the optimizer As discussed in the main paper and visualized in Figure 16b, decreasing ε significantly shortens the plateau. This is due to the rescaling of very small gradient magnitudes being limited by ε. At the same time it incurs some instability. Directly normalizing the gradient by applying the sign function element-wise (Figure 16a) to the exponential gradient average shortens the plateau even further.
When memorization happens, can we elicit grokking? In Figure 7a we have seen that an insufficiently large task distribution can lead to memorization instead of general learning-to-learn. At the same time, Figure 8 showed that biasing the data distribution is helpful to avoid loss plateaus. Power et al. (2022) observed a phenomenon which they called “grokking” in which even after having converged in terms of training loss, test loss may suddenly decrease. Large amounts of regularization, like weight decay with a coefficient of 1.0 were found to facilitate this behavior. Is grokking
connected to the optimization behavior we observe, and if so, do similar interventions help in our setting? We look in particular at the boundary of memorization and generalization (2^14 = 16384) where doubling the number of tasks a few more times would lead to generalization. Figure 17 shows three task settings, 2^10, 2^14, 2^16, and three different weight decay coefficients, 0.01, 0.1, 1.0. The setting of 2^16 tasks shows generalization by default and only serves as a baseline for the weight decay coefficient analysis. In the cases of memorization due to too few tasks, we have not been able to produce grokking behavior.
Optimization difficulties in VSML Previous work has observed several optimization difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training. Figure 18 shows some of these difficulties in the context of VSML (Kirsch & Schmidhuber, 2020). Because VSML has permutation invariance built into the architecture as an inductive bias, changing the number of tasks has only a small effect. We observe that in particular deeper architectures make meta-optimization more difficult. | 1. What is the main contribution of the paper regarding the training process and qualities of a black-box meta-learning algorithm?
2. What are the strengths of the paper, particularly in its empirical study and experimental design?
3. What are the weaknesses of the paper regarding its generalizability and limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work performs a detailed investigation into the training process and qualities of a black-box meta-learning algorithm in a 'general-purpose learning' framework. The focus is on a setup where the data is a collection of 'tasks', that consist of a sequence of (input, label) pairs for which the learner needs to predict the current label, given the prefix of labeled pairs. Hence, a transformer was chosen as a model that can process the task sequentially. The tasks are generated as randomly projected and label-permuted MNIST images, and the dynamics of the standard meta-training scheme are studied empirically, mainly by monitoring the accuracy signal over training (seen), testing (unseen) and other-domain (Fashion-MNIST) data. The training is claimed to go through stages of 'memorization' (first of instances, then tasks) and finally 'generalization' or 'learning-to-learn' to unseen or cross-domain tasks. The main emphasis is on the transition between memorization and generalization - when does it occur (if at all) and how it is influenced by the properties of training and structure of the network. Mainly, the data size (number of tasks) and model state-size (rather than parameter count). The phase transition is also studied in terms of the loss function, which reaches a plateau before it occurs (or doesn't), and several improvements in training are suggested in order to promote the transition at an earlier stage.
Strengths And Weaknesses
Strengths:
The paper presents a very interesting purely-empirical study of the learning dynamics in 'general purpose black box' meta-learning.
The experiments are well designed to demonstrate in a simple and clear way how learning progresses through the different stages, towards being able to learn-to-learn. Details are all super clear, very nicely presented and well explained, with nice insights about memorization and generalization.
The two findings that are most interesting, in my opinion: (i) How the existence of the phase shift (ability to generalize) is determined, very sharply, by the transformer model size and by the number of tasks. (ii) That the ability to learn in this setting is highly correlated with the state (rather than model) size.
I also find very interesting the in-depth look into what is happening during the (previously observed) training plateau, that is typical before moving to generalization. The 3 suggested 'interventions' in the process are well motivated and demonstrated.
Weaknesses:
Although I find the observed phenomena and their explanations very interesting, I am concerned about how general they are, or whether they are very specific to the very limited setup that was chosen. This, in my view, limits the scope and impact of the findings, and I think there should have been more effort to relate to this point.
This is especially true, as the setup is different from the most standard one in (non-RL) meta-learning, which is few-shot learning. It is closely related, but focuses on the sequential version, rather than a support/query split.
In that regard too, the choice of producing tasks by random projections and orderings of MNIST, rather than the more common approach of taking subsets of classes from a dataset with a large variety of classes, like ImageNet, should be justified.
Also, would these results and conclusions generalize to non-black-box settings? to the transductive one?
Clarity, Quality, Novelty And Reproducibility
As mentioned, the paper is very clearly written, the analysis and presentation are at a great level of detail, and all the information needed for reproducibility is provided.
ICLR | Title
Meta-Learning General-Purpose Learning Algorithms with Transformers
Abstract
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
1 INTRODUCTION
Meta-learning is the process of automatically discovering new learning algorithms instead of designing them manually (Schmidhuber, 1987). An important quality of human-engineered learning algorithms is their applicability to a wide range of tasks or environments. For learning-to-learn to exceed those capabilities, the meta-learned learning algorithms must be similarly general-purpose. Recently, there has been significant progress toward this goal (Kirsch et al., 2019; Oh et al., 2020). The improved generality of the discovered learning algorithms has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, which encourage learning over memorization. Methods include restricting learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing or symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021).
While enabling generalization, these inductive biases come at the cost of increasing the effort to design these systems and potentially restrict the space of discoverable learning algorithms. Instead, we seek to explore general-purpose meta-learning systems with minimal inductive bias. Good candidates for this are black-box sequence-models as meta-learners such as LSTMs (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016) or Transformers (Vaswani et al., 2017). These memory-based or in-context learners take in training data and produce test-set predictions without any explicit definition of an inference model, training loss, or optimization algorithm. This has led to strong few-shot learning ability within the context of, for example, language modeling (Brown et al., 2020).
In this work, we investigate how such black-box meta-learners can be trained to (meta-)generalize and learn on significantly different datasets than used during meta-training. For this we propose a Transformer-based General-Purpose In-Context Learner (GPICL) which is described with an associated meta-training task distribution in Section 3. In Section 4.1 we characterize transitions—induced by scaling the number of tasks or the model size used for meta-training—between memorization, learning, and generalization. We further show in Section 4.2 that the capabilities of meta-trained algorithms are bottlenecked by their accessible state size determining the next prediction
(such as the hidden state size in a recurrent network), unlike standard models which are thought to be bottlenecked by parameter count. Finally, in Section 4.3, we propose practical interventions that improve the meta-training of general purpose learning algorithms.
2 BACKGROUND
What is a (supervised) learning algorithm? In this paper, we focus on the setting of meta-learning supervised learning algorithms. Consider a mapping

(\{x_i, y_i\}_{i=1}^{N_D},\, x') \mapsto y'    (1)

from the training (support) set D = \{x_i, y_i\}_{i=1}^{N_D} and a query input x' to the query's prediction y', where x_i, x' ∈ R^{N_x}, y_i, y' ∈ R^{N_y} and N_D, N_x, N_y ∈ N^+. The subset of these functions that qualify as learning algorithms are those that improve their predictions y' given an increasingly larger training set D. Meta-learning then corresponds to finding these functions via meta-optimization. As in other black-box meta-learning models, we use a neural network to represent such functions.
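For concreteness, one simple (hypothetical) instance of such a mapping is a nearest-neighbour predictor, which improves its predictions as the training set D grows; the snippet below is purely illustrative and not part of the proposed method:

    import jax.numpy as jnp

    def nearest_neighbour(train_xs, train_ys, x_query):
        # One example of the mapping in Equation 1: (D, x') -> y'.
        dists = jnp.sum((train_xs - x_query) ** 2, axis=-1)
        return train_ys[jnp.argmin(dists)]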
What is a general-purpose learning algorithm? A learning algorithm can be considered general-purpose if it learns on a wide range of possible tasks D and their respective related queries x′, y′. For example, gradient-descent on a suitable loss function can be considered a very general-purpose human-engineered learning algorithm (where the gradient is obtained via backpropagation or other means).
3 GENERAL-PURPOSE IN-CONTEXT LEARNING WITH TRANSFORMERS
Due to the small number of inductive biases in black-box models, we can only expect (meta)generalization when meta-training with an appropriately broad data distribution. Thus, changes in the data distribution affect whether and how a model meta-learns and meta-generalizes. We classify algorithms along two different dimensions: To what extent it learns (improving predictions given increasingly larger training sets), and to what extent it generalizes (performs well on instances, tasks, or datasets not seen before). Algorithms can then be categorized as follows:
Learning (seen tasks)    Generalization (unseen tasks)    Algorithm description
No                       No                               Instance memorization
Yes                      No                               System identification / Task memorization
No                       Yes                              Zero-shot generalization
Yes                      Yes                              General-purpose learning algorithm

(Inline figure: performance as a function of examples seen.)
We demonstrate that sharp phase transitions occur between these learning modalities, and empirically investigate these transitions.
3.1 GENERATING TASKS FOR LEARNING-TO-LEARN
Neural networks are known to require datasets of significant size to effectively generalize. While in standard supervised learning large quantities of data are common, meta-learning algorithms may require a similar number of distinct tasks in order to learn and generalize. Unfortunately, the number of commonly available tasks is orders of magnitude smaller compared to the datapoints in each task.
Previous work has side-stepped this issue by building-in architectural or algorithmic structure into the learning algorithm, in effect drastically reducing the number of tasks required. For example, in Kirsch & Schmidhuber (2020); Kirsch et al. (2021), the authors included symmetries into the
black-box model in the form of input and output permutation invariances. An alternative to this is the generation of new tasks (Schmidhuber, 2013; Clune, 2019; Such et al., 2020; Parker-Holder et al., 2022). Unfortunately, it is not easy to generate a wide range of tasks that are both diverse and contain structure as it can be found in the real world.
In this work, we take an intermediate step by augmenting existing datasets, in effect increasing the breadth of the task distribution based on existing task regularities. We generate a large number of tasks by taking existing supervised learning datasets, randomly projecting their inputs
and permuting their classification labels. While the random projection removes spatial structure from the inputs, this structure is not believed to be central to the task (for instance, the performance of SGD-trained fully connected networks is invariant to projection by a random orthogonal matrix (Wadia et al., 2021)). Task augmentation allows us to investigate fundamental questions about learning-to-learn in the regime of many tasks without relying on huge amounts of existing tasks or elaborate schemes to generate those.
A task or dataset D is then defined by its corresponding base dataset D̄ = {x̄_i, ȳ_i}, a (linear) projection A ∈ R^{N_x × N_x} with A_{ij} ∼ N(0, 1/N_x), and an output permutation ρ, giving D = {A x̄_i, ρ(ȳ_i)}. Unless noted otherwise, the distribution over output permutations p(ρ) is uniform.
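A minimal sketch of this task construction (an illustrative helper of our own, not the original code) could look as follows:

    import numpy as np

    def make_task(base_x, base_y, num_classes, rng):
        # base_x: (N, N_x) flattened inputs, base_y: (N,) integer labels.
        n_x = base_x.shape[-1]
        A = rng.normal(0.0, np.sqrt(1.0 / n_x), size=(n_x, n_x))  # A_ij ~ N(0, 1/N_x)
        rho = rng.permutation(num_classes)                         # output permutation
        return base_x @ A.T, rho[base_y]

    # Example: tasks = [make_task(x_base, y_base, 10, np.random.default_rng(k)) for k in range(K)]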
3.2 META-LEARNING
Given those generated tasks, we then meta-train jointly on a mini-batch sampled from the whole distribution. We minimize J(θ), the sum of losses on the query prediction after observing any prefix of a dataset D sampled from the augmented task distribution p(D)
J(\theta) = \mathbb{E}_{D \sim p(D)} \Big[ \sum_{j=1}^{N_D} l(f_\theta(D_{1:j-1}, x_j), y_j) \Big],    (2)

where in the classification setting, l is the cross-entropy loss between the label y_j and the prediction y' = f_θ(D_{1:j−1}, x_j), and f_θ is a neural network mapping to predictions y' as in Equation 1. During meta-training, we take gradient steps in J(θ) by backpropagation and Adam (Kingma & Ba, 2014). To investigate the effect of the data distribution, we train on various numbers of tasks (Algorithm 1). Finally, we need to choose a black-box model for the function f_θ. We use a vanilla Transformer (Vaswani et al., 2017) with learned positional embeddings, visualized in Figure 1. We call it the General-Purpose In-Context Learner (GPICL). Each token corresponds to the concatenation of a transformed input x_i and the one-hot encoded label y_{i−1}, predicting the corresponding logits y' = y_i for the current input x' = x_i. When querying for the first x_1, no label for the previous input is available, so we feed a zero vector.
Algorithm 1 Meta-Training for General-Purpose In-Context Learning (GPICL)
Require: Base dataset D̄ = {x̄_i, ȳ_i}, number of tasks K ∈ N^+
    {A^(k)_{ij}}_{k=1}^{K} ∼ N(0, 1/N_x)      ▷ Sample input projections
    {ρ^(k)}_{k=1}^{K} ∼ p(ρ)                  ▷ Sample output permutations
    D^(k) = {A^(k) x̄_i, ρ^(k)(ȳ_i)}
    p(D) := Uniform[{D^(k)}_{k=1}^{K}]
    while not converged do
        θ ← θ − α ∇_θ J(θ)                    ▷ Meta-train across tasks p(D) (Equation 2)
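As a rough sketch of the per-task objective (our own illustration; apply_fn stands in for the causal Transformer and all names are hypothetical), the sequence construction and the loss of Equation 2 could be written as:

    import jax
    import jax.numpy as jnp

    def gpicl_loss(apply_fn, params, xs, ys, num_classes):
        labels = jax.nn.one_hot(ys, num_classes)
        # Token j concatenates input x_j with the previous one-hot label y_{j-1};
        # the first token receives a zero vector in place of a label.
        prev_labels = jnp.concatenate([jnp.zeros((1, num_classes)), labels[:-1]], axis=0)
        tokens = jnp.concatenate([xs, prev_labels], axis=-1)
        logits = apply_fn(params, tokens)  # assumed causal Transformer, shape (N_D, num_classes)
        # Cross-entropy of every prediction against its label, averaged over the sequence.
        return -jnp.mean(jnp.sum(labels * jax.nn.log_softmax(logits), axis=-1))

Averaging this loss over a mini-batch of tasks drawn from p(D) and taking Adam steps on θ implements the meta-training loop above.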
Meta-testing At meta-test time, no gradient-based learning is used. Instead, we simply obtain a prediction y′ by evaluating the neural network fθ on the training dataset D and query point x′.
4 EXPERIMENTS ON THE EMERGENCE OF GENERAL LEARNING-TO-LEARN
Multi-task training with standard classifiers Given a task distribution of many different classification tasks, we first ask under what conditions we expect “learning-to-learn” to emerge. We train a single model across many tasks where each task corresponds to a random transformation of the MNIST dataset, but where the MLP only receives a single datapoint instead of a whole sequence as input. This corresponds to ND = 1 in Equation 2. We would expect such a non-sequential classifier to be able to correctly predict for more tasks as its number of parameters increases. When plotting the network capacity against the number of tasks, we indeed observe a linear boundary where an increasing number of tasks can be fit the larger the network (Figure 2a). This is consistent with results in Collins et al. (2016), which found that a constant number of bits about the data distribution can be stored per model parameter, across a variety of model architectures and scales.
Learning-to-learn with large sequential models and data In contrast to the MLP classifier, a sequence model that observes multiple observations and their labels from the same task could exceed that linear performance improvement by learning at inference time. Indeed, we observe that when switching to a Transformer that can observe a sequence of datapoints before making a prediction about the query, more tasks can be simultaneously fit (Figure 2b). At a certain model size and number of tasks, the model undergoes a phase transition, allowing it to generalize to a seemingly unbounded number of tasks. We hypothesize that this is due to switching the prediction strategy from memorization to learning-to-learn. Further, when (meta-)testing the same trained models from the previous experiment on an unseen task (new random transformation of MNIST), they generalize only in the regime of large numbers of tasks and model size (Figure 2c). As an in-context learner, meta-testing does not involve any gradient updates but only running the model in forward mode.
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least 2^13 = 8192 tasks.
In the following, we study learning-to-learn from the perspective of the data distribution, the architecture, and the optimization dynamics. For the data distribution, we look at how the data diversity affects the emergence and phase transitions of learning-to-learn, generalization, and memorization. For architecture, we analyze the role of the model size and state size in various architectures. Finally, we observe challenges in meta-optimization and demonstrate how memorization followed by generalization is an important mechanism that can be facilitated by explicitly biasing the data distribution.
4.1 THE LARGE DATA REGIME: GENERALIZATION AND PHASE TRANSITIONS
Simple invariances in data lead to the emergence of learning-to-learn To verify whether the observed generalizing solutions actually implement learning algorithms (as opposed to e.g. zero-shot
generalization), we analyze the meta-test time behavior. We plot the accuracy for a given query point given varying numbers of seen examples in Figure 3. As it is typical for learning algorithms, the performance improves given an increasingly large set of seen examples (inputs and labels).
Generalization Naturally, the question arises to what extent these learning algorithms are general. While we have seen generalization to unseen tasks consisting of novel projections of the same dataset, do the learned algorithms also generalize to unseen datasets? In Figure 3 we observe out-of-distribution performance on Fashion MNIST after having trained on MNIST (b, blue). In this direction, there is no generalization gap to directly training on Fashion MNIST (b, orange). Similarly, when meta training on Fashion MNIST
and meta testing on MNIST (a, orange) we observe that the learning algorithm generalizes, albeit with a larger generalization gap.
Comparison to other methods Other datasets and baselines are shown in Table 1. In particular, rather than focusing on SOTA, we aim to validate whether methods with less inductive bias (such as our GPICL) can compete with methods that include more biases suitable to learning-to-learn. This includes stochastic gradient descent (SGD) that updates the parameters online after observing each datapoint. MAML (Finn et al., 2017) proceeds like SGD, but uses a meta-learned neural network initialization. Both methods that rely on backpropagation and gradient descent learn more slowly compared to our Transformer. In the case of MAML, this may be due to the main mechanism being feature reuse (Raghu et al., 2020), which is less useful when training across our wider task distribution. Among methods that do not hard-code gradient descent at meta-test time, we test VSML (Kirsch & Schmidhuber, 2020), which discovered learning algorithms significantly generalizing between tasks. Our GPICL comes surprisingly close to VSML without requiring the associated inductive bias. Finally, we compare to a standard LSTM that is trained with the same inputs as our Transformer. We observe that it performs worse, which we investigate further.
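As an illustrative sketch of the online SGD/Adam baseline (our own code, not the original implementation; a linear classifier with no hidden layers, updated after every example, as detailed in Appendix A.6):

    import jax
    import jax.numpy as jnp
    import optax

    def online_baseline(xs, ys, num_classes, lr=1e-3, seed=0):
        # Linear classifier updated online on each of the examples, in sequence.
        n_features = xs.shape[-1]
        params = {
            'w': 0.01 * jax.random.normal(jax.random.PRNGKey(seed), (n_features, num_classes)),
            'b': jnp.zeros(num_classes),
        }
        opt = optax.adam(lr)
        opt_state = opt.init(params)

        def loss_fn(p, x, y):
            logits = x @ p['w'] + p['b']
            return optax.softmax_cross_entropy(logits, jax.nn.one_hot(y, num_classes))

        for x, y in zip(xs, ys):  # one gradient update per data point
            grads = jax.grad(loss_fn)(params, x, y)
            updates, opt_state = opt.update(grads, opt_state)
            params = optax.apply_updates(params, updates)
        return params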
Insight 2: Simple data augmentations are effective for learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Transitioning from memorization to learning to generalizing When do the found solutions correspond to memorizing, learning, and generalizing solutions? In Figure 4, we plot the accuracy difference between the last and first prediction for a seen task, an unseen task, and an unseen task with a different base dataset. We observe three phases: In the first phase, we memorize each instance, resulting in no within-sequence performance improvement. In the second phase,
we memorize tasks and learn to identify tasks, resulting in a within-sequence improvement confined to seen task instances. In the final and third phase, we observe a more general learning-to-learn, a performance improvement for unseen tasks, even different base datasets (here FashionMNIST). The last transition is very discrete with separate meta-training runs either finding a solution of the task memorization or general learning-to-learn type (see Appendix A.1).
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn.
4.2 ARCHITECTURE: A LARGE STATE IS CRUCIAL FOR LEARNING
In the previous experiments we observed that given sufficient task diversity and model size, Transformers can learn general-purpose learning algorithms. This raises the question of how essential the Transformer architecture is and whether other black-box models could be used. We hypothesize that for learning-to-learn the size of the memory at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. Through self-attention, Transformers have a particularly large state. We test this by training several architectures with various state sizes in our meta-learning setting. In Figure 5a, we observe that when we vary the respective hyper-parameters which most influence the state size, we obtain similar performance of the discovered learning algorithm across architectures for a given state size. In contrast, these architectures have markedly different numbers of parameters (Figure 5b).
Insight 4: Large state is more crucial than parameter count This suggests that the model size in terms of parameter count plays a smaller role in the setting of learning-to-learn and Transformers have benefited in particular from an increase in state size by self-attention. Beyond learning-to-learn, this likely applies to other tasks that rely on storing large amounts of sequence-specific information.
4.3 CHALLENGES IN META-OPTIMIZATION
Meta-optimization is known to be challenging. Meta gradients (Finn et al., 2017; Xu et al., 2018; Bechtle et al., 2021) and works with parameter sharing or weight updates in their architecture (Kirsch & Schmidhuber, 2020; Pedersen & Risi, 2021; Risi, 2021) observed various difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training (see Appendix Figure 18). We show that some of these problems also occur with black-box models and propose effective interventions.
Loss plateaus when meta-learning with black-box models By training across a large number of randomly transformed tasks, memorizing any task-specific information is difficult. Instead, the model is forced to find solutions that are directly learning. We observe that this results in (meta)loss plateaus during meta-training where the loss only decreases slightly for long periods of time (Figure 6a). Only after a large number of steps (here around 35 thousand) does a drop in loss occur. In the loss plateau, the generalization loss increases on unseen tasks from both the same and a different base dataset (Figure 6b). This suggests that being able to first memorize slightly enables the following learning-to-learn phase. Furthermore, we observe that all gradients have a very small norm with exception of the last layer (Appendix Figure 14).
Intervention 1: Increasing the batch size High variance gradients appear to be one reason training trajectories become trapped on the loss plateau (see Appendix Figures 12, 13). This suggests increasing the meta-batch size as a straightforward solution. When plotting various batch sizes against numbers of tasks we obtain three kinds of solutions at the end of meta-training (Figure 7a): (1) Solutions that generalize and learn, (2) Solutions that memorize, and (3) Solutions that are still in the loss plateau (due to maximum of 50 thousand optimization steps). The larger the batch size, the more tasks we can train on without getting stuck in a loss plateau. When plotting the length of the loss plateau against the task batch size (Figure 7b) we observe a power-law relationship with increasing batch sizes decreasing the plateau length. At the same time, the batch size also increases the number of total tasks seen in the plateau (Appendix Figure 15). Thus, this intervention relies on parallelizability. An increase in the number of tasks also increases the plateau length (Figure 7c). This may be due to a larger number of tasks making the initial memorization phase more difficult.
Intervention 2: Changes in the meta-optimizer Given that many gradients in the loss plateau have very small norm, Adam would rescale those element-wise, potentially alleviating the issue. In practice, we observe that the gradients are so small that the ε in Adam's gradient-rescaling denominator (for numerical stability) limits the up-scaling of small gradients. Using a smaller ε results in more than halving the plateau length. Alternatively, discarding the magnitude of the gradient entirely by applying the sign operator to an exponential moving average of the gradient (replacing Adam's approximate magnitude normalization with direct magnitude normalization) has a similar effect while also increasing the numerical stability over Adam with a small ε (Appendix Figure 16).
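A minimal sketch of this alternative update (our own illustration; the learning rate and decay values are placeholders, not tuned settings from the paper):

    import jax
    import jax.numpy as jnp

    def sign_ema_update(params, grads, ema, lr=1e-4, decay=0.9):
        # Exponential moving average of the gradient, followed by a sign-based (magnitude-free) step.
        new_ema = jax.tree_util.tree_map(lambda m, g: decay * m + (1 - decay) * g, ema, grads)
        new_params = jax.tree_util.tree_map(lambda p, m: p - lr * jnp.sign(m), params, new_ema)
        return new_params, new_ema

    # ema can be initialized as jax.tree_util.tree_map(jnp.zeros_like, params).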
Intervention 3: Biasing the data distribution / Curricula GPICL mainly relies on the data distribution for learning-to-learn. This enables a different kind of intervention: Biasing the data distribution. The approach is inspired by the observation that before leaving the loss plateau the model memorizes biases in the data. Instead of sampling label permutations uniformly at random, we bias towards a specific permutation by using a fixed permutation for a fraction of each batch. This completely eliminates the loss plateau, enabling a smooth path from memorizing to learning (Figure 8). Surprisingly, even when heavily biasing the distribution, memorization is followed by generalization. This biased data distribution can be viewed as a curriculum, solving an easier problem first that enables the subsequent harder learning-to-learn. Further investigation is re-
quired to understand how this transition occurs. This may be connected to grokking (Power et al., 2022) which we investigate in Appendix A.7. We hypothesize that many natural data distributions— including language—contain such sub-tasks that are easy to memorize followed by generalization.
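One simple way to implement this bias is sketched below (illustrative only; whether the bias is applied per task or per batch element is an implementation detail of the data pipeline):

    import numpy as np

    def sample_permutation(num_classes, bias_fraction, fixed_perm, rng):
        # With probability bias_fraction reuse one fixed label permutation, otherwise sample uniformly.
        if rng.random() < bias_fraction:
            return np.asarray(fixed_perm)
        return rng.permutation(num_classes)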
4.4 COMBINING DOMAIN-SPECIFIC AND GENERAL-PURPOSE LEARNING
We demonstrated the feasibility of meta-learning in-context learning algorithms that are general-purpose. An even more useful learning algorithm would be capable of both generalizing and leveraging domain-specific information for learning when it is available. This would allow for considerably more efficient in-context learning, scaling to more difficult datasets without very long input sequences. Toward this goal, we investigate a simple scheme that leverages pre-trained neural networks as features to learn upon. This could be from an unsupervised learner or a frozen large language model (Radford et al., 2021; Tsimpoukelli et al., 2021). Here, we first project the inputs x̄_i of a base-dataset D̄ into some latent space using a pre-trained network, and then proceed with meta-training and meta-testing as before, randomly projecting these alternative features. For the pre-trained network, we use a ResNet trained on ImageNet and remove its final layer. In Figure 9 we have meta-trained GPICL on MNIST either with the randomly transformed raw inputs or randomly transformed embedded features. At meta-test time the learning algorithm generalizes to a
wide range of datasets, measured by the meta-test accuracy of the 100th example. At the same time, the pre-trained ImageNet features help to accelerate learning on datasets that have a matching domain,
such as CIFAR10. We observe that with only 100 examples, the learning algorithm meta-trained on MNIST can achieve about 45% accuracy on CIFAR10.
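A sketch of this pipeline (encoder stands in for the frozen ImageNet ResNet with its final layer removed; names are illustrative, not the original code):

    import numpy as np

    def embed_then_project(images, encoder, rng):
        feats = encoder(images)                 # frozen pre-trained features, shape (N, N_f)
        n_f = feats.shape[-1]
        A = rng.normal(0.0, np.sqrt(1.0 / n_f), size=(n_f, n_f))
        return feats @ A.T                      # randomly projected features, as for raw pixels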
5 RELATED WORK
Inductive biases in meta-learning Meta-learning approaches exist with a wide range of inductive biases, usually inspired by existing human-engineered learning algorithms. Some methods prewire the entire learning algorithm (Finn et al., 2017), pre-wire backpropagation and the structure of a gradient-based optimizer (Andrychowicz et al., 2016; Metz et al., 2019; 2020a), or hard-code gradient-based optimization but learn the loss function (Houthooft et al., 2018; Kirsch et al., 2019; Bechtle et al., 2021). Many methods search over hyper-parameters that alter existing learning algorithms (Xu et al., 2018; Metz et al., 2020b; Chen et al., 2022). Fast weight programmers or hypernetworks update the weights of the same or another neural network (Schmidhuber, 1992; 1993a; Ha et al., 2017; Irie et al., 2021; Sandler et al., 2021; Kirsch & Schmidhuber, 2022; Zhmoginov et al., 2022). Our work aims to keep such inductive biases to a minimum.
General-purpose meta-learning There has been growing interest in meta-learning more generalpurpose learning algorithms. The improved generality of the discovered learning algorithm has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, encouraging learning over memorization. Methods include enforcing learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing and symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021). Parameter sharing and symmetries have also been discussed in the context of self-organization (Tang & Ha, 2021; Risi, 2021; Pedersen & Risi, 2022).
Black-box meta-learning: MetaRNNs, RL2, in-context learning In contrast to these inductive biases, neural networks can also learn-to-learn purely in their activations with little architectural and algorithmic biases (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016; Ortega et al., 2019). This requires a feedback signal in the inputs that allows for learning such as the reward in reinforcement learning or label in supervised learning (Schmidhuber, 1993b). While a frequently used architecture is the LSTM (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), this mechanism has also seen substantial recent attention in Transformer models (Brown et al., 2020; Chan et al., 2022) under the name of in-context learning. We refer to these networks simply as black-box meta-learners. Our method GPICL is in the class of these black-box meta-learners. In contrast to previous methods, GPICL implements general-purpose learning algorithms. Independently, Garg et al. (2022) recently studied generalization on synthetic functions compared to our augmented datasets. PFNs (Mikulik et al., 2020) demonstrated learning to learn on small tabular datasets when meta-training on synthetically generated problems. Experiments on more complex classification settings such as Omniglot relied on fine-tuning. In comparison, our method investigated generalization of learning algorithms directly to datasets such as MNIST, Fashion MNIST, and CIFAR10.
6 DISCUSSION AND CONCLUSION
By generating tasks from existing datasets, we demonstrated that black-box models such as Transformers can be used to meta-learn general-purpose in-context learning algorithms (GPICL). We observed that learning-to-learn arises in the regime of large models and large numbers of tasks, with several phase transitions from instance memorization, to system identification, to general learning. Across various neural network architectures, the size of the memory or model state largely determines how well a model can learn how to learn. We identified difficulties in meta-optimization and proposed interventions in terms of optimizers, hyper-parameters, and a biased data distribution acting as a curriculum. We believe our findings open up new possibilities of data-driven general-purpose meta-learning with minimal inductive bias.
A current limitation is the applicability of the discovered learning algorithms to arbitrary input and output sizes. Appropriate tokenization to unified representations may solve this (Chowdhery et al., 2022). Furthermore, learning algorithms often process millions of inputs before outputting the final model. In the current black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas Transformer’s computational complexity grows quadratically in the sequence length.
A APPENDIX
A.1 SOLUTIONS ARE MEMORIZING OR GENERALIZING
When do the found solutions correspond to memorizing vs generalizing solutions? In Figure 2 we observe a fairly discrete transition between memorizing and generalizing solutions as a function of the number of tasks. To analyze this transition, we perform multiple training runs with varying seeds and numbers of tasks in Figure 10, reporting the final training loss. We find that the distribution is bi-modal. Solutions at the end of training are memorizing or generalizing. Memorization cluster: The larger the number of tasks, the more difficult it is to memorize all of them with a fixed model capacity. Generalization cluster: At a certain number of tasks (here 6 thousand), a transition point is reached where optimization sometimes discovers a lower training loss that corresponds to a generalizing solution. For larger numbers of tasks the solutions always settle in the generalizing cluster.
A.2 WHAT CORRESPONDS TO STATE (MEMORY) IN VARIOUS ARCHITECTURES?
We hypothesize that for learning-to-learn the size of the memory NS at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. We test this by training several architectures with various NS in our meta-learning setting. Memory in the context of recurrent neural networks corresponds to the hidden state or context vector of size NH , thus NS ∈ O(NH). More generally, we can describe the state as the information bottleneck that the sequence has to pass through before making predictions. In the context of learning-to-learn, this state has to hold information about everything that has been learned so far. Standard learning algorithms such as neural networks trained via SGD would have a state that corresponds to the neural network parameters, iteratively updated via SGD. In transformers, self-attention allows for a particularly large state of NS ∈ O(NKNLNT ) where NK is the size of key, value, and query, NL is the number of layers, and NT is the length of the sequence.
A.3 SUMMARY OF INSIGHTS
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least 2^13 = 8192 tasks.
Insight 2: Simple data augmentations are effective for general learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn. The last transition is discrete, with two unique clusters.
Insight 4: Large state is more crucial than parameter count We conclude that the specific inductive biases of each architecture matter to a smaller degree. The driving factor behind their ability to learn how to learn is the size of their state. Furthermore, this suggests that the model size in terms of numbers of parameters plays a smaller role in the setting of learning-to-learn, and Transformers have benefited in particular from an increase in state size by self-attention. In non-meta-learning sequence tasks parameter count is thought to be the performance bottleneck (Collins et al., 2016). Beyond learning-to-learn, this likely applies to other tasks that rely on processing and storing large amounts of sequence-specific information.
A.4 LIMITATIONS
Varying input and output sizes Compared to some previous works in meta-learning (Andrychowicz et al., 2016; Finn et al., 2017; Kirsch & Schmidhuber, 2020), the discovered learning algorithms are not applicable to an arbitrary input and output size which makes it more difficult to apply the learning algorithm to a new, unseen problem. This problem also applies to Transformers applied to multiple tasks and modalities. Related work has solved this problem by tokenizing inputs to compatible, unified representations (Chowdhery et al., 2022). We expect these techniques or others to be useful in the learning-to-learn context too.
Processing large datasets Learning algorithms often process millions of inputs before outputting the final model. In the black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas the Transformer's computational complexity grows quadratically in the sequence length. Additional work is required to build models capable of processing and being trained on long sequences. Alternatively, parallel processing, similar to batching in learning algorithms, may be a useful building block.
A.5 ARCHITECTURAL DETAILS AND HYPER-PARAMETERS
Transformer details By default, all Transformers have a key, value, and query size of 32, 8 heads, 4 layers, and a model size of NM = 256. The model size defines the dimensionality of each token, and the MLP between layers scales this up to a hidden representation of 4 × NM.
Outer-product LSTM We slightly modify an LSTM by replacing the context state with an outer-product update and inner-product read-out.
    # Reconstructed core of the outer-product LSTM cell. This is a fragment of a Haiku
    # module: `inputs`, `prev_state`, `size`, `self.num_heads`, `batch_size`, and the
    # helpers `checkpoint_name` and `split_axis` are assumed to come from the surrounding code.
    import numpy as np
    import jax
    import jax.numpy as jnp
    import haiku as hk

    x_and_h = jnp.concatenate([inputs, prev_state.hidden], axis=-1)

    gated = hk.Linear(8 * size * self.num_heads)(x_and_h)
    gated = gated.reshape((batch_size, self.num_heads, 8 * size))
    gated = checkpoint_name(gated, 'gated')

    # i = input, g = cell gate, f = forget gate, q = query, o = output gate
    sizes = (3 * size, 3 * size, size, size)
    indices = np.cumsum(sizes[:-1])
    k1, k2, q, o = jnp.split(gated, indices, axis=-1)
    scale = jax.nn.softplus(
        hk.get_parameter('key_scale', shape=(), dtype=k1.dtype, init=jnp.zeros))

    # Outer-product update of the matrix-valued cell state.
    i, g, f = jnp.einsum(
        'bhki,bhkj->kbhij',
        jax.nn.tanh(split_axis(k1, (3, size))) * scale,
        jax.nn.tanh(split_axis(k2, (3, size))))
    f = jax.nn.sigmoid(f + 1)  # forget bias
    c = f * prev_state.cell + jax.nn.sigmoid(i) * g
    # Inner-product read-out from the cell state.
    read = jnp.einsum('bhij,bhi->bhj', c, q)
    h = hk.Flatten()(jax.nn.sigmoid(o) * jnp.tanh(read))
VSML We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. Each LSTM has a hidden size of 16. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 8. All images are scaled to a size of 32 × 32 × 3.
VSML without symmetries Before activations are fed to a standard instantiation of VSML, all inputs are projected using a learnable linear projection. Logits are generated using another linear projection, followed by a softmax. We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. The LSTMs are on a grid of k × k LSTMs, where k ∈ {1, 2, 4, 8, 16, 24}. Each LSTM has a hidden size of 64. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 14 × 14.
LSTM For the results in Table 1, we used a hidden size of 256 and 10^5 optimization steps. Larger hidden sizes were harder to optimize. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3.
A.6 EXPERIMENTAL DETAILS
Most experiments can be run on a single GPU, some require 16 GPUs due to sequence length and large batch sizes, with sufficient GPU memory (around 16 GB each). Some experiments, such as Figure 2, require up to 1000 runs of that kind to produce the final heat-map.
Input normalization Each dataset is z-normalized by its mean and standard deviation across all examples and pixels.
Number of seeds and shading If not noted otherwise, line plots use 8 seeds for meta-training and at least 512 seeds for meta-testing. Shading indicates 95% confidence intervals.
Figure 2 The MLP has two hidden layers of varying size with relu activations. The Transformer has the default parameters as defined above.
Figure 3 We use a Transformer model with a model size of 256. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3. Inputs are z-normalized across the dataset and all input dimensions.
Table 1 The SGD baseline was obtained by sweeping over learning rates from 10^-4 to 0.5, optimizers SGD, Adam, and Adam with weight decay, one or two layers, and hidden sizes of 32, 64, or 128 on MNIST. The best configuration (most sample efficient) corresponds to a learning rate of 10^-3, Adam, and no hidden layers. SGD performs updates online on each one of the 100 data points. MAML is equivalent to SGD up to the difference that we meta-train the weight initialization according to Equation 2, where θ are the initial parameters of the classifier that is then updated using SGD at meta-test time. All black-box approaches do not use gradient descent at meta-test time. All meta-learning approaches were meta-trained and tuned via grid search on MNIST.
Figure 10 We trained a Transformer with model size 64 and 32 seeds for each number-of-tasks configuration.
Figure 4 Input normalization is disabled.
Figure 5 The Transformer uses a task batch size of 512.
Figure 6 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 7 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 8 Trained on 2^16 tasks generated from FashionMNIST with label permutations varied.
A.7 ADDITIONAL EXPERIMENTS
Sequence length In all experiments of the main paper we have meta-trained on a sequence length (number of examples) of 100. This is a small training dataset compared to many human-engineered learning algorithms. In general, as long as the learning algorithm does not overfit the training data, more examples should increase the predictive performance. In Figure 11 we investigate how our model scales to longer sequence lengths. We observe that the final accuracy of the last query in the sequence consistently increases with longer sequences. The generalization to longer sequences than those seen during meta-training is another important direction for future work.
Gradient and update statistics To better understand the properties of the loss plateau, we visualize different statistics of the gradients, optimizer, and updates. In Figure 12, we track the exponential moving average statistics of Adam before the loss plateau and after (dashed vertical line). In Figure 13 we investigate how gradients differ between settings with a plateau and settings with a biased distribution where the plateau is avoided. We plot the cosine similarity between consecutive optimization steps, the gradient L2-norm, and the similarity and norm of the weight updates after normalization with Adam. The statistics are plotted cumulatively or smoothed with a Gaussian filter for better readability. The gradient and update cosine similarity differ only marginally between cases with a plateau and cases without. We observe that the gradient L2-norm in the plateau is half as big
as in the biased distribution case, although the updates that Adam applies are going towards zero. This also results in not moving far from parameter initialization when in the plateau. We hypothesize this has to do with varying gradient norms when looking at individual parameter tensors (Figure 14). We observe that the gradients have a small norm for most tensors, except for the last layer.
Batch size and number of tasks influence on plateau length Instead of looking at the plateau length in terms of the number of steps (Figure 7), we may also be concerned with the total number of tasks seen within the plateau. This is relevant in particular when the task batch is not processed fully in parallel but gradients are accumulated. Figure 15 shows the same figure but with the number of tasks in the plateau on the y-axis instead. It can be observed that larger batch-sizes actually increase the data requirement to leave the plateau, despite decreasing the plateau in terms of the number of optimization steps. Similarly, a larger task training distribution requires a larger number of tasks to be seen within the plateau.
Adjusting Adam's ε or changing the optimizer As discussed in the main paper and visualized in Figure 16b, decreasing ε significantly shortens the plateau. This is due to the rescaling of very small gradient magnitudes being limited by ε. At the same time it incurs some instability. Directly normalizing the gradient by applying the sign function element-wise (Figure 16a) to the exponential gradient average shortens the plateau even further.
When memorization happens, can we elicit grokking? In Figure 7a we have seen that an insufficiently large task distribution can lead to memorization instead of general learning-to-learn. At the same time, Figure 8 showed that biasing the data distribution is helpful to avoid loss plateaus. Power et al. (2022) observed a phenomenon which they called “grokking” in which even after having converged in terms of training loss, test loss may suddenly decrease. Large amounts of regularization, like weight decay with a coefficient of 1.0 were found to facilitate this behavior. Is grokking
connected to the optimization behavior we observe, and if so, do similar interventions help in our setting? We look in particular at the boundary of memorization and generalization (2^14 = 16384 tasks), where doubling the number of tasks a few more times would lead to generalization. Figure 17 shows three task settings, 2^10, 2^14, and 2^16, and three different weight decay coefficients, 0.01, 0.1, 1.0. The setting of 2^16 tasks shows generalization by default and only serves as a baseline for the weight decay coefficient analysis. In the cases of memorization due to too few tasks, we have not been able to produce grokking behavior.
Optimization difficulties in VSML Previous work has observed several optimization difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training. Figure 18 shows some of these difficulties in the context of VSML (Kirsch & Schmidhuber, 2020). Because VSML has permutation invariance built into the architecture as an inductive bias, changing the number of tasks has only a small effect. We observe that in particular deeper architectures make meta-optimization more difficult. | 1. What is the focus of the paper regarding transformers in meta-learning?
2. What are the strengths and weaknesses of the paper's empirical analysis?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or suggestions regarding the paper's findings, interventions, or future improvements? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper performs an extensive empirical analysis of transformers used as general-purpose meta-learning algorithms, in which a transformer takes a sequence of training data and a test point and outputs a prediction. The authors make a number of interesting findings, including phase transitions when a transformer is able to generalize to unseen tasks and when a transformer is able to generalize to unseen instances within a task. The authors also demonstrate the importance of a large state to be able to learn a task. Finally, the authors propose a number of interventions to increase the ease of meta-optimization, including increasing meta-batch sizes.
Strengths And Weaknesses
Strengths The paper's empirical analysis is extensive and comprehensive. The authors make a number of interesting observations that to my knowledge have not been made by prior literature. Moreover, the suggestions on interventions to improve meta-optimization are insightful and may be valuable to the community.
Weaknesses One weakness of the paper is that experiments are primarily conducted on simple datasets like MNIST. This work would be more significant if experiments could be conducted on more complex datasets.
Since the paper's main focus is on explaining the properties of transformers as meta-learners, it would be helpful to have an expanded related work section explaining in more detail the previously known properties of transformers.
Finally, the distinction between learning and generalization in the introduction of Section 3 is a little unclear to me. It would be helpful if the authors could provide examples of the 4 algorithms in the table here to make this more concrete.
Clarity, Quality, Novelty And Reproducibility
Quality The empirical analysis in this paper is thorough. The experiments conducted by the authors are well justified, and all claims made by the authors are sufficiently backed up by experiments. As mentioned above, conducting experiments on more complex datasets would increase the significance of the paper.
Clarity The paper is generally well written and figures are well illustrated. As mentioned above, more sharply defining the distinction between learning and generalization early in the paper would be helpful.
Originality To my knowledge, the insights found by the authors in the paper are novel. However, it would be helpful if the authors could more explicitly separate their new contributions from prior knowledge in the literature; as mentioned above the authors may wish to expand their related works section. |
ICLR | Title
Meta-Learning General-Purpose Learning Algorithms with Transformers
Abstract
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of meta-learning is to train general-purpose learning algorithms from scratch, using only black-box models with minimal inductive bias. Such a model takes in training data, and produces test-set predictions across a wide range of problems, without any explicit definition of an inference model, training loss, or optimization algorithm. In this paper we show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners. We characterize phase transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all, induced by changes in model size, number of tasks, and meta-optimization. We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count. Finally, we propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose learning algorithms.
1 INTRODUCTION
Meta-learning is the process of automatically discovering new learning algorithms instead of designing them manually (Schmidhuber, 1987). An important quality of human-engineered learning algorithms is their applicability to a wide range of tasks or environments. For learning-to-learn to exceed those capabilities, the meta-learned learning algorithms must be similarly general-purpose. Recently, there has been significant progress toward this goal (Kirsch et al., 2019; Oh et al., 2020). The improved generality of the discovered learning algorithms has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, which encourages learning over memorization. Methods include restricting learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing or symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021).
While enabling generalization, these inductive biases come at the cost of increasing the effort to design these systems and potentially restrict the space of discoverable learning algorithms. Instead, we seek to explore general-purpose meta-learning systems with minimal inductive bias. Good candidates for this are black-box sequence models as meta-learners such as LSTMs (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016) or Transformers (Vaswani et al., 2017). These memory-based or in-context learners take in training data and produce test-set predictions without any explicit definition of an inference model, training loss, or optimization algorithm. This has led to strong few-shot learning ability within the context of, for example, language modeling (Brown et al., 2020).
In this work, we investigate how such black-box meta-learners can be trained to (meta-)generalize and learn on significantly different datasets than used during meta-training. For this we propose a Transformer-based General-Purpose In-Context Learner (GPICL), which is described with an associated meta-training task distribution in Section 3. In Section 4.1 we characterize transitions—induced by scaling the number of tasks or the model size used for meta-training—between memorization, learning, and generalization. We further show in Section 4.2 that the capabilities of meta-trained algorithms are bottlenecked by their accessible state size determining the next prediction
(such as the hidden state size in a recurrent network), unlike standard models which are thought to be bottlenecked by parameter count. Finally, in Section 4.3, we propose practical interventions that improve the meta-training of general purpose learning algorithms.
2 BACKGROUND
What is a (supervised) learning algorithm? In this paper, we focus on the setting of meta-learning supervised learning algorithms. Consider a mapping
$\big( \{x_i, y_i\}_{i=1}^{N_D},\; x' \big) \mapsto y'$ (1)
from the training (support) set $D = \{x_i, y_i\}_{i=1}^{N_D}$ and a query input $x'$ to the query's prediction $y'$, where $x_i, x' \in \mathbb{R}^{N_x}$, $y_i, y' \in \mathbb{R}^{N_y}$ and $N_D, N_x, N_y \in \mathbb{N}^+$. The subset of these functions that qualify as learning algorithms are those that improve their predictions $y'$ given an increasingly larger training set $D$. Meta-learning then corresponds to finding these functions via meta-optimization. As in other black-box meta-learning models, we use a neural network to represent such functions.
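As a concrete (and deliberately trivial) illustration of such a mapping, a nearest-neighbour predictor already has the required signature; the names below are ours:
import numpy as np

def nearest_neighbour_learner(support_xs, support_ys, x_query):
    # Implements the mapping ({x_i, y_i}, x') -> y' from Equation (1):
    # predict the label of the closest support example. Its predictions
    # tend to improve as the support set grows, so it qualifies as a
    # (simple) learning algorithm in the sense described above.
    distances = np.linalg.norm(support_xs - x_query, axis=-1)
    return support_ys[np.argmin(distances)]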
What is a general-purpose learning algorithm? A learning algorithm can be considered general-purpose if it learns on a wide range of possible tasks D and their respective related queries x′, y′. For example, gradient-descent on a suitable loss function can be considered a very general-purpose human-engineered learning algorithm (where the gradient is obtained via backpropagation or other means).
3 GENERAL-PURPOSE IN-CONTEXT LEARNING WITH TRANSFORMERS
Due to the small number of inductive biases in black-box models, we can only expect (meta)generalization when meta-training with an appropriately broad data distribution. Thus, changes in the data distribution affect whether and how a model meta-learns and meta-generalizes. We classify algorithms along two different dimensions: To what extent it learns (improving predictions given increasingly larger training sets), and to what extent it generalizes (performs well on instances, tasks, or datasets not seen before). Algorithms can then be categorized as follows:
Learning (seen tasks) | Generalization (unseen tasks) | Algorithm description
No | No | Instance memorization
Yes | No | System identification / Task memorization
No | Yes | Zero-shot generalization
Yes | Yes | General-purpose learning algorithm
[Small inline figure: schematic curves of performance vs. examples seen for each algorithm class.]
We demonstrate that sharp phase transitions occur between these learning modalities, and empirically investigate these transitions.
3.1 GENERATING TASKS FOR LEARNING-TO-LEARN
Neural networks are known to require datasets of significant size to effectively generalize. While in standard supervised learning large quantities of data are common, meta-learning algorithms may require a similar number of distinct tasks in order to learn and generalize. Unfortunately, the number of commonly available tasks is orders of magnitude smaller than the number of datapoints in each task.
Previous work has side-stepped this issue by building-in architectural or algorithmic structure into the learning algorithm, in effect drastically reducing the number of tasks required. For example, in Kirsch & Schmidhuber (2020); Kirsch et al. (2021), the authors included symmetries into the
black-box model in the form of input and output permutation invariances. An alternative to this is the generation of new tasks (Schmidhuber, 2013; Clune, 2019; Such et al., 2020; Parker-Holder et al., 2022). Unfortunately, it is not easy to generate a wide range of tasks that are both diverse and contain structure as it can be found in the real world.
In this work, we take an intermediate step by augmenting existing datasets, in effect increasing the breadth of the task distribution based on existing task regularities. We generate a large number of tasks by taking existing supervised learning datasets, randomly projecting their inputs
and permuting their classification labels. While the random projection removes spatial structure from the inputs, this structure is not believed to be central to the task (for instance, the performance of SGD-trained fully connected networks is invariant to projection by a random orthogonal matrix (Wadia et al., 2021)). Task augmentation allows us to investigate fundamental questions about learning-to-learn in the regime of many tasks without relying on huge amounts of existing tasks or elaborate schemes to generate those.
A task or dataset $D$ is then defined by its corresponding base dataset $\bar{D} = \{\bar{x}_i, \bar{y}_i\}$, a (linear) projection $A \in \mathbb{R}^{N_x \times N_x}$ with $A_{ij} \sim \mathcal{N}(0, \tfrac{1}{N_x})$, and an output permutation $\rho$, giving $D = \{A\bar{x}_i, \rho(\bar{y}_i)\}$. Unless noted otherwise, the distribution over output permutations $p(\rho)$ is uniform.
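A minimal sketch of this task augmentation (our own naming; details such as flattening, dtypes, and batching are omitted):
import numpy as np

def make_task(base_xs, base_ys, num_classes, rng):
    # base_xs: [num_examples, N_x] flattened inputs; base_ys: integer labels.
    n_x = base_xs.shape[-1]
    # A_ij ~ N(0, 1/N_x): random linear projection of the inputs.
    A = rng.normal(0.0, np.sqrt(1.0 / n_x), size=(n_x, n_x))
    # rho: a random relabelling of the classes.
    perm = rng.permutation(num_classes)
    return base_xs @ A.T, perm[base_ys]
One call per task index k, e.g. with rng = np.random.default_rng(k), yields the task distribution described above.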
3.2 META-LEARNING
Given those generated tasks, we then meta-train jointly on a mini-batch sampled from the whole distribution. We minimize J(θ), the sum of losses on the query prediction after observing any prefix of a dataset D sampled from the augmented task distribution p(D)
$J(\theta) = \mathbb{E}_{D \sim p(D)} \Big[ \sum_{j=1}^{N_D} l\big(f_\theta(D_{1:j-1}, x_j),\, y_j\big) \Big]$, (2)
where in the classification setting, l is the cross-entropy loss between the label y_j and the prediction y′ = f_θ(D_{1:j−1}, x_j), and f_θ is a neural network mapping to predictions y′ as in Equation 1. During meta-training, we take gradient steps in J(θ) by backpropagation and Adam (Kingma & Ba, 2014). To investigate the effect of the data distribution, we train on various numbers of tasks (Algorithm 1).
Finally, we need to choose a black-box model for the function f_θ. We use a vanilla Transformer (Vaswani et al., 2017) with learned positional embeddings, visualized in Figure 1. We call it the General-Purpose In-Context Learner (GPICL). Each token corresponds to a concatenated and transformed input x_i and one-hot encoded label y_{i−1}, predicting the corresponding logits y′ = y_i for the current input x′ = x_i. When querying for the first x_1, no label for the previous input is available, so we feed a zero vector.
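A sketch of how the input sequence for GPICL can be assembled from a task (our own variable names; the paper's exact input pipeline is not shown):
import numpy as np

def build_sequence(xs, ys, num_classes):
    # Token j concatenates the transformed input x_j with the one-hot label
    # of the *previous* example y_{j-1}; the model predicts y_j from it.
    # A zero vector stands in for the missing label before the first input.
    one_hot = np.eye(num_classes)[ys]                                     # [T, C]
    prev_labels = np.concatenate([np.zeros((1, num_classes)), one_hot[:-1]], axis=0)
    tokens = np.concatenate([xs, prev_labels], axis=-1)                   # [T, N_x + C]
    return tokens, ys   # targets: predict y_j at position j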
Algorithm 1 Meta-Training for General-Purpose In-Context Learning (GPICL)
Require: base dataset $\bar{D} = \{\bar{x}_i, \bar{y}_i\}$, number of tasks $K \in \mathbb{N}^+$
  $\{A^{(k)}_{ij}\}_{k=1}^{K} \sim \mathcal{N}(0, \tfrac{1}{N_x})$  ▷ Sample input projections
  $\{\rho^{(k)}\}_{k=1}^{K} \sim p(\rho)$  ▷ Sample output permutations
  $D^{(k)} = \{A^{(k)}\bar{x}_i, \rho^{(k)}(\bar{y}_i)\}$
  $p(D) := \text{Uniform}[\{D^{(k)}\}_{k=1}^{K}]$
  while not converged do
    $\theta \leftarrow \theta - \alpha \nabla_\theta J(\theta)$  ▷ Meta-train across tasks $p(D)$ (Equation 2)
Meta-testing At meta-test time, no gradient-based learning is used. Instead, we simply obtain a prediction y′ by evaluating the neural network fθ on the training dataset D and query point x′.
4 EXPERIMENTS ON THE EMERGENCE OF GENERAL LEARNING-TO-LEARN
Multi-task training with standard classifiers Given a task distribution of many different classification tasks, we first ask under what conditions we expect “learning-to-learn” to emerge. We train a single model across many tasks where each task corresponds to a random transformation of the MNIST dataset, but where the MLP only receives a single datapoint instead of a whole sequence as input. This corresponds to ND = 1 in Equation 2. We would expect such a non-sequential classifier to be able to correctly predict for more tasks as its number of parameters increases. When plotting the network capacity against the number of tasks, we indeed observe a linear boundary where an increasing number of tasks can be fit the larger the network (Figure 2a). This is consistent with results in Collins et al. (2016), which found that a constant number of bits about the data distribution can be stored per model parameter, across a variety of model architectures and scales.
Learning-to-learn with large sequential models and data In contrast to the MLP classifier, a sequence model that observes multiple observations and their labels from the same task, could exceed that linear performance improvement by learning at inference time. Indeed, we observe that when switching to a Transformer that can observe a sequence of datapoints before making a prediction about the query, more tasks can be simultaneously fit (Figure 2b). At a certain model size and number of tasks, the model undergoes a phase transition, allowing to generalize to a seemingly unbounded number of tasks. We hypothesize that this is due to switching the prediction strategy from memorization to learning-to-learn. Further, when (meta-)testing the same trained models from the previous experiment on an unseen task (new random transformation of MNIST), they generalize only in the regime of large numbers of tasks and model size (Figure 2c). As an in-context learner, meta-testing does not involve any gradient updates but only running the model in forward mode.
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least 2^13 = 8192 tasks.
In the following, we study learning-to-learn from the perspective of the data distribution, the architecture, and the optimization dynamics. For the data distribution, we look at how the data diversity affects the emergence and phase transitions of learning-to-learn, generalization, and memorization. For architecture, we analyze the role of the model size and state size in various architectures. Finally, we observe challenges in meta-optimization and demonstrate how memorization followed by generalization is an important mechanism that can be facilitated explicitly biasing the data distribution.
4.1 THE LARGE DATA REGIME: GENERALIZATION AND PHASE TRANSITIONS
Simple invariances in data lead to the emergence of learning-to-learn To verify whether the observed generalizing solutions actually implement learning algorithms (as opposed to, e.g., zero-shot
generalization), we analyze the meta-test time behavior. We plot the accuracy for a given query point given varying numbers of seen examples in Figure 3. As it is typical for learning algorithms, the performance improves given an increasingly large set of seen examples (inputs and labels).
Generalization Naturally, the question arises to what extent these learning algorithms are general. While we have seen generalization to unseen tasks consisting of novel projections of the same dataset, do the learned algorithms also generalize to unseen datasets? In Figure 3 we observe out-of-distribution performance on Fashion MNIST after having trained on MNIST (b, blue). In this direction, there is no generalization gap to directly training on Fashion MNIST (b, orange). Similarly, when meta-training on Fashion MNIST
and meta testing on MNIST (a, orange) we observe that the learning algorithm generalizes, albeit with a larger generalization gap.
Comparison to other methods Other datasets and baselines are shown in Table 1. In particular, rather than focusing on SOTA, we aim to validate whether methods with less inductive bias (such as our GPICL), can compete with methods that include more biases suitable to learning-to-learn. This includes stochastic gradient descent (SGD) that updates the parameters online after observing each datapoint. MAML (Finn et al., 2017) proceeds like SGD, but uses a meta-learned neural network initialization. Both methods that rely on backpropagation and gradient descent, learn more slowly compared to our Transformer. In the case of MAML, this may be due to the main mechanism being feature reuse (Raghu et al., 2020) which is less useful when training across our wider task distribution. Among methods that do not hard-code gradient descent at meta-test time, we test VSML (Kirsch & Schmidhuber, 2020) that discovered learning algorithms significantly generalizing between tasks. Our GPICL comes surprisingly close to VSML without requiring the associated inductive bias. Finally, we compare to a standard LSTM that is trained with the same inputs as our Transformer. We observe that it performs worse, which we investigate further.
Insight 2: Simple data augmentations are effective for learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Transitioning from memorization to learning to generalizing When do the found solutions correspond to memorizing, learning, and generalizing solutions? In Figure 4, we plot the accuracy difference between the last and first prediction for a seen task, an unseen task, and an unseen task with a different base dataset. We observe three phases: In the first phase, we memorize each instance, resulting in no within-sequence performance improvement. In the second phase,
we memorize tasks and learn to identify tasks, resulting in a within-sequence improvement confined to seen task instances. In the final and third phase, we observe a more general learning-to-learn, a performance improvement for unseen tasks, even different base datasets (here FashionMNIST). The last transition is very discrete with separate meta-training runs either finding a solution of the task memorization or general learning-to-learn type (see Appendix A.1).
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn.
4.2 ARCHITECTURE: A LARGE STATE IS CRUCIAL FOR LEARNING
In the previous experiments we observed that given sufficient task diversity and model size, Transformers can learn general-purpose learning algorithms. This raises the question how essential the Transformer architecture is and whether other black-box models could be used. We hypothesize that for learning-to-learn the size of the memory at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. Through self-attention, Transformers have a particularly large state. We test this by training several architectures with various state sizes in our meta-learning setting. In Figure 5a, we observe that when we vary the respective hyper-parameters which most influence the state size, we observe that for a specific state size we obtain similar performance of the discovered learning algorithm across architectures. In contrast, these architectures have markedly different numbers of parameters (Figure 5b).
Insight 4: Large state is more crucial than parameter count This suggests that the model size in terms of parameter count plays a smaller role in the setting of learning-to-learn and Transformers have benefited in particular from an increase in state size by self-attention. Beyond learning-to-learn, this likely applies to other tasks that rely on storing large amounts of sequence-specific information.
4.3 CHALLENGES IN META-OPTIMIZATION
Meta-optimization is known to be challenging. Meta gradients (Finn et al., 2017; Xu et al., 2018; Bechtle et al., 2021) and works with parameter sharing or weight updates in their architecture (Kirsch & Schmidhuber, 2020; Pedersen & Risi, 2021; Risi, 2021) observed various difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training (see Appendix Figure 18). We show that some of these problems also occur with black-box models and propose effective interventions.
Loss plateaus when meta-learning with black-box models By training across a large number of randomly transformed tasks, memorizing any task-specific information is difficult. Instead, the model is forced to find solutions that are directly learning. We observe that this results in (meta)loss plateaus during meta-training where the loss only decreases slightly for long periods of time (Figure 6a). Only after a large number of steps (here around 35 thousand) does a drop in loss occur. In the loss plateau, the generalization loss increases on unseen tasks from both the same and a different base dataset (Figure 6b). This suggests that being able to first memorize slightly enables the following learning-to-learn phase. Furthermore, we observe that all gradients have a very small norm with exception of the last layer (Appendix Figure 14).
Intervention 1: Increasing the batch size High variance gradients appear to be one reason training trajectories become trapped on the loss plateau (see Appendix Figures 12, 13). This suggests increasing the meta-batch size as a straightforward solution. When plotting various batch sizes against numbers of tasks we obtain three kinds of solutions at the end of meta-training (Figure 7a): (1) Solutions that generalize and learn, (2) Solutions that memorize, and (3) Solutions that are still in the loss plateau (due to maximum of 50 thousand optimization steps). The larger the batch size, the more tasks we can train on without getting stuck in a loss plateau. When plotting the length of the loss plateau against the task batch size (Figure 7b) we observe a power-law relationship with increasing batch sizes decreasing the plateau length. At the same time, the batch size also increases the number of total tasks seen in the plateau (Appendix Figure 15). Thus, this intervention relies on parallelizability. An increase in the number of tasks also increases the plateau length (Figure 7c). This may be due to a larger number of tasks making the initial memorization phase more difficult.
Intervention 2: Changes in the meta-optimizer Given that many gradients in the loss plateau have very small norm, Adam would rescale those element-wise, potentially alleviating the issue. In practice, we observe that the gradients are so small that the ε in Adam's gradient-rescaling denominator (for numerical stability) limits the up-scaling of small gradients. Using smaller ε results in more than halving the plateau length. Alternatively, discarding the magnitude of the gradient entirely by applying the sign operator to an exponential moving average of the gradient (replacing Adam's approximate magnitude normalization with direct magnitude normalization) has a similar effect while also increasing the numerical stability over Adam with small ε (Appendix Figure 16).
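A hedged sketch of the sign-based update described above (the learning rate and decay value are illustrative, not the paper's):
import numpy as np

def sign_of_momentum_update(param, grad, ema_grad, lr=1e-4, beta=0.9):
    # Keep an exponential moving average of the gradient and apply only its
    # element-wise sign, i.e. direct magnitude normalization instead of
    # Adam's approximate normalization by the second-moment estimate.
    ema_grad = beta * ema_grad + (1.0 - beta) * grad
    param = param - lr * np.sign(ema_grad)
    return param, ema_grad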
Intervention 3: Biasing the data distribution / Curricula GPICL mainly relies on the data distribution for learning-to-learn. This enables a different kind of intervention: Biasing the data distribution. The approach is inspired by the observation that before leaving the loss plateau the model memorizes biases in the data. Instead of sampling label permutations uniformly at random, we bias towards a specific permutation by using a fixed permutation for a fraction of each batch. This completely eliminates the loss plateau, enabling a smooth path from memorizing to learning (Figure 8). Surprisingly, even when heavily biasing the distribution, memorization is followed by generalization. This biased data distribution can be viewed as a curriculum, solving an easier problem first that enables the subsequent harder learning-to-learn. Further investigation is required to understand how this transition occurs. This may be connected to grokking (Power et al., 2022), which we investigate in Appendix A.7. We hypothesize that many natural data distributions—including language—contain such sub-tasks that are easy to memorize followed by generalization.
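A sketch of how such a biased permutation distribution can be sampled per task (our own naming; the paper fixes a fraction of each batch, which this per-task coin flip matches only in expectation):
import numpy as np

def sample_label_permutation(num_classes, rng, bias=0.9, fixed_perm=None):
    # With probability `bias`, reuse one fixed permutation (here the identity);
    # otherwise draw a fresh uniform permutation. bias = 0 recovers the fully
    # permuted setting; larger values act as the curriculum described above.
    if fixed_perm is None:
        fixed_perm = np.arange(num_classes)
    if rng.random() < bias:
        return fixed_perm
    return rng.permutation(num_classes)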
4.4 COMBINING DOMAIN-SPECIFIC AND GENERAL-PURPOSE LEARNING
We demonstrated the feasibility of meta-learning in-context learning algorithms that are general-purpose. An even more useful learning algorithm would be capable of both generalizing and leveraging domain-specific information for learning when it is available. This would allow for considerably more efficient in-context learning, scaling to more difficult datasets without very long input sequences. Toward this goal, we investigate a simple scheme that leverages pre-trained neural networks as features to learn upon. These could come from an unsupervised learner or a frozen large language model (Radford et al., 2021; Tsimpoukelli et al., 2021). Here, we first project the inputs x̄i of a base dataset D̄ into some latent space using a pre-trained network, and then proceed with meta-training and meta-testing as before, randomly projecting these alternative features. For the pre-trained network, we use a ResNet trained on ImageNet and remove its final layer. In Figure 9 we have meta-trained GPICL on MNIST either with the randomly transformed raw inputs or with randomly transformed embedded features. At meta-test time the learning algorithm generalizes to a wide range of datasets, measured by the meta-test accuracy of the 100th example. At the same time, the pre-trained ImageNet features help to accelerate learning on datasets that have a matching domain,
such as CIFAR10. We observe that with only 100 examples, the learning algorithm meta-trained on MNIST, can achieve about 45% accuracy on CIFAR10.
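A sketch of this feature pipeline, with the frozen pre-trained network abstracted away as a given function (all names are ours):
import numpy as np

def embed_then_project(raw_xs, embed_fn, rng):
    # embed_fn stands for a frozen pre-trained network with its final layer
    # removed (e.g. an ImageNet ResNet); how it is loaded is not shown here.
    feats = embed_fn(raw_xs)                                   # [num_examples, n_f]
    n_f = feats.shape[-1]
    A = rng.normal(0.0, np.sqrt(1.0 / n_f), size=(n_f, n_f))   # random projection
    return feats @ A.T   # meta-train / meta-test on these features as before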
5 RELATED WORK
Inductive biases in meta-learning Meta-learning approaches exist with a wide range of inductive biases, usually inspired by existing human-engineered learning algorithms. Some methods prewire the entire learning algorithm (Finn et al., 2017), pre-wire backpropagation and the structure of a gradient-based optimizer (Andrychowicz et al., 2016; Metz et al., 2019; 2020a), or hard-code gradient-based optimization but learn the loss function (Houthooft et al., 2018; Kirsch et al., 2019; Bechtle et al., 2021). Many methods search over hyper-parameters that alter existing learning algorithms (Xu et al., 2018; Metz et al., 2020b; Chen et al., 2022). Fast weight programmers or hypernetworks update the weights of the same or another neural network (Schmidhuber, 1992; 1993a; Ha et al., 2017; Irie et al., 2021; Sandler et al., 2021; Kirsch & Schmidhuber, 2022; Zhmoginov et al., 2022). Our work aims to keep such inductive biases to a minimum.
General-purpose meta-learning There has been growing interest in meta-learning more general-purpose learning algorithms. The improved generality of the discovered learning algorithm has been achieved by introducing inductive bias, such as by bottlenecking the architecture or by hiding information, encouraging learning over memorization. Methods include enforcing learning rules to use gradients (Metz et al., 2019; Kirsch et al., 2019; Oh et al., 2020), symbolic graphs (Real et al., 2020; Co-Reyes et al., 2021), or parameter sharing and symmetries (Kirsch & Schmidhuber, 2020; Kirsch et al., 2021). Parameter sharing and symmetries have also been discussed in the context of self-organization (Tang & Ha, 2021; Risi, 2021; Pedersen & Risi, 2022).
Black-box meta-learning: MetaRNNs, RL2, in-context learning In contrast to these inductive biases, neural networks can also learn-to-learn purely in their activations with little architectural and algorithmic biases (Hochreiter et al., 2001; Wang et al., 2016; Duan et al., 2016; Ortega et al., 2019). This requires a feedback signal in the inputs that allows for learning such as the reward in reinforcement learning or label in supervised learning (Schmidhuber, 1993b). While a frequently used architecture is the LSTM (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), this mechanism has also seen substantial recent attention in Transformer models (Brown et al., 2020; Chan et al., 2022) under the name of in-context learning. We refer to these networks simply as black-box meta learners. Our method GPICL is in the class of these black-box meta learners. In contrast to previous methods, GPICL implements general-purpose learning algorithms. Independently, Garg et al. (2022) recently studied generalization on synthetic functions compared to our augmented datasets. PFNs (Mikulik et al., 2020) demonstrated learning to learn on small tabular datasets when metatraining on synthetically generated problems. Experiments on more complex classification settings such as Omniglot relied on fine-tuning. In comparison, our method investigated generalization of learning algorithms directly to datasets such as MNIST, Fashion MNIST, and CIFAR10.
6 DISCUSSION AND CONCLUSION
By generating tasks from existing datasets, we demonstrated that black-box models such as Transformers can be used to meta-learn general-purpose in-context learning algorithms (GPICL). We observed that learning-to-learn arises in the regime of large models and large numbers of tasks with several phase transitions from instance memorization, to system identification, to general learning. The size of the memory or model state significantly determines how well any architecture can learn how to learn across various neural network architectures. We identified difficulties in metaoptimization and proposed interventions in terms of optimizers, hyper-parameters, and a biased data distribution acting as a curriculum. We believe our findings open up new possibilities of data-driven general-purpose meta-learning with minimal inductive bias.
A current limitation is the applicability of the discovered learning algorithms to arbitrary input and output sizes. Appropriate tokenization to unified representations may solve this (Chowdhery et al., 2022). Furthermore, learning algorithms often process millions of inputs before outputting the final model. In the current black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas Transformer’s computational complexity grows quadratically in the sequence length.
A APPENDIX
A.1 SOLUTIONS ARE MEMORIZING OR GENERALIZING
When do the found solutions correspond to memorizing vs generalizing solutions? In Figure 2 we observe a fairly discrete transition between memorizing and generalizing solutions as a function of the number of tasks. To analyze this transition, we perform multiple training runs with varying seeds and numbers of tasks in Figure 10, reporting the final training loss. We find that the distribution is bi-modal. Solutions at the end of training are memorizing or generalizing. Memorization cluster: The larger the number of tasks, the more difficult it is to memorize all of them with a fixed model capacity. Generalization cluster: At a certain number of tasks (here 6 thousand), a transition point is reached where optimization sometimes discovers a lower training loss that corresponds to a generalizing solution. For larger numbers of tasks the solutions always settle in the generalizing cluster.
A.2 WHAT CORRESPONDS TO STATE (MEMORY) IN VARIOUS ARCHITECTURES?
We hypothesize that for learning-to-learn the size of the memory NS at meta-test time (or state more generally) is particularly important in order to be able to store learning progress. We test this by training several architectures with various NS in our meta-learning setting. Memory in the context of recurrent neural networks corresponds to the hidden state or context vector of size NH , thus NS ∈ O(NH). More generally, we can describe the state as the information bottleneck that the sequence has to pass through before making predictions. In the context of learning-to-learn, this state has to hold information about everything that has been learned so far. Standard learning algorithms such as neural networks trained via SGD would have a state that corresponds to the neural network parameters, iteratively updated via SGD. In transformers, self-attention allows for a particularly large state of NS ∈ O(NKNLNT ) where NK is the size of key, value, and query, NL is the number of layers, and NT is the length of the sequence.
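As a rough, hedged back-of-the-envelope comparison using these proportionalities (constant factors, e.g. storing both keys and values, are ignored):
def lstm_state_entries(n_hidden):
    return n_hidden                      # O(N_H): the hidden/context vector

def transformer_state_entries(n_key, n_layers, n_tokens):
    return n_key * n_layers * n_tokens   # O(N_K * N_L * N_T): attended past tokens

# Illustrative numbers only: a key width of 256 (32 per head x 8 heads),
# 4 layers, and a sequence length of 100 give 256 * 4 * 100 = 102400 entries,
# far more than an LSTM hidden state of a few hundred units.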
A.3 SUMMARY OF INSIGHTS
Insight 1: It is possible to learn-to-learn with black-box models Effective learning algorithms can be realized using black-box models with few inductive biases, given sufficient meta-training task diversity and large enough model sizes. To transition to the learning-to-learn regime, we needed at least 2^13 = 8192 tasks.
Insight 2: Simple data augmentations are effective for general learning-to-learn The generality of the discovered learning algorithm can be controlled via the data distribution. Even when large task distributions are not (yet) naturally available, simple augmentations that promote permutation and scale invariance are effective.
Insight 3: The meta-learned behavior has phase transitions When increasing the number of tasks, the meta-learned behavior transitions from instance memorization, to task identification, to general learning-to-learn. The last transition is discrete, with two unique clusters.
Insight 4: Large state is more crucial than parameter count We conclude that the specific inductive biases of each architecture matter to a smaller degree. The driving factor behind their ability to learn how to learn is the size of their state. Furthermore, this suggests that the model size in terms of numbers of parameters plays a smaller role in the setting of learning-to-learn and Transformers have benefited in particular from an increase in state size by self-attention. In nonmeta-learning sequence tasks parameter count is thought to be the performance bottleneck (Collins et al., 2016). Beyond learning-to-learn, this likely applies to other tasks that rely on processing and storing large amounts of sequence-specific information.
A.4 LIMITATIONS
Varying input and output sizes Compared to some previous works in meta-learning (Andrychowicz et al., 2016; Finn et al., 2017; Kirsch & Schmidhuber, 2020), the discovered learning algorithms are not applicable to an arbitrary input and output size which makes it more difficult to apply the learning algorithm to a new, unseen problem. This problem also applies to Transformers applied to multiple tasks and modalities. Related work has solved this problem by tokenizing inputs to compatible, unified representations (Chowdhery et al., 2022). We expect these techniques or others to be useful in the learning-to-learn context too.
Processing large datasets Learning algorithms often process millions of inputs before outputting the final model. In the black-box setting, this is still difficult to achieve. Recurrency-based models usually suffer from accumulating errors, whereas Transformers computational complexity grows quadratically in the sequence length. Additional work is required to build models capable of processing and being trained on long sequences. Alternatively, parallel processing, similar to batching in learning algorithms, may be a useful building block.
A.5 ARCHITECTURAL DETAILS AND HYPER-PARAMETERS
Transformer details By default, all Transformers have a key, value, and query size of 32, 8 heads, and 4 layers, and model size of NM = 256. The model size defines the dimensionality of each token, and the MLP between layers scales this size up to a hidden representation of 4×NM where NM corresponds to the model size.
Outer-product LSTM We slightly modify an LSTM by replacing the context state with an outer-product update and inner-product read-out.
# Excerpt from the cell update (jnp = jax.numpy, hk = haiku; split_axis and
# checkpoint_name are helper functions defined elsewhere in the codebase).
x_and_h = jnp.concatenate([inputs, prev_state.hidden], axis=-1)
gated = hk.Linear(8 * size * self.num_heads)(x_and_h)
gated = gated.reshape((batch_size, self.num_heads, 8 * size))
gated = checkpoint_name(gated, 'gated')
# i = input, g = cell gate, f = forget gate,
# q = query, o = output gate
sizes = (3 * size, 3 * size, size, size)
indices = np.cumsum(sizes[:-1])
k1, k2, q, o = jnp.split(gated, indices, axis=-1)
scale = jax.nn.softplus(
    hk.get_parameter('key_scale', shape=(), dtype=k1.dtype, init=jnp.zeros))
i, g, f = jnp.einsum('bhki,bhkj->kbhij',
                     jax.nn.tanh(split_axis(k1, (3, size))) * scale,
                     jax.nn.tanh(split_axis(k2, (3, size))))
f = jax.nn.sigmoid(f + 1)  # Forget bias
c = f * prev_state.cell + jax.nn.sigmoid(i) * g
read = jnp.einsum('bhij,bhi->bhj', c, q)
h = hk.Flatten()(jax.nn.sigmoid(o) * jnp.tanh(read))
VSML We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. Each LSTM has a hidden size of 16. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 8. All images are scaled to a size of 32 × 32 × 3.
VSML without symmetries Before activations are fed to a standard instantiation of VSML, all inputs are projected using a learnable linear projection. Logits are generated using another linear projection, followed by a softmax. We use a version of VSML with a single layer and self-messages (Kirsch et al., 2021) of size 8. The LSTMs are on a grid of k × k LSTMs, where k ∈ {1, 2, 4, 8, 16, 24}. Each LSTM has a hidden size of 64. For each LSTM update we use two micro-ticks. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 14 × 14.
LSTM For the results in Table 1, we used a hidden size of 256 and 10^5 optimization steps. Larger hidden sizes were harder to optimize. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3.
A.6 EXPERIMENTAL DETAILS
Most experiments can be run on a single GPU, but some require 16 GPUs due to the sequence length and large batch sizes; in all cases sufficient GPU memory (around 16 GB per GPU) is needed. Some experiments, such as Figure 2, require up to 1000 runs of this kind to produce the final heat-map.
Input normalization Each dataset is z-normalized by its mean and standard deviation across all examples and pixels.
Number of seeds and shading If not noted otherwise, line plots use 8 seeds for meta-training and at least 512 seeds for meta-testing. Shading indicates 95% confidence intervals.
Figure 2 The MLP has two hidden layers of varying size with relu activations. The Transformer has the default parameters as defined above.
Figure 3 We use a transformer model with a model size of 256. We train on 2^25 tasks with a 90% biased permutation distribution. The task batch size is 128. All images are scaled to a size of 32 × 32 × 3. Inputs are z-normalized across the dataset and all input dimensions.
Table 1 The SGD baseline was obtained by sweeping over learning rates from 10^-4 to 0.5, optimizers SGD, Adam and Adam with weight decay, one or two layers, and hidden sizes of 32, 64, or 128 on MNIST. The best configuration (most sample efficient) corresponds to a learning rate of 10^-3, Adam, and no hidden layers. SGD performs updates online on each one out of the 100 data points. MAML is equivalent to SGD up to the difference that we meta-train the weight initialization according to Equation 2, where θ are the initial parameters of the classifier that is then updated using SGD at meta-test time. All black-box approaches do not use gradient descent at meta-test time. All meta-learning approaches were meta-trained and tuned via grid search on MNIST.
Figure 10 We trained a Transformer with model size 64 and 32 seeds for each number-of-tasks configuration.
Figure 4 Input normalization is disabled.
Figure 5 The Transformer uses a task batch size of 512.
Figure 6 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 7 Trained on 2^16 tasks generated from FashionMNIST with labels fully permuted.
Figure 8 Trained on 2^16 tasks generated from FashionMNIST with label permutations varied.
A.7 ADDITIONAL EXPERIMENTS
Sequence length In all experiments of the main paper we have meta-trained on a sequence length (number of examples) of 100. This is a small training dataset compared to many human-engineered learning algorithms. In general, as long as the learning algorithm does not overfit the training data, more examples should increase the predictive performance. In Figure 11 we investigate how our model scales to longer sequence lengths. We observe that the final accuracy of the last query in the sequence consistently increases with longer sequences. The generalization to longer sequences than those seen during meta-training is another important direction for future work.
Gradient and update statistics To better understand the properties of the loss plateau, we visualize different statistics of the gradients, optimizer, and updates. In Figure 12, we track the exponential moving average statistics of Adam before the loss plateau and after (dashed vertical line). In Figure 13 we investigate how gradients differ between settings with a plateau and settings with a biased distribution where the plateau is avoided. We plot the cosine similarity between consecutive optimization steps, the gradient L2-norm, and the similarity and norm of the weight updates after normalization with Adam. The statistics are plotted cumulatively or smoothed with a Gaussian filter for better readability. The gradient and update cosine similarity differ only marginally between cases with a plateau and cases without. We observe that the gradient L2-norm in the plateau is half as big
as in the biased distribution case, although the updates that Adam applies are going towards zero. This also results in not moving far from parameter initialization when in the plateau. We hypothesize this has to do with varying gradient norms when looking at individual parameter tensors (Figure 14). We observe that the gradients have a small norm for most tensors, except for the last layer.
Batch size and number of tasks influence on plateau length Instead of looking at the plateau length in terms of the number of steps (Figure 7), we may also be concerned with the total number of tasks seen within the plateau. This is relevant in particular when the task batch is not processed fully in parallel but gradients are accumulated. Figure 15 shows the same figure but with the number of tasks in the plateau on the y-axis instead. It can be observed that larger batch-sizes actually increase the data requirement to leave the plateau, despite decreasing the plateau in terms of the number of optimization steps. Similarly, a larger task training distribution requires a larger number of tasks to be seen within the plateau.
Adjusting Adam's ε or changing the optimizer As discussed in the main paper and visualized in Figure 16b, decreasing ε significantly shortens the plateau. This is due to the rescaling of very small gradient magnitudes being limited by ε. At the same time it incurs some instability. Directly normalizing the gradient by applying the sign function element-wise (Figure 16a) to the exponential gradient average shortens the plateau even further.
When memorization happens, can we elicit grokking? In Figure 7a we have seen that an insufficiently large task distribution can lead to memorization instead of general learning-to-learn. At the same time, Figure 8 showed that biasing the data distribution is helpful to avoid loss plateaus. Power et al. (2022) observed a phenomenon which they called “grokking” in which even after having converged in terms of training loss, test loss may suddenly decrease. Large amounts of regularization, like weight decay with a coefficient of 1.0 were found to facilitate this behavior. Is grokking
connected to the optimization behavior we observe, and if so, do similar interventions help in our setting? We look in particular at the boundary of memorization and generalization (2^14 = 16384 tasks), where doubling the number of tasks a few more times would lead to generalization. Figure 17 shows three task settings, 2^10, 2^14, and 2^16, and three different weight decay coefficients, 0.01, 0.1, 1.0. The setting of 2^16 tasks shows generalization by default and only serves as a baseline for the weight decay coefficient analysis. In the cases of memorization due to too few tasks, we have not been able to produce grokking behavior.
Optimization difficulties in VSML Previous work has observed several optimization difficulties: Slower convergence, local minima, unstable training, or loss plateaus at the beginning of training. Figure 18 shows some of these difficulties in the context of VSML (Kirsch & Schmidhuber, 2020). Because VSML has permutation invariance built into the architecture as an inductive bias, changing the number of tasks has only a small effect. We observe that in particular deeper architectures make meta-optimization more difficult. | 1. What is the focus and contribution of the paper regarding transformers and meta-learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its potential usefulness and lack of motivation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What questions do you have about the experimental setup and results, such as the nature of the transformations used and their impact on the findings?
5. How does the reviewer view the relationship between this work and other relevant research, such as TabPFN? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose to use transformers to meta-learn a general-purpose learning algorithm. They train a transformer on many related tasks such that it is able to make a prediction for x given an entire dataset and x as an input to the transformer. They analyze under which circumstances the transformer memorizes the data seen during meta-training and when it generalizes, and note that many tasks and large models are a requirement for generalization.
Strengths And Weaknesses
Strengths
Method is well-described
Interesting problem
Weaknesses
No motivation provided (explain why no bias is useful and provide an example where this actually led to something more useful)
Experimental setup at times unclear (how do you create the different tasks exactly?)
No improvements over state-of-the-art (explain how the proposed method could be interesting nevertheless)
The authors tackle the ambitious problem of creating a black-box meta-learning algorithm. However, they provide neither theoretical, empirical or anecdotal evidence that this might be of any use. Therefore, it appears that this is a challenging problem but might be of no relevance.
I have problems interpreting the results since it is nowhere described what a random transformation is. Are these rotations, shifts, crops, everything together? Maybe the effects observed are related to the fact that given enough tasks, we just have seen every possible transformation? So maybe we keep seeing memorization where test tasks are simply very close to another training task. There is no discussion on how task similarity between train and test tasks changes with a growing number of tasks and whether that might have an impact on the results.
How does the work relate to TabPFN: https://arxiv.org/pdf/2207.01848.pdf
Clarity, Quality, Novelty And Reproducibility
The methodology is well-described but the work lacks a clear motivation. The problem setting is novel but not justified. Reproducibility is not guaranteed since key aspects of the experiments are not described, e.g., how the different tasks for each dataset are generated. |
ICLR | Title
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Abstract
Circuits of biological neurons, such as in the functional parts of the brain can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.
1 INTRODUCTION
Recurrent neural networks (RNNs) have achieved tremendous success in a variety of tasks involving sequential (time series) inputs and outputs, ranging from speech recognition to computer vision and natural language processing, among others. However, it is well known that training RNNs to process inputs over long time scales (input sequences) is notoriously hard on account of the so-called exploding and vanishing gradient problem (EVGP) (Pascanu et al., 2013), which stems from the fact that the well-established BPTT algorithm for training RNNs requires computing products of gradients (Jacobians) of the underlying hidden states over very long time scales. Consequently, the overall gradient can grow (to infinity) or decay (to zero) exponentially fast with respect to the number of recurrent interactions.
A variety of approaches have been suggested to mitigate the exploding and vanishing gradient problem. These include adding gating mechanisms to the RNN in order to control the flow of information in the network, leading to architectures such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014), which can overcome the vanishing gradient problem on account of the underlying additive structure. However, the gradients might still explode and learning very long-term dependencies remains a challenge (Li et al., 2018). Another popular approach for handling the EVGP is to constrain the structure of the underlying recurrent weight matrices by requiring them to be orthogonal (unitary), leading to the so-called orthogonal RNNs (Henaff et al., 2016; Arjovsky et al., 2016; Wisdom et al., 2016; Kerg et al., 2019) and references therein. By construction, the resulting Jacobians have eigen- and singular-spectra with unit norm, alleviating the EVGP. However, as pointed out by Kerg et al. (2019), imposing such constraints on the recurrent matrices may lead to a significant loss of expressivity of the RNN, resulting in inadequate performance on realistic tasks.
In this article, we adopt a different approach, based on the observation that coupled networks of controlled non-linear forced and damped oscillators, which arise in many physical, engineering and biological
systems, such as networks of biological neurons, do seem to ensure expressive representations while constraining the dynamics of state variables and their gradients. This motivates us to propose a novel architecture for RNNs, based on time-discretizations of second-order systems of non-linear ordinary differential equations (ODEs) (1) that model coupled oscillators. Under verifiable hypotheses, we are able to rigorously prove precise bounds on the hidden states of these RNNs and their gradients, enabling a possible solution of the exploding and vanishing gradient problem, while demonstrating through benchmark numerical experiments, that the resulting system still retains sufficient expressivity, i.e. ability to process complex inputs, with a competitive performance, with respect to the state of the art, on a variety of sequential learning tasks.
2 THE PROPOSED RNN
Our proposed RNN is based on the following second-order system of ODEs,
$$y'' = \sigma\left(\mathbf{W}y + \mathcal{W}y' + \mathbf{V}u + \mathbf{b}\right) - \gamma y - \epsilon y'. \qquad (1)$$
Here, $t \in [0,1]$ is the (continuous) time variable, $u = u(t) \in \mathbb{R}^d$ is the time-dependent input signal, $y = y(t) \in \mathbb{R}^m$ is the hidden state of the RNN, $\mathbf{W}, \mathcal{W} \in \mathbb{R}^{m \times m}$ and $\mathbf{V} \in \mathbb{R}^{m \times d}$ are weight matrices, $\mathbf{b} \in \mathbb{R}^m$ is the bias vector, and $\gamma, \epsilon > 0$ are parameters representing the oscillation frequency and the amount of damping (friction) in the system, respectively. $\sigma: \mathbb{R} \mapsto \mathbb{R}$ is the activation function, set to $\sigma(u) = \tanh(u)$ here. By introducing the so-called velocity variable $z = y'(t) \in \mathbb{R}^m$, we rewrite (1) as the first-order system:
$$y' = z, \qquad z' = \sigma\left(\mathbf{W}y + \mathcal{W}z + \mathbf{V}u + \mathbf{b}\right) - \gamma y - \epsilon z. \qquad (2)$$
We fix a timestep $0 < \Delta t < 1$ and define our proposed RNN hidden states at time $t_n = n\Delta t \in [0,1]$ (while omitting the affine output state) as the following IMEX (implicit-explicit) discretization of the first-order system (2):
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = z_{n-1} + \Delta t\, \sigma\left(\mathbf{W}y_{n-1} + \mathcal{W}z_{n-1} + \mathbf{V}u_n + \mathbf{b}\right) - \Delta t\, \gamma y_{n-1} - \Delta t\, \epsilon z_{\bar{n}}, \qquad (3)$$
with either $\bar{n} = n$ or $\bar{n} = n-1$. Note that the only difference between the two versions of the RNN (3) lies in the implicit ($\bar{n} = n$) or explicit ($\bar{n} = n-1$) treatment of the damping term $-\epsilon z$ in (2), whereas both versions retain the implicit treatment of the first equation in (2).
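For concreteness, the update (3) with explicit damping (n̄ = n − 1) can be written out in a few lines. The following is a minimal sketch under our own naming and illustrative default hyperparameters (it is not the authors' released implementation), with the affine output layer omitted.

```python
import torch
import torch.nn as nn

class CoRNNCell(nn.Module):
    """One step of the discretization (3) with explicit damping (n_bar = n - 1)."""
    def __init__(self, input_dim, hidden_dim, dt=0.01, gamma=1.0, eps=1.0):
        super().__init__()
        self.dt, self.gamma, self.eps = dt, gamma, eps
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)      # W, acts on y
        self.W_hat = nn.Linear(hidden_dim, hidden_dim, bias=False)  # script-W, acts on z
        self.V = nn.Linear(input_dim, hidden_dim)                   # V and bias b

    def forward(self, u_n, y, z):
        # z_n = z_{n-1} + dt*[sigma(W y_{n-1} + W_hat z_{n-1} + V u_n + b) - gamma*y_{n-1} - eps*z_{n-1}]
        z = z + self.dt * (torch.tanh(self.W(y) + self.W_hat(z) + self.V(u_n))
                           - self.gamma * y - self.eps * z)
        # y_n = y_{n-1} + dt * z_n  (the velocity z_n enters implicitly here)
        y = y + self.dt * z
        return y, z

def run_cornn(cell, u):
    """Unroll the cell over an input sequence u of shape (T, batch, input_dim)."""
    y = u.new_zeros(u.shape[1], cell.W.in_features)
    z = torch.zeros_like(y)
    for u_n in u:
        y, z = cell(u_n, y, z)
    return y  # final hidden state, typically fed to an affine output layer
```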
Motivation and background. To see that the underlying ODE (2) models a coupled network of controlled, forced and damped nonlinear oscillators, we start with the single-neuron (scalar) case by setting d = m = 1 in (1) and assuming an identity activation function σ(x) = x. Setting W = 𝒲 = V = b = ε = 0 leads to the simple ODE y′′ + γy = 0, which exactly models simple harmonic motion with frequency γ, for instance that of a mass attached to a spring (Guckenheimer & Holmes, 1990). Letting ε > 0 in (1) adds damping or friction to the system (Guckenheimer & Holmes, 1990). Then, by introducing a non-zero V in (1), we drive the system with a driving force proportional to the input signal u(t). The parameters V, b modulate the effect of the driving force, W controls the frequency of oscillations, and 𝒲 the amount of damping in the system. Finally, the tanh activation mediates a non-linear response in the oscillator. In the coupled network (2) with m > 1, each neuron updates its hidden state based on the input signal as well as on information from other neurons. The diagonal entries of W (and the scalar hyperparameter γ) control the frequency, whereas the diagonal entries of 𝒲 (and the hyperparameter ε) determine the amount of damping for each neuron; the off-diagonal entries of these matrices modulate interactions between neurons. Hence, given this behavior of the underlying ODE (2), we term the RNN (3) a coupled oscillatory Recurrent Neural Network (coRNN).
The dynamics of the ODE (2) (and the RNN (3)) for a single neuron are relatively straightforward. As we illustrate in Fig. 6 of supplementary material SM§C, input signals drive the generation of (superpositions of) oscillatory waveforms, whose amplitude and (multiple) frequencies are controlled by the tunable parameters W, 𝒲, V, b. Adding a tanh activation does not change these dynamics much. This is in contrast to truncating tanh to leading non-linear order by setting σ(x) = x − x³/3, which yields a Duffing-type oscillator that is characterized by chaotic behavior (Guckenheimer & Holmes, 1990). Adding interactions between neurons further accentuates this generation of superposed waveforms (see Fig. 6 in SM§C) and, even with very simple network topologies, one
sees the emergence of non-trivial non-oscillatory hidden states from oscillatory inputs. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics. Hence, we argue that the ability of a network of (forced, driven) oscillators to access a very rich set of output states may lead to high expressivity of the system, allowing it to approximate outputs from complicated sequential inputs.
Oscillator networks are ubiquitous in nature and in engineering systems (Guckenheimer & Holmes, 1990; Strogatz, 2015) with canonical examples being pendulums (classical mechanics), business cycles (economics), heartbeat (biology) for single oscillators and electrical circuits for networks of oscillators. Our motivating examples arise in neurobiology, where individual biological neurons can be viewed as oscillators with periodic spiking and firing of the action potential. Moreover, functional circuits of the brain, such as cortical columns and prefrontal-striatal-hippocampal circuits, are being increasingly interpreted by networks of oscillatory neurons, see Stiefel & Ermentrout (2016) for an overview. Following well-established paths in machine learning, such as for convolutional neural networks (LeCun et al., 2015), our focus here is to abstract the essence of functional brain circuits being networks of oscillators and design an RNN based on much simpler mechanistic systems, such as those modeled by (2), while ignoring the complicated biological details of neural function.
Related work. There is an increasing trend of basing RNN architectures on ODEs and dynamical systems. These approaches can roughly be classified into two branches, namely RNNs based on discretized ODEs and continuous-time RNNs. Examples of continuous-time approaches include neural ODEs (Chen et al., 2018) with ODE-RNNs (Rubanova et al., 2019) as their recurrent extension, as well as E (2017) and references therein, to name just a few. We focus, however, in this article on an ODE-inspired discrete-time RNN, as the proposed coRNN is derived from a discretization of the ODE (1). A good example of a discrete-time ODE-based RNN is the so-called anti-symmetric RNN of Chang et al. (2019), where the RNN architecture is based on a stable ODE resulting from a skew-symmetric hidden weight matrix, thus constraining the stable (gradient) dynamics of the network. This approach has much in common with the previously mentioned unitary/orthogonal/non-normal RNNs in constraining the structure of the hidden-to-hidden layer weight matrices. However, adding such strong constraints might reduce the expressivity of the resulting RNN and might lead to inadequate performance on complex tasks. In contrast to these approaches, our proposed coRNN does not explicitly constrain the weight matrices but relies on the dynamics of the underlying ODE (and the IMEX discretization (3)) to provide gradient stability. Moreover, no gating mechanisms as in LSTMs/GRUs are used in the current version of coRNN. There is also an increasing interest in designing hybrid methods, which use a discretization of an ODE (in particular a Hamiltonian system) in order to learn the continuous representation of the data, see for instance Greydanus et al. (2019); Chen et al. (2020). Overall, our approach here differs from these papers in our use of networks of oscillators to build the RNN.
3 RIGOROUS ANALYSIS OF THE PROPOSED RNN
An attractive feature of the underlying ODE system (2) lies in the fact that the resulting hidden states (and their gradients) are bounded (see SM§D for precise statements and proofs). Hence, one can expect that a suitable discretization of the ODE (2) that preserves these bounds will not have exploding gradients. We claim that one such structure preserving discretization is given by the IMEX discretization that results in the RNN (3) and proceed to derive bounds on this RNN below.
Following standard practice we set y(0) = z(0) = 0 and, purely for the simplicity of exposition, we set the control parameters ε = γ = 1 and n̄ = n in (3), leading to,
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = \frac{z_{n-1}}{1+\Delta t} + \frac{\Delta t}{1+\Delta t}\sigma(A_{n-1}) - \frac{\Delta t}{1+\Delta t}y_{n-1}, \qquad A_{n-1} := \mathbf{W}y_{n-1} + \mathcal{W}z_{n-1} + \mathbf{V}u_n + \mathbf{b}. \qquad (4)$$
Analogous results and proofs for the case where n̄ = n − 1 and for general values of ε, γ are provided in SM§F.
Bounds on the hidden states. As with the underlying ODE (2), the hidden states of the RNN (3) are bounded, i.e.
Proposition 3.1 Let yn, zn be the hidden states of the RNN (4) for 1 ≤ n ≤ N , then the hidden states satisfy the following (energy) bounds:
$$y_n^\top y_n + z_n^\top z_n \le n m \Delta t = m t_n \le m. \qquad (5)$$
The proof of the energy bound (5) is provided in SM§E.1 and a straightforward variant of the proof (see SM§E.2) yields an estimate on the sensitivity of the hidden states to changing inputs. As with the underlying ODE (see SM§D) , this bound rules out chaotic behavior of hidden states.
Bounds on hidden state gradients. We train the RNN (3) to minimize the loss function,
$$E := \frac{1}{N}\sum_{n=1}^{N} E_n, \qquad E_n = \frac{1}{2}\|y_n - \bar{y}_n\|_2^2, \qquad (6)$$
with $\bar{y}$ being the underlying ground truth (training data). During training, we compute gradients of the loss function (6) with respect to the weights and biases $\Theta = [\mathbf{W}, \mathcal{W}, \mathbf{V}, \mathbf{b}]$, i.e.
$$\frac{\partial E}{\partial \theta} = \frac{1}{N}\sum_{n=1}^{N}\frac{\partial E_n}{\partial \theta}, \qquad \forall\, \theta \in \Theta. \qquad (7)$$
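In practice, these gradients are obtained by backpropagation through time (automatic differentiation) rather than by hand. The following is a minimal, self-contained sketch of computing the loss (6) and its gradients (7); a generic recurrent cell stands in for the coRNN update, the affine output is omitted as in (6), and the batch averaging is our own addition.

```python
import torch

cell = torch.nn.RNNCell(input_size=2, hidden_size=8)  # stand-in recurrence producing y_n

u = torch.randn(100, 4, 2)       # inputs u_n, shape (N, batch, d)
y_bar = torch.randn(100, 4, 8)   # ground truth, shape (N, batch, m)

y = torch.zeros(4, 8)
loss = torch.zeros(())
for n in range(u.shape[0]):
    y = cell(u[n], y)
    loss = loss + 0.5 * ((y - y_bar[n]) ** 2).sum(dim=-1).mean()  # E_n of Eq. (6), batch-averaged
loss = loss / u.shape[0]                                          # E = (1/N) * sum_n E_n
loss.backward()                                                   # BPTT: dE/dtheta as in Eq. (7)
```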
Proposition 3.2 Let $y_n, z_n$ be the hidden states generated by the RNN (4). We assume that the time step $\Delta t \ll 1$ can be chosen such that,
$$\max\left\{\frac{\Delta t\,(1 + \|\mathbf{W}\|_\infty)}{1+\Delta t},\ \frac{\Delta t\,\|\mathcal{W}\|_\infty}{1+\Delta t}\right\} = \eta \le \Delta t^r, \qquad \frac{1}{2} \le r \le 1. \qquad (8)$$
Denoting $\delta = \frac{1}{1+\Delta t}$, the gradient of the loss function $E$ (6) with respect to any parameter $\theta \in \Theta$ is bounded as,
$$\left|\frac{\partial E}{\partial \theta}\right| \le \frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right), \qquad (9)$$
with $\bar{Y} = \max_{1\le n\le N} \|\bar{y}_n\|_\infty$ a bound on the underlying training data.
Sketch of the proof. Denoting $X_n = [y_n, z_n]$, we can apply the chain rule repeatedly (for instance as in Pascanu et al. (2013)) to obtain,
$$\frac{\partial E_n}{\partial \theta} = \sum_{1\le k\le n} \underbrace{\frac{\partial E_n}{\partial X_n}\,\frac{\partial X_n}{\partial X_k}\,\frac{\partial^+ X_k}{\partial \theta}}_{\frac{\partial E_n^{(k)}}{\partial \theta}}. \qquad (10)$$
Here, the notation $\frac{\partial^+ X_k}{\partial \theta}$ refers to taking the partial derivative of $X_k$ with respect to the parameter $\theta$, while keeping the other arguments constant. This quantity can be readily calculated from the structure of the RNN (4) and is presented in the detailed proof provided in SM§E.3. From (6), we can directly compute that $\frac{\partial E_n}{\partial X_n} = [y_n - \bar{y}_n,\ 0]$. Repeated application of the chain rule and a direct calculation with (4) yields,
$$\frac{\partial X_n}{\partial X_k} = \prod_{k < i \le n} \frac{\partial X_i}{\partial X_{i-1}}, \qquad \frac{\partial X_i}{\partial X_{i-1}} = \begin{bmatrix} \mathbf{I} + \Delta t\, \mathbf{B}_{i-1} & \Delta t\, \mathbf{C}_{i-1} \\ \mathbf{B}_{i-1} & \mathbf{C}_{i-1} \end{bmatrix}, \qquad (11)$$
where $\mathbf{I}$ is the identity matrix and
$$\mathbf{B}_{i-1} = \delta \Delta t\left(\mathrm{diag}(\sigma'(A_{i-1}))\mathbf{W} - \mathbf{I}\right), \qquad \mathbf{C}_{i-1} = \delta\left(\mathbf{I} + \Delta t\,\mathrm{diag}(\sigma'(A_{i-1}))\mathcal{W}\right). \qquad (12)$$
It is straightforward to calculate, using the assumption (8), that $\|\mathbf{B}_{i-1}\|_\infty < \eta$ and $\|\mathbf{C}_{i-1}\|_\infty \le \eta + \delta$. Using the definitions of matrix norms and (8), we obtain:
$$\left\|\frac{\partial X_i}{\partial X_{i-1}}\right\|_\infty \le \max\left(1 + \Delta t(\|\mathbf{B}_{i-1}\|_\infty + \|\mathbf{C}_{i-1}\|_\infty),\ \|\mathbf{B}_{i-1}\|_\infty + \|\mathbf{C}_{i-1}\|_\infty\right) \le \max\left(1 + \Delta t(\delta + 2\eta),\ \delta + 2\eta\right) \le 1 + 3\Delta t^r. \qquad (13)$$
Therefore, using (11), we have
$$\left\|\frac{\partial X_n}{\partial X_k}\right\|_\infty \le \prod_{k < i \le n}\left\|\frac{\partial X_i}{\partial X_{i-1}}\right\|_\infty \le (1 + 3\Delta t^r)^{n-k} \approx 1 + 3(n-k)\Delta t^r. \qquad (14)$$
Note that we have used an expansion around 1 and neglected terms of $O(\Delta t^{2r})$ as $\Delta t \ll 1$. We remark that the bound (13) is the crux of our argument about gradient control, as we see from the structure of the RNN that the recurrent matrices have close to unit norm. The detailed proof is presented in SM§E.3. As the entire gradient of the loss function (6), with respect to the weights and biases of the network, is bounded above in (9), the exploding gradient problem is mitigated for this RNN.
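The single-step Jacobian (11)-(12) can also be assembled numerically and its infinity norm inspected directly. The following sketch (our own helper, with purely illustrative weights and step size) shows one way to check the bound (13) for given W, 𝒲 and Δt.

```python
import numpy as np

def step_jacobian(W, W_hat, V, b, y, z, u, dt):
    """Jacobian dX_i/dX_{i-1} of the update (4), with X = (y, z), per Eqs. (11)-(12)."""
    m = W.shape[0]
    delta = 1.0 / (1.0 + dt)
    a = W @ y + W_hat @ z + V @ u + b
    D = np.diag(1.0 - np.tanh(a) ** 2)            # diag(sigma'(A_{i-1}))
    B = delta * dt * (D @ W - np.eye(m))
    C = delta * (np.eye(m) + dt * D @ W_hat)
    return np.block([[np.eye(m) + dt * B, dt * C],
                     [B, C]])

rng = np.random.default_rng(0)
m, d, dt = 8, 2, 0.01
J = step_jacobian(rng.normal(scale=0.1, size=(m, m)), rng.normal(scale=0.1, size=(m, m)),
                  rng.normal(scale=0.1, size=(m, d)), np.zeros(m),
                  np.zeros(m), np.zeros(m), np.ones(d), dt)
print(np.abs(J).sum(axis=1).max())  # infinity norm of dX_i/dX_{i-1}; stays close to 1 when (8) holds
```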
On the vanishing gradient problem. The vanishing gradient problem (Pascanu et al., 2013) arises if $\big|\frac{\partial E_n^{(k)}}{\partial \theta}\big|$, defined in (10), $\to 0$ exponentially fast in $k$, for $k \ll n$ (long-term dependencies). In that case, the RNN does not have long-term memory, as the contribution of the $k$-th hidden state to the error at time step $t_n$ is infinitesimally small. We already see from (14) that $\big\|\frac{\partial X_n}{\partial X_k}\big\|_\infty \approx 1$ (independently of $k$). Thus, we should not expect the products in (10) to decay fast. In fact, we will provide a much more precise characterization of this gradient. To this end, we introduce the following order notation,
$$\beta = O(\alpha),\ \text{for}\ \alpha, \beta \in \mathbb{R}_+,\ \text{if there exist constants}\ \underline{C}, \bar{C}\ \text{such that}\ \underline{C}\alpha \le \beta \le \bar{C}\alpha; \qquad \mathbf{M} = O(\alpha),\ \text{for}\ \mathbf{M} \in \mathbb{R}^{d_1 \times d_2},\ \alpha \in \mathbb{R}_+,\ \text{if there exists a constant}\ \bar{C}\ \text{such that}\ \|\mathbf{M}\| \le \bar{C}\alpha. \qquad (15)$$
For simplicity of notation, we will also set $\bar{y}_n = u_n \equiv 0$ for all $n$, $\mathbf{b} = 0$ and $r = 1$ in (8), and we will only consider $\theta = \mathbf{W}_{i,j}$ for some $1 \le i, j \le m$ in the following proposition.
Proposition 3.3 Let $y_n$ be the hidden states generated by the RNN (4). Under the assumption (8) and that $y_n^i = O(\sqrt{t_n})$ for all $1 \le i \le m$, the gradient for long-term dependencies satisfies,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = O\left(\hat{c}\,\delta\,\Delta t^{\frac{3}{2}}\right) + O\left(\hat{c}\,\delta(1+\delta)\,\Delta t^{\frac{5}{2}}\right) + O\left(\Delta t^3\right), \qquad \hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right), \qquad k \ll n. \qquad (16)$$
This precise bound (16) on the gradient shows that although the gradient can be small, i.e. $O(\Delta t^{\frac{3}{2}})$, it is in fact independent of $k$, ensuring that long-term dependencies contribute to gradients at much later steps and mitigating the vanishing gradient problem. The detailed proof is presented in SM§E.5.
Summarizing, we see that the RNN (3) indeed satisfies bounds similar to those of the underlying ODE (2), which resulted in upper bounds on the hidden states and their gradients. However, the lower bound on the gradient (16) is due to the specific choice of this discretization and does not appear to have a continuous analogue, making the specific choice of discretization of (2) crucial for mitigating the vanishing gradient problem.
4 EXPERIMENTS
We present results on a variety of learning tasks with coRNN (3) with n̄ = n − 1, as this version resulted in marginally better performance than the version with n̄ = n. Details of the training procedure for each experiment can be found in SM§B. We wish to clarify here that we use a straightforward hyperparameter tuning protocol based on a validation set and do not use additional performance enhancing tools, such as dropout (Srivastava et al., 2014), gradient clipping (Pascanu et al., 2013) or batch normalization (Ioffe & Szegedy, 2015), which might further improve the performance of coRNNs.
Adding problem. We start with the well-known adding problem (Hochreiter & Schmidhuber, 1997), proposed to test the ability of an RNN to learn (very) long-term dependencies. The input is a two-dimensional sequence of length T , with the first dimension consisting of random numbers drawn from U([0, 1]) and with two non-zero entries (both set to 1) in the second dimension, chosen at random locations, but one each in both halves of the sequence. The output is the sum of two numbers
[Figure 1 (three panels): test MSE vs. training steps (hundreds) for sequence lengths T = 500, T = 2000 and T = 5000.] Figure 1: Results of the adding problem for coRNN, expRNN, FastRNN, anti.sym. RNN and tanh RNN based on three different sequence lengths T, i.e. T = 500, T = 2000 and T = 5000.
of the first dimension at positions, corresponding to the two 1 entries in the second dimension. We compare the proposed coRNN to three recently proposed RNNs, which were explicitly designed to learn LTDs, namely the FastRNN (Kusupati et al., 2018), the antisymmetric (anti.sym.) RNN (Chang et al., 2019) and the expRNN (Lezcano-Casado & Martínez-Rubio, 2019), and to a plain vanilla tanh RNN, with the goal of beating the baseline mean square error (MSE) of 0.167 (which stems from the variance of the baseline output 1). All methods have 128 hidden units (dimensionality of the hidden state y) and the same training protocol is used in all cases. Fig. 1 shows the results for different lengths T of the input sequences. We can see that while the tanh RNN is not able to beat the baseline for any sequence length, the other methods successfully learn the adding task for T = 500. However, in this case, coRNN converges significantly faster and reaches a lower test MSE than other tested methods. When setting the length to the much more challenging case of T = 2000, we see that only coRNN and the expRNN beat the baseline. However, the expRNN fails to reach a desired test MSE of 0.01 within training time. In order to further demonstrate the superiority of coRNN over recently proposed RNN architectures for learning LTDs, we consider the adding problem for T = 5000 and observe that coRNN converges very quickly even in this case, while expRNN fails to consistently beat the baseline. We thus conclude that the coRNN mitigates the vanishing/exploding gradient problem even for very long sequences.
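For reference, a batch of the adding problem can be synthesized directly from the description above; the helper below is a sketch with our own function name and marker placement (one marker per half of the sequence), not the authors' data pipeline.

```python
import numpy as np

def adding_problem_batch(batch_size, T, rng=None):
    """Inputs of shape (batch, T, 2); targets are the sum of the two marked values."""
    rng = rng or np.random.default_rng()
    values = rng.uniform(0.0, 1.0, size=(batch_size, T))   # first input dimension
    markers = np.zeros((batch_size, T))                     # second input dimension
    idx1 = rng.integers(0, T // 2, size=batch_size)         # one marker in the first half
    idx2 = rng.integers(T // 2, T, size=batch_size)         # one marker in the second half
    rows = np.arange(batch_size)
    markers[rows, idx1] = 1.0
    markers[rows, idx2] = 1.0
    inputs = np.stack([values, markers], axis=-1)
    targets = values[rows, idx1] + values[rows, idx2]
    return inputs, targets
```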
Sequential (permuted) MNIST. Sequential MNIST (sMNIST) (Le et al., 2015) is a benchmark for RNNs, in which the model is required to classify an MNIST (LeCun et al., 1998) digit one pixel at a time leading to a classification task with a sequence length of T = 784. In permuted sequential MNIST (psMNIST), a fixed random permutation is applied in order to increase the time-delay between interdependent pixels and to make the problem harder. In Table 1, we compare the test accuracy for coRNN on sMNIST and psMNIST with recently published best case results for other recurrent models, which were explicitly designed to solve long-term dependencies together with baselines corresponding to gated and unitary RNNs. To the best of our knowledge the proposed coRNN outperforms all single-layer recurrent architectures, published in the literature, for both the sMNIST and psMNIST. Moreover in Fig. 2, we present the performance (with respect to number of epochs) of different RNN architectures for psMNIST with the same fixed random permutation and the
same number of hidden units, i.e. 128. As seen from this figure, coRNN clearly outperforms the other architectures, some of which were explicitly designed to learn LTDs, handily for this permutation.
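The sMNIST/psMNIST inputs can be produced by flattening each image into a length-784 pixel sequence and, for psMNIST, applying one fixed random permutation shared across the train, validation and test splits. The sketch below uses our own helper name and normalization.

```python
import numpy as np

def to_sequential_mnist(images, permutation=None):
    """Flatten (n, 28, 28) MNIST images to (n, 784, 1) pixel sequences; permute for psMNIST."""
    seq = images.reshape(images.shape[0], 28 * 28, 1).astype(np.float32) / 255.0
    if permutation is not None:
        seq = seq[:, permutation, :]
    return seq

perm = np.random.default_rng(seed=0).permutation(28 * 28)  # one fixed permutation for all splits
```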
Noise padded CIFAR-10. Another challenging test problem for learning LTDs is the recently proposed noise padded CIFAR-10 experiment by Chang et al. (2019), in which CIFAR-10 data points (Krizhevsky et al., 2009) are fed to the RNN row-wise and flattened along the channels resulting in sequences of length 32. To test the long term memory, entries of uniform random numbers are added such that the resulting sequences have a length of 1000, i.e. the last 968 entries of each sequence are only noise to distract the network. Table 2 shows the result for coRNN together with other recently published best case results. We observe that coRNN readily outperforms other RNN architectures on this benchmark, while requiring only 128 hidden units.
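Constructing the noise-padded sequences follows directly from the description above; in the sketch below, the exact noise distribution, the normalization and the helper name are our own assumptions.

```python
import numpy as np

def noise_pad_cifar(images, total_len=1000, rng=None):
    """Feed CIFAR-10 row-wise (32 rows, 32*3 = 96 features per row), then pad with noise rows."""
    rng = rng or np.random.default_rng()
    n = images.shape[0]
    rows = images.reshape(n, 32, 32 * 3).astype(np.float32) / 255.0      # (n, 32, 96)
    noise = rng.uniform(size=(n, total_len - 32, 32 * 3)).astype(np.float32)
    return np.concatenate([rows, noise], axis=1)                          # (n, 1000, 96)
```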
Human activity recognition. This experiment is based on the human activity recognition data set provided by Anguita et al. (2012). The data set is a collection of tracked human activities, which were measured by an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. Six activities were binarized to obtain two merged classes {Sitting, Laying, Walking_Upstairs} and {Standing, Walking, Walking_Downstairs}, leading to the HAR-2 data set, which was first proposed in Kusupati et al. (2018). Table 3 shows the result for coRNN together with other very recently published best case results on the same data set. We can see that coRNN readily outperforms all other methods. We also ran this experiment on a tiny coRNN with very few parameters, i.e. only 1k. We can see that even in this case, the tiny coRNN beats all baselines. We thus conclude that coRNN can efficiently be used on resource-constrained IoT micro-controllers.
IMDB sentiment analysis. The IMDB data set (Maas et al., 2011) is a collection of 50k movie reviews, where 25k reviews are used for training (with 7.5k of these reviews used for validating) and 25k reviews are used for testing. The aim of this binary sentiment classification task is to decide whether a movie review is positive or negative. We follow the standard procedure by initializing the word embedding with pretrained 100d GloVe (Pennington et al., 2014) vectors and restrict the
dictionary to 25k words. Table 4 shows the results for coRNN and other recently published models, which are trained similarly and have the same number of hidden units, i.e. 128. We can see that coRNN compares favorably with gated baselines (which are known to perform very well on this task), while at the same time requiring significantly fewer parameters.
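The embedding initialization can be sketched as building a (vocabulary × 100) matrix filled with pretrained GloVe vectors where available and random vectors otherwise; the tokenization, the 25k-word dictionary construction and the random scale below are our own simplifications.

```python
import numpy as np

def build_embedding_matrix(vocab, glove, dim=100, rng=None):
    """vocab: {word: index}; glove: {word: np.ndarray of shape (dim,)}."""
    rng = rng or np.random.default_rng()
    emb = rng.normal(scale=0.1, size=(len(vocab), dim)).astype(np.float32)
    for word, idx in vocab.items():
        if word in glove:
            emb[idx] = glove[word]   # pretrained vector where available
    return emb
```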
Further experimental results. To shed further light on the performance of coRNN, we consider the following issues. First, the theory suggested that coRNN mitigates the exploding/vanishing gradient problem as long as the assumptions (8) on the time step Δt and weight matrices W, 𝒲 hold. Clearly one can choose a suitable Δt to enforce (8) before training, but do these assumptions remain valid during training? In SM§E.4, we argue, based on worst-case estimates, that the assumptions will remain valid for possibly a large number of training steps. More pertinently, we can verify experimentally that (8) holds during training. This is demonstrated in Fig. 3, where we show that (8) holds for all LTD tasks during training. Thus, the presented theory applies and one can expect control over hidden state gradients with coRNN. Next, we recall that the frequency parameter γ and the damping parameter ε play a role for coRNNs (see SM§F for the theoretical dependence and Table 8 for the best performing values of ε, γ for each numerical experiment within the range considered in Table 7). How sensitive is the performance of coRNN to the choice of these two parameters? To investigate this dependence, we focus on the noise padded CIFAR-10 experiment and show the results of an ablation study in Fig. 4, where the test accuracy for different coRNNs based on a two-dimensional hyperparameter grid (ε, γ) ∈ [0.8, 1.8] × [5.7, 17.7] (i.e., sufficiently large intervals around the best performing values of ε, γ from Table 8) is plotted. We observe from the figure that although there are reductions in test accuracy for non-optimal values of (ε, γ), there is no large variation and the performance is rather robust with respect to these hyperparameters. Finally, note that we follow standard practice and present best reported results with coRNN as well as other competing RNNs in order to compare the relative performance. However, it is natural to investigate the dependence of these best results on the random initial (before training) values of the weight matrices. To this end, in Table 5 of SM, we report the mean and standard deviation (over 10 retrainings) of the test accuracy with coRNN on various learning tasks and find that the mean value is comparable to the best reported value, with low standard deviations. This indicates further robustness of the performance of coRNNs.
5 DISCUSSION
Inspired by many models in physics, biology and engineering, we proposed a novel RNN architecture (3) based on a model (1) of a network of controlled forced and damped oscillators. For this RNN, we rigorously showed that under verifiable hypotheses on the time step and weight matrices, the hidden states are bounded (5) and obtained precise bounds on the gradients (Jacobians) of the hidden states, (9) and (16). Thus by design, this architecture can mitigate the exploding and vanishing gradient problem (EVGP) for RNNs. We present a series of numerical experiments that include sequential image classification, activity recognition and sentiment analysis, to demonstrate that the proposed coRNN keeps hidden states and their gradients under control, while retaining sufficient expressivity to perform complex tasks. Thus, we provide a novel and promising strategy for designing RNN architectures that are motivated by the functioning of natural systems, have rigorous bounds on hidden state gradients and are robust, accurate, straightforward to train and cheap to evaluate.
This work can be extended in different directions. For instance in this article, we have mainly focused on the learning of tasks with long-term dependencies and observed that coRNNs are comparable in performance to the best published results in the literature. Given that coRNNs are built with networks of oscillators, it is natural to expect that they will perform very well on tasks with oscillatory inputs/outputs, such as the time series analysis of high-resolution biomedical data, for instance EEG (electroencephalography) and EMG (electromyography) data and seismic activity data from geoscience. This will be pursued in a follow-up article. Similarly, applications of coRNN to language modeling will be covered in future work.
However, it is essential to point out that coRNNs might not be suitable for every learning task involving sequential inputs/outputs. As a concrete example, we consider the problem of predicting time series corresponding to a chaotic dynamical system. We recall that by construction, the underlying ODE (2) (and the discretization (3)) do not allow for super-linear (in time) separation of trajectories for nearby inputs. Thus, we cannot expect that coRNNs will be effective at predicting chaotic time series and it is indeed investigated and demonstrated for a Lorenz-96 ODE in SM§A, where we observe that the coRNN is outperformed by LSTMs in the chaotic regime.
Our main theoretical focus in this paper was to demonstrate the possible mitigation of the exploding and vanishing gradient problem. On the other hand, we only provided some heuristics and numerical evidence on why the proposed RNN still has sufficient expressivity. A priori, it is natural to think that the proposed RNN architecture might introduce a strong bias towards oscillatory functions. However, as we argue in SM§C, the proposed coRNN can be significantly more expressive, as the damping, forcing and coupling of several oscillators modulates nonlinear response to yield a very rich and diverse set of output states. This is also evidenced by the ability of coRNNs to deal with many tasks in our numerical experiments, which do not have an explicit oscillatory structure. This sets the stage for a rigorous investigation of universality of the proposed coRNN architecture, as in the case of echo state networks in Grigoryeva & Ortega (2018). A possible approach would be to leverage the ability of the proposed RNN to convert general inputs into a rich set of superpositions of harmonics (oscillatory wave forms). Moreover, the proposed RNN was based on the simplest model of coupled oscillators (1). Much more detailed models of oscillators are available, particularly those that arise in the modeling of biological neurons, Stiefel & Ermentrout (2016) and references therein. An interesting variant of our proposed RNN would be to base the RNN architecture on these more elaborate models, resulting in analogues of the spiking neurons model of Maass (2001) for RNNs.
Supplementary Material for:
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable
architecture for learning long time dependencies
A CHAOTIC TIME-SERIES PREDICTION.
According to proposition E.1, coRNN does not exhibit chaotic behavior by design. While this property is highly desirable for learning long-term dependencies (a slight perturbation of the input should not result in an unbounded perturbation of the prediction), it impairs the performance on tasks where the network has to learn actual chaotic dynamics. To test this numerically, we consider the following version of the Lorenz 96 system (Lorenz, 1996):
$$x_j' = (x_{j+1} - x_{j-2})\,x_{j-1} - x_j + F, \qquad (17)$$
where xj ∈ R for all j = 1, . . . , 5 and F is an external force controlling the level of chaos in the system. Fig. 5 shows a trajectory of the system (17) plotted on the x1x2-plane for a small external
force of F = 0.9 as well as a trajectory for a large external force of F = 8. We can see that while for F = 0.9 the system does not exhibit chaotic behavior, the dynamics for F = 8 is already highly chaotic.
Our task consists of predicting the 25-th next state of a trajectory of the system (17). We provide 128 trajectories of length 2000 for each of the training, validation and test sets. The trajectories are generated by numerically solving the system (17) and evaluating it at 2000 equidistantly distributed discrete time points with distance 0.01. The initial value for each trajectory is chosen uniformly at random on [F − 1/2, F + 1/2]^5 around the equilibrium point (F, . . . , F) of the system (17). Since LSTMs are known to be able to produce chaotic dynamics, even in the autonomous (zero-input) case (Laurent & von Brecht, 2017), we expect them to perform significantly better than coRNN if the underlying system exhibits strong chaotic behavior. Table 6 shows the normalized root mean square error (NRMSE) (RMSE divided by the root mean square of the target trajectory) on the test set for coRNN and LSTM. We can see that indeed, for the non-chaotic case of an external force of F = 0.9, LSTM and coRNN perform similarly. However, when the dynamics become chaotic (in this case using an external force of F = 8), the LSTM clearly outperforms coRNN.
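The trajectories can be generated by integrating (17) numerically on the stated grid; the sketch below uses a classical Runge-Kutta (RK4) integrator and periodic indexing, both of which are our own choices, since the text only specifies the evaluation points and the initial distribution.

```python
import numpy as np

def lorenz96_rhs(x, F):
    """Right-hand side of Eq. (17) with periodic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def simulate_lorenz96(F, n_steps=2000, dt=0.01, d=5, rng=None):
    rng = rng or np.random.default_rng()
    x = F + rng.uniform(-0.5, 0.5, size=d)   # initial value around the equilibrium (F, ..., F)
    traj = np.empty((n_steps, d))
    for n in range(n_steps):
        k1 = lorenz96_rhs(x, F)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
        k4 = lorenz96_rhs(x + dt * k3, F)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[n] = x
    return traj
```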
B TRAINING DETAILS
The IMDB task was conducted on an NVIDIA GeForce GTX 1080 Ti GPU, while all other experiments were run on an Intel Xeon E3-1585Lv5 CPU. The weights and biases of coRNN are randomly initialized according to $U(-\frac{1}{\sqrt{n_{in}}}, \frac{1}{\sqrt{n_{in}}})$, where $n_{in}$ denotes the input dimension of each affine transformation. Instead of treating the parameters Δt, γ and ε as fixed hyperparameters, we can also treat them as trainable network parameters, by constraining Δt to [0, 1] using a sigmoidal activation function and ε, γ > 0 by the use of ReLU, for instance. However, in this case no major difference in performance is obtained. The hyperparameters are optimized with a random search algorithm, where the results of the best performing coRNN (based on the validation set) are reported. The ranges of the hyperparameters for the random search algorithm are provided in Table 7. Table 8 shows the rounded hyperparameters of the best performing coRNN architecture resulting from the random search algorithm for each learning task. We used 100 training epochs for sMNIST, psMNIST and noise padded CIFAR-10, with 20 additional epochs in which the learning rate was reduced by a factor of 10. Additionally, we used 100 epochs for the IMDB task and 250 epochs for the HAR-2 task.
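The initialization and the optional trainable treatment of Δt, γ and ε described above can be sketched as follows; this is our own illustrative code, and the constrained reparameterization mirrors the sigmoid/ReLU construction mentioned in the text.

```python
import torch
import torch.nn as nn

def init_affine(layer: nn.Linear):
    """U(-1/sqrt(n_in), 1/sqrt(n_in)) initialization of weights and biases."""
    bound = 1.0 / layer.in_features ** 0.5
    nn.init.uniform_(layer.weight, -bound, bound)
    if layer.bias is not None:
        nn.init.uniform_(layer.bias, -bound, bound)

# optional: trainable dt, gamma, eps via constrained reparameterization
raw_dt = nn.Parameter(torch.zeros(1))     # dt = torch.sigmoid(raw_dt), lies in (0, 1)
raw_gamma = nn.Parameter(torch.ones(1))   # gamma = torch.relu(raw_gamma) >= 0
raw_eps = nn.Parameter(torch.ones(1))     # eps = torch.relu(raw_eps) >= 0
```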
C HEURISTICS OF NETWORK FUNCTION
At the level of a single neuron, the dynamics of the RNN are relatively straightforward. We start with the scalar case, i.e. m = d = 1, and illustrate different hidden states y as a function of time, for different input signals, in Fig. 6. In this figure, we consider two different input signals: one oscillatory signal given by u(t) = cos(4t), and another which is a combination of step functions. First, we plot the solution y(t) of (1) with the parameters V, b, W, 𝒲, ε = 0 and γ = 1. This simply corresponds to the case of a simple harmonic oscillator (SHO), and the solution is described by a sine wave with the natural frequency of the oscillator. Next, we introduce forcing by the input signal by setting V = 1, with the identity activation function σ(x) = x, leading to a forced damped oscillator (FDO). As seen from Fig. 6, in the case of an oscillatory signal, this leads to a very minor change over the SHO,
whereas for the step function, the change is only in the amplitude of the wave. Next, we add damping by setting ε = 0.25 and see that the resulting forced damped oscillator (FDO) merely damps the amplitude of the waves, without changing their frequency. Then, we consider the case of a controlled oscillator (CFDO) by setting W = −2, V = 2, b = 0.25, 𝒲 = 0.75. As seen from Fig. 6, this leads to a significant change in the wave form in both cases. For the oscillatory input, the output is now a superposition of many different forms, with different amplitudes and frequencies (phases), whereas for the step function input, the phase is shifted. Already, we can see that for a linear controlled oscillator, the output can be very complicated, with the superposition of different waves. This holds true when the activation function is set to σ(x) = tanh(x) (which is our proposed coRNN). For both inputs, the output is a modulated version of the one generated by CFDO, expressed as a superposition of waves. On the other hand, we also plot the solution with a Duffing-type oscillator (DUFF) by setting the activation function as,
$$\sigma(x) = x - \frac{x^3}{3}. \qquad (18)$$
In this case, the solution is very different from the CFDO and coRNN solutions and is heavily damped (either in the output or its derivative). On the other hand, given the chaotic nature of the dynamical system in this case, a slight change in the parameters led to the output blowing up. Thus, a bounded nonlinearity seems essential in this context.
Coupling neurons together further accentuates this generation of superpositions of different waveforms, as seen even in the simplest case of a network with two neurons, shown in Fig. 6 (Bottom row). For this figure, we consider two neurons, i.e. m = 2, and two different network topologies. For the first, we only allow the first neuron to influence the second one and not vice versa. This is enforced with the weight matrices,
$$\mathbf{W} = \begin{bmatrix} -2 & 0 \\ 3 & -2 \end{bmatrix}, \qquad \mathcal{W} = \begin{bmatrix} 0.75 & 0 \\ -1 & 0.75 \end{bmatrix}.$$
We also set V = [2, 2]^⊤, b = [0.25, 0.25]^⊤. Note that in this case (which we name ORD, for ordered connections), the output of the first neuron should be exactly the same as in the uncoupled (UC) case, whereas there is a distinct change in the output of the second neuron, and we see that the first neuron has induced a sharp change in the resulting output wave form. This is well illustrated by the emergence of an approximation to the step function (Bottom Right of Fig. 6), even though the input signal is oscillatory.
Next, we consider the case of fully connected (FC) neurons by setting the weight matrices as,
$$\mathbf{W} = \begin{bmatrix} -2 & 1 \\ 3 & -2 \end{bmatrix}, \qquad \mathcal{W} = \begin{bmatrix} 0.75 & 0.3 \\ -1 & 0.75 \end{bmatrix}.$$
The resulting outputs for the first neuron are now slightly different from the uncoupled case. On the other hand, the approximation of the step-function output for the second neuron is further accentuated.
Even these simple examples illustrate the functioning of a network of controlled oscillators well. The input signal is converted into a superposition of waves with different frequencies and amplitudes, with these quantities being controlled by the weights and biases in (1). Thus, very complicated outputs can be generated by modulating the number, frequencies and amplitudes of the waves. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics, along the lines of emergence of synchronization or bistable heterogeneous behavior seen in systems of idealized oscillators and explained by their mean field limit, see H. Sakaguchi & Kuramoto (1987); Winfree (1967); Strogatz (2001). Thus, we argue that the ability of the network of (forced, driven) oscillators to access a very rich set of output states can lead to high expressivity of the system. The training process selects the weights that modulate frequencies, phases and amplitudes of individual neurons and their interaction to guide the system to its target output.
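The two-neuron examples above can be reproduced qualitatively with a few lines of code. The sketch below integrates (2) with the same explicit time-stepping as in (47) at a small step size; the step size, horizon and the choice ε = 0.25 are our own illustrative assumptions.

```python
import numpy as np

def simulate_two_neurons(W, W_hat, V, b, u, dt=0.01, T=20.0, gamma=1.0, eps=0.25):
    """Explicit time-stepping of the two-neuron network (2) driven by a scalar input u(t)."""
    n_steps = int(T / dt)
    y, z = np.zeros(2), np.zeros(2)
    ys = np.empty((n_steps, 2))
    for n in range(n_steps):
        a = W @ y + W_hat @ z + V * u(n * dt) + b
        z = z + dt * (np.tanh(a) - gamma * y - eps * z)
        y = y + dt * z
        ys[n] = y
    return ys

W = np.array([[-2.0, 0.0], [3.0, -2.0]])        # ordered (one-way) coupling
W_hat = np.array([[0.75, 0.0], [-1.0, 0.75]])
V = np.array([2.0, 2.0])
b = np.array([0.25, 0.25])
traj = simulate_two_neurons(W, W_hat, V, b, u=lambda t: np.cos(4.0 * t))
```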
D BOUNDS ON THE DYNAMICS OF THE ORDINARY DIFFERENTIAL EQUATION
(1)
In this section, we present bounds that show how the continuous time dynamics of the ordinary differential equation (2), modeling non-linear damped and forced networks of oscillators, is constrained. We start with the following estimate on the energy of the solutions of the system (2).
Proposition D.1 Let $y(t), z(t)$ be the solutions of the ODE system (2) at any time $t \in [0, T]$, assume that the damping parameter $\epsilon \ge \frac{1}{2}$, and let the initial data for (2) be given by $y(0) = z(0) \equiv 0$. Then, the solutions are bounded as,
$$y(t)^\top y(t) \le \frac{mt}{\gamma}, \qquad z(t)^\top z(t) \le mt, \qquad \forall t \in (0, T]. \qquad (19)$$
To prove this proposition, we multiply the first equation in (2) with $y(t)^\top$ and the second equation in (2) with $\frac{1}{\gamma}z(t)^\top$ to obtain,
$$\frac{d}{dt}\left(\frac{y(t)^\top y(t)}{2} + \frac{z(t)^\top z(t)}{2\gamma}\right) = \frac{z(t)^\top \sigma(A(t))}{\gamma} - \frac{\epsilon}{\gamma}\, z(t)^\top z(t), \qquad (20)$$
with $A(t) = \mathbf{W}y(t) + \mathcal{W}z(t) + \mathbf{V}u(t) + \mathbf{b}$.
Using the elementary Cauchy inequality repeatedly in (20) results in,
$$\frac{d}{dt}\left(\frac{y(t)^\top y(t)}{2} + \frac{z(t)^\top z(t)}{2\gamma}\right) \le \frac{\sigma(A)^\top \sigma(A)}{2\gamma} + \frac{1}{\gamma}\left(\frac{1}{2} - \epsilon\right) z^\top z \le \frac{m}{2\gamma} \quad \left(\text{as } |\sigma| \le 1 \text{ and } \epsilon \ge \tfrac{1}{2}\right).$$
Integrating the above inequality over the time interval [0, t] and using the fact that the initial data are y(0) = z(0) ≡ 0, we obtain the bounds (19). The above proposition and estimate (19) clearly demonstrate that the dynamics of the network of coupled non-linear oscillators (1) is bounded. The fact that the nonlinear activation function σ = tanh is uniformly bounded in its arguments played a crucial role in deriving the energy bound (19). A straightforward adaptation of this argument leads to the following proposition about the sensitivity of the system to inputs,
Proposition D.2 Let $y(t), z(t)$ be the solutions of the ODE system (2) with respect to the input signal $u(t)$, and let $\bar{y}(t), \bar{z}(t)$ be the solutions of the ODE system (2) with respect to the input signal $\bar{u}(t)$. Assume that the damping parameter $\epsilon \ge \frac{1}{2}$ and that the initial data are given by
$y(0) = z(0) = \bar{y}(0) = \bar{z}(0) \equiv 0$. Then we have the following bound,
$$\left(y(t) - \bar{y}(t)\right)^\top\left(y(t) - \bar{y}(t)\right) \le \frac{4mt}{\gamma}, \qquad \left(z(t) - \bar{z}(t)\right)^\top\left(z(t) - \bar{z}(t)\right) \le 4mt, \qquad \forall t \in (0, T]. \qquad (21)$$
Thus, from the bound (21), there can be at most linear separation (in time) between the trajectories of the ODE (2) for different input signals. Hence, chaotic behavior, which is characterized by the (super-)exponential separation of trajectories, is ruled out by the structure of the ODE system (2). Note that this property of the ODE system is primarily a result of the uniform boundedness of the activation function σ. Using a different activation function such as ReLU might enable an exponential separation of trajectories, which is a prerequisite for a chaotic dynamical system.
D.1 GRADIENT DYNAMICS FOR THE ODE SYSTEM (2)
Let $\theta$ denote the $(i,j)$-th entry of one of the weight matrices $\mathbf{W}, \mathcal{W}, \mathbf{V}$, or the $i$-th entry of the bias vector $\mathbf{b}$. We are interested in finding out how the gradients of the hidden state $y$ (and of the auxiliary hidden state $z$) with respect to the parameter $\theta$ vary with time. Note that these gradients are precisely the objects of interest in the training of an RNN based on a discretization of the ODE system (2). To this end, we differentiate (2) with respect to the parameter $\theta$ and denote
$$y_\theta(t) = \frac{\partial y}{\partial \theta}(t), \qquad z_\theta(t) = \frac{\partial z}{\partial \theta}(t),$$
to obtain,
$$y_\theta' = z_\theta, \qquad z_\theta' = \mathrm{diag}(\sigma'(A))\left[\mathbf{W}y_\theta + \mathcal{W}z_\theta\right] + \mathbf{Z}^{i,j}_{m,\bar{m}}(A)\,\rho - \gamma y_\theta - \epsilon z_\theta. \qquad (22)$$
Here, $\mathbf{Z}^{i,j}_{m,\bar{m}}(A) \in \mathbb{R}^{m \times \bar{m}}$ is the matrix whose entries are all zero except for the $(i,j)$-th entry, which is set to $\sigma'(A(t))_i$, i.e. the $i$-th entry of $\sigma'(A)$, and we have,
$$\rho = y,\ \bar{m} = m,\ \text{if}\ \theta = \mathbf{W}_{i,j}; \qquad \rho = z,\ \bar{m} = m,\ \text{if}\ \theta = \mathcal{W}_{i,j}; \qquad \rho = u,\ \bar{m} = d,\ \text{if}\ \theta = \mathbf{V}_{i,j}; \qquad \rho = 1,\ \bar{m} = 1,\ \text{if}\ \theta = \mathbf{b}_i.$$
We see from (22) that the ODEs governing the gradients with respect to the parameter θ also represent a system of oscillators, but with additional coupling and forcing terms proportional to the hidden states y, z or to the input signal u. As we have already proved with estimate (19) that the hidden states are always bounded, and the input signal is assumed to be bounded, it is natural to expect that the gradients of the states with respect to θ are also bounded. We make this statement explicit in the following proposition, where, for simplicity of exposition, we consider the case θ = W_{i,j}, as the other values of θ are very similar in their behavior.
Proposition D.3 Let $\theta = \mathbf{W}_{i,j}$ and let $y, z$ be the solutions of the ODE system (2). Assume that the weights and the damping parameter satisfy
$\|\mathbf{W}\|_\infty + \|\mathcal{W}\|_\infty \le \epsilon$; then we have the following bounds on the gradients,
$$y_\theta(t)^\top y_\theta(t) + \frac{1}{\gamma}\left(z_\theta(t)^\top z_\theta(t)\right) \le \left[y_\theta(0)^\top y_\theta(0) + \frac{1}{\gamma}\left(z_\theta(0)^\top z_\theta(0)\right)\right]e^{Ct} + \frac{mt^2}{2\gamma^2}, \qquad t \in (0,T], \qquad C = \max\left\{\frac{\|\mathbf{W}\|_1}{\gamma},\ 1 + \|\mathcal{W}\|_1\right\}. \qquad (23)$$
The proof of this proposition follows exactly along the same lines as the proof of proposition D.1 and we skip the details, while noting the crucial role played by the energy bound (19).
We remark that the bound (23) indicates that as long as the initial gradients with respect to θ are bounded and the weights are controlled by the damping parameter, the hidden state gradients remain bounded in time.
E SUPPLEMENT TO THE RIGOROUS ANALYSIS OF CORNN
In this section, we supplement the section on the rigorous analysis of the proposed RNN (4). We start with
E.1 PROOF OF PROPOSITION 3.1
We multiply $(y_{n-1}^\top, z_n^\top)$ to (3) and use the elementary identities,
$$a^\top(a-b) = \frac{a^\top a}{2} - \frac{b^\top b}{2} + \frac{1}{2}(a-b)^\top(a-b), \qquad b^\top(a-b) = \frac{a^\top a}{2} - \frac{b^\top b}{2} - \frac{1}{2}(a-b)^\top(a-b),$$
to obtain the following,
$$\begin{aligned}
\frac{y_n^\top y_n + z_n^\top z_n}{2} &= \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2} - \frac{(z_n - z_{n-1})^\top(z_n - z_{n-1})}{2} + \Delta t\, z_n^\top \sigma(A_{n-1}) - \Delta t\, z_n^\top z_n \\
&\le \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \Delta t\left(\tfrac{1}{2} + \tfrac{\Delta t}{2} - 1\right) z_n^\top z_n + \frac{\Delta t}{2}\,\sigma(A_{n-1})^\top \sigma(A_{n-1}) \\
&\le \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \frac{m\Delta t}{2} \qquad (\text{as } \sigma^2 \le 1 \text{ and } \Delta t < 1).
\end{aligned}$$
Iterating the above inequality $n$ times leads to the energy bound,
$$y_n^\top y_n + z_n^\top z_n \le y_0^\top y_0 + z_0^\top z_0 + nm\Delta t = mt_n, \qquad (24)$$
as y0 = z0 = 0.
E.2 SENSITIVITY TO INPUTS
Next, we examine how changes in the input signal u affect the dynamics. We have the following proposition:
Proposition E.1 Let $y_n, z_n$ be the hidden states of the trained RNN (4) with respect to the input $u = \{u_n\}_{n=1}^N$, and let $\hat{y}_n, \hat{z}_n$ be the hidden states of the same RNN (4), but with respect to the input $\hat{u} = \{\hat{u}_n\}_{n=1}^N$. Then the differences in the hidden states are bounded by,
$$\left(y_n - \hat{y}_n\right)^\top\left(y_n - \hat{y}_n\right) + \left(z_n - \hat{z}_n\right)^\top\left(z_n - \hat{z}_n\right) \le 4mt_n. \qquad (25)$$
The proof of this proposition is completely analogous to the proof of proposition 3.1: we subtract
$$\hat{y}_n = \hat{y}_{n-1} + \Delta t\, \hat{z}_n, \qquad \hat{z}_n = \frac{\hat{z}_{n-1}}{1+\Delta t} + \frac{\Delta t}{1+\Delta t}\sigma(\hat{A}_{n-1}) - \frac{\Delta t}{1+\Delta t}\hat{y}_{n-1}, \qquad \hat{A}_{n-1} := \mathbf{W}\hat{y}_{n-1} + \mathcal{W}\hat{z}_{n-1} + \mathbf{V}\hat{u}_n + \mathbf{b}, \qquad (26)$$
from (4) and multiply $\left((y_n - \hat{y}_n)^\top, (z_n - \hat{z}_n)^\top\right)$ to the difference. The estimate (25) then follows identically to the proof of (5) (presented above) by realizing that $\sigma(A_{n-1}) - \sigma(\hat{A}_{n-1}) \le 2$. Note that the bound (25) ensures that the hidden states can only separate linearly in time for changes in the input. Thus, chaotic behavior, such as for Duffing-type oscillators, characterized by at least exponential separation of trajectories, is ruled out for this proposed RNN, showing that it is stable with respect to changes in the input. This is largely on account of the fact that the activation function σ in (3) is globally bounded.
E.3 PROOF OF PROPOSITION 3.2
From (6), we readily calculate that,
$$\frac{\partial E_n}{\partial X_n} = \left[y_n - \bar{y}_n,\ 0\right]. \qquad (27)$$
Similarly, from (3) we calculate,
$$\frac{\partial^+ X_k}{\partial \theta} = \begin{cases} \left[\left(\frac{\Delta t^2}{1+\Delta t}\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,y_{k-1}\right)^\top, \left(\frac{\Delta t}{1+\Delta t}\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,y_{k-1}\right)^\top\right]^\top & \text{if } \theta = (i,j)\text{-th entry of } \mathbf{W}, \\[1mm] \left[\left(\frac{\Delta t^2}{1+\Delta t}\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,z_{k-1}\right)^\top, \left(\frac{\Delta t}{1+\Delta t}\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,z_{k-1}\right)^\top\right]^\top & \text{if } \theta = (i,j)\text{-th entry of } \mathcal{W}, \\[1mm] \left[\left(\frac{\Delta t^2}{1+\Delta t}\mathbf{Z}^{i,j}_{m,d}(A_{k-1})\,u_k\right)^\top, \left(\frac{\Delta t}{1+\Delta t}\mathbf{Z}^{i,j}_{m,d}(A_{k-1})\,u_k\right)^\top\right]^\top & \text{if } \theta = (i,j)\text{-th entry of } \mathbf{V}, \\[1mm] \left[\left(\frac{\Delta t^2}{1+\Delta t}\mathbf{Z}^{i,1}_{m,1}(A_{k-1})\right)^\top, \left(\frac{\Delta t}{1+\Delta t}\mathbf{Z}^{i,1}_{m,1}(A_{k-1})\right)^\top\right]^\top & \text{if } \theta = i\text{-th entry of } \mathbf{b}, \end{cases} \qquad (28)$$
where $\mathbf{Z}^{i,j}_{m,\bar{m}}(A_{k-1}) \in \mathbb{R}^{m \times \bar{m}}$ is the matrix whose entries are all zero except for the $(i,j)$-th entry, which is set to $\sigma'(A_{k-1})_i$, i.e. the $i$-th entry of $\sigma'(A_{k-1})$. We easily see that $\|\mathbf{Z}^{i,j}_{m,\bar{m}}(A_{k-1})\|_\infty \le 1$ for all $i, j, m, \bar{m}$ and all choices of $A_{k-1}$.
Now, using the definitions of matrix and vector norms and applying (14) in (10), together with (27) and (28), we obtain the following estimate:
$$\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| \le \begin{cases} (\|y_n\|_\infty + \|\bar{y}_n\|_\infty)(1 + 3(n-k)\Delta t^r)\,\delta\,\Delta t\,\|y_{k-1}\|_\infty, & \text{if } \theta \text{ is an entry of } \mathbf{W}, \\ (\|y_n\|_\infty + \|\bar{y}_n\|_\infty)(1 + 3(n-k)\Delta t^r)\,\delta\,\Delta t\,\|z_{k-1}\|_\infty, & \text{if } \theta \text{ is an entry of } \mathcal{W}, \\ (\|y_n\|_\infty + \|\bar{y}_n\|_\infty)(1 + 3(n-k)\Delta t^r)\,\delta\,\Delta t\,\|u_k\|_\infty, & \text{if } \theta \text{ is an entry of } \mathbf{V}, \\ (\|y_n\|_\infty + \|\bar{y}_n\|_\infty)(1 + 3(n-k)\Delta t^r)\,\delta\,\Delta t, & \text{if } \theta \text{ is an entry of } \mathbf{b}. \end{cases} \qquad (29)$$
We will estimate the above term just for the case that $\theta$ is an entry of $\mathbf{W}$; the remaining cases are estimated very similarly.
For simplicity of notation, we let $k-1 \approx k$ and aim to estimate the term,
$$\begin{aligned}
\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| &\le \|y_n\|_\infty\|y_k\|_\infty(1 + 3(n-k)\Delta t^r)\delta\Delta t + \|\bar{y}_n\|_\infty\|y_k\|_\infty(1 + 3(n-k)\Delta t^r)\delta\Delta t \\
&\le m\sqrt{nk}\,\Delta t(1 + 3(n-k)\Delta t^r)\delta\Delta t + \|\bar{y}_n\|_\infty\sqrt{mk}\sqrt{\Delta t}\,(1 + 3(n-k)\Delta t^r)\delta\Delta t \qquad (\text{by } (5)) \\
&\le m\sqrt{nk}\,\delta\Delta t^2 + 3m\sqrt{nk}(n-k)\delta\Delta t^{r+2} + \|\bar{y}_n\|_\infty\sqrt{mk}\sqrt{\Delta t}\,(1 + 3(n-k)\Delta t^r)\delta\Delta t. \qquad (30)
\end{aligned}$$
To further analyze the above estimate, we recall that $n\Delta t = t_n \le 1$ and consider two different regimes. We start with short-term dependencies by letting $k \approx n$, i.e. $n - k = c$ with a constant $c \sim O(1)$, independent of $n, k$. In this case, a straightforward application of the above assumptions in the bound (30) yields,
$$\begin{aligned}
\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| &\le m\sqrt{nk}\,\delta\Delta t^2 + 3m\sqrt{nk}(n-k)\delta\Delta t^{r+2} + \|\bar{y}_n\|_\infty\sqrt{m}\sqrt{t_n}\,\delta\Delta t + \|\bar{y}_n\|_\infty\sqrt{m}\sqrt{t_n}\,c\,\delta\Delta t^{r+1} \\
&\le m t_n \delta\Delta t + m c\, t_n \delta\Delta t^{r+1} + \|\bar{y}_n\|_\infty\sqrt{m}\sqrt{t_n}\,\delta\Delta t + \|\bar{y}_n\|_\infty\sqrt{m}\sqrt{t_n}\,c\,\delta\Delta t^{r+1} \\
&\le t_n m \delta\Delta t + \|\bar{y}_n\|_\infty\sqrt{m}\sqrt{t_n}\,\delta\Delta t \qquad (\text{for } \Delta t \ll 1 \text{ as } r \ge 1/2) \\
&\le m\delta\Delta t + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t. \qquad (31)
\end{aligned}$$
Next, we consider long-term dependencies by setting $k \ll n$ and estimating,
$$\begin{aligned}
\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| &\le m\sqrt{nk}\,\delta\Delta t^2 + 3m\sqrt{nk}(n-k)\delta\Delta t^{r+2} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,n\,\delta\Delta t^{r+\frac{3}{2}} \\
&\le m\sqrt{t_n}\,\delta\Delta t^{\frac{3}{2}} + 3m t_n^{\frac{3}{2}}\delta\Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,t_n\,\delta\Delta t^{r+\frac{1}{2}} \\
&\le m\delta\Delta t^{\frac{3}{2}} + 3m\delta\Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{r+\frac{1}{2}} \qquad (\text{as } t_n < 1) \\
&\le 3m\delta\Delta t^{r+\frac{1}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{r+\frac{1}{2}} \qquad (\text{as } r \le 1 \text{ and } \Delta t \ll 1). \qquad (32)
\end{aligned}$$
Thus, in all cases, we have that,
$$\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| \le 3\delta\Delta t\left(m + \sqrt{m}\,\|\bar{y}_n\|_\infty\right) \qquad (\text{as } r \ge 1/2). \qquad (33)$$
Applying the above estimate in (10) allows us to bound the gradient by,
$$\left|\frac{\partial E_n}{\partial \theta}\right| \le \sum_{1 \le k \le n}\left|\frac{\partial E_n^{(k)}}{\partial \theta}\right| \le 3\delta t_n\left(m + \sqrt{m}\,\|\bar{y}_n\|_\infty\right). \qquad (34)$$
Therefore, the gradient of the loss function (6) can be bounded as,
$$\begin{aligned}
\left|\frac{\partial E}{\partial \theta}\right| &\le \frac{1}{N}\sum_{n=1}^{N}\left|\frac{\partial E_n}{\partial \theta}\right| \le 3\delta\left[\frac{m\Delta t}{N}\sum_{n=1}^{N} n + \frac{\sqrt{m}\,\Delta t}{N}\sum_{n=1}^{N}\|\bar{y}_n\|_\infty\, n\right] \le 3\delta\left[\frac{m\Delta t}{N}\sum_{n=1}^{N} n + \frac{\sqrt{m}\,\bar{Y}\Delta t}{N}\sum_{n=1}^{N} n\right] \\
&\le \frac{3}{2}\,\delta\,(N+1)\Delta t\left(m + \bar{Y}\sqrt{m}\right) \le \frac{3}{2}\,\delta\,(t_N + \Delta t)\left(m + \bar{Y}\sqrt{m}\right) \le \frac{3}{2}\,\delta\,(1 + \Delta t)\left(m + \bar{Y}\sqrt{m}\right) \quad (\text{as } t_N = 1) \\
&\le \frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right), \qquad (35)
\end{aligned}$$
which is the desired estimate (9).
E.4 ON THE ASSUMPTION (8) AND TRAINING
Note that all the estimates were based on the fact that we were able to choose a time step Δt in (3) that enforces the condition (8). For any fixed weights W, 𝒲, we can indeed choose such a value of Δt to satisfy (8). However, we train the RNN to find the weights that minimize the loss function (6). Can we find a hyperparameter Δt such that (8) is satisfied at every step of the stochastic gradient descent method used for training?
To investigate this issue, we consider a simple gradient descent method of the form:
$$\theta_{\ell+1} = \theta_\ell - \zeta\,\frac{\partial E}{\partial \theta}(\theta_\ell). \qquad (36)$$
Note that ζ is the constant (non-adapted) learning rate. We assume for simplicity that $\theta_0 = 0$ (other choices lead to the addition of a constant). Then, a straightforward estimate on the weight is given by,
$$|\theta_{\ell+1}| \le |\theta_\ell| + \zeta\left|\frac{\partial E}{\partial \theta}(\theta_\ell)\right| \le |\theta_\ell| + \zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right) \ \ (\text{by } (35)) \ \le |\theta_0| + \ell\,\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right) = \ell\,\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right). \qquad (37)$$
In order to estimate the minimum number of steps L of the gradient descent method (36) for which the condition (8) is guaranteed to remain satisfied, we set ℓ = L in (37); applying it to the condition (8) leads to the straightforward estimate,
$$L \ge \frac{1}{\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right)\, m\,\Delta t^{1-r}\,\delta}. \qquad (38)$$
Note that the parameter δ < 1, while in general, the learning rate ζ << 1. Thus, as long as r ≤ 1, we see that the assumption (8) holds for a large number of steps of the gradient descent method. We remark that the above estimate (38) is a large underestimate on L. In the experiments presented in this article, we are able to take a very large number of training steps, while the gradients remain within a range (see Fig. 3).
E.5 PROOF OF PROPOSITION 3.3
We start with the following decomposition of the recurrent matrices:
$$\frac{\partial X_i}{\partial X_{i-1}} = \mathbf{M}_{i-1} + \Delta t\,\widetilde{\mathbf{M}}_{i-1}, \qquad \mathbf{M}_{i-1} := \begin{bmatrix} \mathbf{I} & \Delta t\,\mathbf{C}_{i-1} \\ \mathbf{B}_{i-1} & \mathbf{C}_{i-1} \end{bmatrix}, \qquad \widetilde{\mathbf{M}}_{i-1} := \begin{bmatrix} \mathbf{B}_{i-1} & 0 \\ 0 & 0 \end{bmatrix},$$
with $\mathbf{B}, \mathbf{C}$ defined in (12). By the assumption (8), one can readily check that $\|\widetilde{\mathbf{M}}_{i-1}\|_\infty \le \Delta t$ for all $k \le i \le n-1$. We will use an induction argument to show the following representation formula for the product of Jacobians,
$$\frac{\partial X_n}{\partial X_k} = \prod_{k < i \le n}\frac{\partial X_i}{\partial X_{i-1}} = \begin{bmatrix} \mathbf{I} & \Delta t\,\sum\limits_{j=k}^{n-1}\prod\limits_{i=j}^{k}\mathbf{C}_i \\[2mm] \mathbf{B}_{n-1} + \sum\limits_{j=n-2}^{k}\left(\prod\limits_{i=n-1}^{j+1}\mathbf{C}_i\right)\mathbf{B}_j & \prod\limits_{i=n-1}^{k}\mathbf{C}_i \end{bmatrix} + O(\Delta t). \qquad (39)$$
We start with the outermost product and calculate,
$$\frac{\partial X_n}{\partial X_{n-1}}\frac{\partial X_{n-1}}{\partial X_{n-2}} = \left(\mathbf{M}_{n-1} + \Delta t\,\widetilde{\mathbf{M}}_{n-1}\right)\left(\mathbf{M}_{n-2} + \Delta t\,\widetilde{\mathbf{M}}_{n-2}\right) = \mathbf{M}_{n-1}\mathbf{M}_{n-2} + \Delta t\left(\widetilde{\mathbf{M}}_{n-1}\mathbf{M}_{n-2} + \mathbf{M}_{n-1}\widetilde{\mathbf{M}}_{n-2}\right) + O(\Delta t^2).$$
By direct multiplication, we obtain,
$$\mathbf{M}_{n-1}\mathbf{M}_{n-2} = \begin{bmatrix} \mathbf{I} & \Delta t\left(\mathbf{C}_{n-2} + \mathbf{C}_{n-1}\mathbf{C}_{n-2}\right) \\ \mathbf{B}_{n-1} + \mathbf{C}_{n-1}\mathbf{B}_{n-2} & \mathbf{C}_{n-1}\mathbf{C}_{n-2} \end{bmatrix} + \Delta t\begin{bmatrix} \mathbf{C}_{n-1}\mathbf{B}_{n-2} & 0 \\ 0 & \mathbf{B}_{n-1}\mathbf{C}_{n-2} \end{bmatrix}.$$
Using the definitions in (12) and (8), we can easily see that
$$\begin{bmatrix} \mathbf{C}_{n-1}\mathbf{B}_{n-2} & 0 \\ 0 & \mathbf{B}_{n-1}\mathbf{C}_{n-2} \end{bmatrix} = O(\Delta t).$$
Similarly, it is easy to show that $\widetilde{\mathbf{M}}_{n-1}\mathbf{M}_{n-2},\ \mathbf{M}_{n-1}\widetilde{\mathbf{M}}_{n-2} \sim O(\Delta t)$.
Plugging in all the above estimates yields,
$$\frac{\partial X_n}{\partial X_{n-1}}\frac{\partial X_{n-1}}{\partial X_{n-2}} = \begin{bmatrix} \mathbf{I} & \Delta t\left(\mathbf{C}_{n-2} + \mathbf{C}_{n-1}\mathbf{C}_{n-2}\right) \\ \mathbf{B}_{n-1} + \mathbf{C}_{n-1}\mathbf{B}_{n-2} & \mathbf{C}_{n-1}\mathbf{C}_{n-2} \end{bmatrix} + O(\Delta t^2),$$
which is exactly the form of the leading term in (39).
Iterating the above calculation $(n-k)$ times and realizing that $(n-k)\Delta t^2 \approx n\Delta t^2 = t_n\Delta t$ yields the formula (39).
Recall that we have set $\theta = \mathbf{W}_{i,j}$, for some $1 \le i, j \le m$, in proposition 3.3. Directly calculating with (27), (28) and the representation formula (39) yields the formula,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = y_n^\top\,\Delta t^2\delta\,\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,y_{k-1} + y_n^\top\,\Delta t^2\delta\,\mathbf{C}^*\,\mathbf{Z}^{i,j}_{m,m}(A_{k-1})\,y_{k-1} + O(\Delta t^3), \qquad (40)$$
with the matrix $\mathbf{C}^*$ defined as,
$$\mathbf{C}^* := \sum_{j=k}^{n-1}\prod_{i=j}^{k}\mathbf{C}_i,$$
and $\mathbf{Z}^{i,j}_{m,m}(A_{k-1}) \in \mathbb{R}^{m \times m}$ the matrix whose entries are all zero except for the $(i,j)$-th entry, which is set to $\sigma'(a^i_{k-1})$, i.e. the $i$-th entry of $\sigma'(A_{k-1})$.
Note that the formula (40) can be explicitly written as,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = \delta\Delta t^2\,\sigma'(a^i_{k-1})\,y^i_n\,y^j_{k-1} + \delta\Delta t^2\,\sigma'(a^i_{k-1})\sum_{\ell=1}^{m}\mathbf{C}^*_{\ell i}\,y^\ell_n\,y^j_{k-1} + O(\Delta t^3), \qquad (41)$$
with $y^j_n$ denoting the $j$-th element of the vector $y_n$, and
$$a^i_{k-1} := \sum_{\ell=1}^{m}\mathbf{W}_{i\ell}\,y^\ell_{k-1} + \sum_{\ell=1}^{m}\mathcal{W}_{i\ell}\,z^\ell_{k-1}. \qquad (42)$$
By the assumption (8), we can readily see that $\|\mathbf{W}\|_\infty, \|\mathcal{W}\|_\infty \le 1 + \Delta t$. Therefore, by the fact that $\sigma' = \mathrm{sech}^2$, the assumption $y^i_k = O(\sqrt{t_k})$ and (42), we obtain,
$$\hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right) \le \sigma'(a^i_{k-1}) \le 1. \qquad (43)$$
Using (43) in (41), we obtain,
$$\delta\Delta t^2\,\sigma'(a^i_{k-1})\,y^i_n\,y^j_{k-1} = O\left(\hat{c}\,\delta\,\Delta t^{\frac{5}{2}}\right). \qquad (44)$$
Using the definition of $\mathbf{C}_i$, we can expand the product in $\mathbf{C}^*$ and neglect terms of order $O(\Delta t^4)$, to obtain
$$\prod_{i=j}^{k}\mathbf{C}_i = \left(O(1) + O((j-k+1)\,\delta\,\Delta t^2)\right)\mathbf{I}.$$
Summing over $j$ and using the fact that $k \ll n$, we obtain that
$$\mathbf{C}^* = \left(O(n) + O(\delta)\right)\mathbf{I}. \qquad (45)$$
Plugging (45) and (43) into (41) leads to,
$$\delta\Delta t^2\,\sigma'(a^i_{k-1})\sum_{\ell=1}^{m}\mathbf{C}^*_{\ell i}\,y^\ell_n\,y^j_{k-1} = O\left(\hat{c}\,\delta\,\Delta t^{\frac{3}{2}}\right) + O\left(\hat{c}\,\delta^2\,\Delta t^{\frac{5}{2}}\right). \qquad (46)$$
Combining (44) and (46) yields the desired estimate (16).
Remark. A careful examination of the above proof reveals that the constants hidden in the prefactors of the leading term $O(\hat{c}\,\delta\,\Delta t^{3/2})$ of (16) stem from the formula (46). Here, we have used the assumption that $y^i_k = O(\sqrt{t_k})$. Note that this implicitly assumes that the energy bound (5) is equidistributed among all the elements of the vector $y_k$ and results in the obfuscation of the constants in the leading term of (16). Given that the energy bound (5) is too coarse to allow for precise upper and lower bounds on each individual element of the hidden state vector $y_k$, we do not see any other way of, in general, determining the distribution of energy among individual entries of the hidden state vector. Thus, assuming equidistribution seems reasonable. On the other hand, in practice, one has access to all the terms in formula (46) for each numerical experiment and, if one is interested, one can directly evaluate the precise bound on the leading term of the formula (16).
F RIGOROUS ESTIMATES FOR THE RNN (3) WITH n̄ = n − 1 AND GENERAL VALUES OF ε, γ
In this section, we will provide rigorous estimates, similar to those of propositions 3.1, E.1 and 3.2, for the version of coRNN (3) that results by setting n̄ = n − 1 in (3), leading to,
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = z_{n-1} + \Delta t\,\sigma\left(\mathbf{W}y_{n-1} + \mathcal{W}z_{n-1} + \mathbf{V}u_n + \mathbf{b}\right) - \Delta t\,\gamma y_{n-1} - \Delta t\,\epsilon z_{n-1}. \qquad (47)$$
Note that (47) can be equivalently written as,
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = (1 - \epsilon\Delta t)\, z_{n-1} + \Delta t\,\sigma\left(\mathbf{W}y_{n-1} + \mathcal{W}z_{n-1} + \mathbf{V}u_n + \mathbf{b}\right) - \Delta t\,\gamma y_{n-1}. \qquad (48)$$
We will also consider the case of non-unit values of the control parameters γ and ε below.
Bounds on hidden states. We start with the following bound on the hidden states of (47):
Proposition F.1 Let the damping parameter $\epsilon > \frac{1}{2}$ and let the time step $\Delta t$ in the RNN (47) satisfy the following condition,
$$\Delta t < \frac{2\epsilon - 1}{\gamma + \epsilon^2}. \qquad (49)$$
Let $y_n, z_n$ be the hidden states of the RNN (47) for $1 \le n \le N$; then the hidden states satisfy the following (energy) bounds:
$$y_n^\top y_n + \frac{1}{\gamma}\, z_n^\top z_n \le \frac{m t_n}{\gamma}. \qquad (50)$$
We set $A_{n-1} = \mathbf{W}y_{n-1} + \mathcal{W}z_{n-1} + \mathbf{V}u_{n-1} + \mathbf{b}$ and, as in the proof of proposition 3.1, we multiply $(y_{n-1}^\top, \frac{1}{\gamma}z_n^\top)$ to (47), use elementary identities and rearrange terms to obtain,
$$\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} = \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2} - \frac{(z_n - z_{n-1})^\top(z_n - z_{n-1})}{2\gamma} + \frac{\Delta t}{\gamma}\,z_n^\top\sigma(A_{n-1}) - \frac{\epsilon\Delta t}{\gamma}\,z_n^\top z_n + \frac{\epsilon\Delta t}{\gamma}\,z_n^\top(z_n - z_{n-1}).$$
We use a rescaled version of the well-known Cauchy inequality,
$$ab \le \frac{c\,a^2}{2} + \frac{b^2}{2c},$$
for a constant $c > 0$ to be determined, to rewrite the above identity as,
$$\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2} + \left(\frac{\epsilon\Delta t}{2c\gamma} - \frac{1}{2\gamma}\right)(z_n - z_{n-1})^\top(z_n - z_{n-1}) + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1}) + \left(\frac{\Delta t}{2\gamma} + \frac{c\,\epsilon\Delta t}{2\gamma} - \frac{\epsilon\Delta t}{\gamma}\right)z_n^\top z_n.$$
Using the first equation in (47), the above inequality reduces to,
$$\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \left(\frac{\epsilon\Delta t}{2c\gamma} - \frac{1}{2\gamma}\right)(z_n - z_{n-1})^\top(z_n - z_{n-1}) + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1}) + \left(\frac{\Delta t^2}{2} + \frac{\Delta t}{2\gamma} + \frac{c\,\epsilon\Delta t}{2\gamma} - \frac{\epsilon\Delta t}{\gamma}\right)z_n^\top z_n.$$
As long as,
$$\Delta t \le \min\left(\frac{c}{\epsilon},\ \frac{(2-c)\epsilon - 1}{\gamma}\right), \qquad (51)$$
we can easily check that,
$$\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1}) \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{m\Delta t}{2\gamma} \quad (\sigma^2 \le 1).$$
Iterating the above bound down to $n = 0$ and using the zero initial data yields the desired bound (50), as long as we can find a $c$ such that the condition (51) is satisfied. To do so, we equalize the two terms on the right hand side of (51) to obtain,
$$c = \frac{(2\epsilon - 1)\,\epsilon}{\gamma + \epsilon^2}.$$
From the assumption (49) and the fact that $\epsilon > \frac{1}{2}$, we see that such a $c > 0$ always exists for any value of $\gamma > 0$ and (51) is satisfied, which completes the proof.
We remark that the same bound on the hidden states is obtained for both versions of coRNN, i.e. (3) with n̄ = n and (47). However, the difference lies in the constraint on the time step Δt. In contrast to (49), a careful examination of the proof of proposition 3.1 reveals that the condition on the time step for the stability of (3) with n̄ = n is given by,
$$\Delta t < \frac{2\epsilon - 1}{\gamma}, \qquad (52)$$
and is clearly less stringent than the condition (51) for the stability of (47). For instance, in the prototypical case of γ = ε = 1, the stability of (3) with n̄ = n is ensured for any Δt < 1. On the other hand, the stability of (47) is ensured as long as Δt < 1/2. However, it is essential to recall that these conditions are only sufficient to ensure stability and are by no means necessary. Thus, in practice, the coRNN version (47) is found to be stable in the same range of time steps as the version (3) with n̄ = n.
On the exploding and vanishing gradient problems for coRNN (47). Next, we have the following upper bound on the hidden state gradients for the version (47) of coRNN.
Proposition F.2 Let $y_n, z_n$ be the hidden states generated by the RNN (47). We assume that the damping parameter $\epsilon > \frac{1}{2}$ and that the time step $\Delta t$ can be chosen such that, in addition to (51), it also satisfies,
$$\max\left\{\Delta t\,(\gamma + \|\mathbf{W}\|_\infty),\ \Delta t\,\|\mathcal{W}\|_\infty\right\} = \eta \le \widetilde{C}\,\Delta t^r, \qquad \frac{1}{2} \le r \le 1, \qquad (53)$$
with the constant $\widetilde{C}$ independent of the other parameters of the RNN (47). Then the gradient of the loss function $E$ (6) with respect to any parameter $\theta \in \Theta$ is bounded as,
$$\left|\frac{\partial E}{\partial \theta}\right| \le \frac{3\,\widetilde{C}\left(m + \bar{Y}\sqrt{m}\right)}{2\gamma}, \qquad (54)$$
with the constant $\widetilde{C}$ defined in (53) and $\bar{Y} = \max_{1\le n\le N}\|\bar{y}_n\|_\infty$ a bound on the underlying training data.
The proof of this proposition is completely analogous to the proof of proposition 3.2 and we omit the details here.
Note that the bound (54) enforces that hidden state gradients cannot explode for version (47) of coRNN. A similar statement for the vanishing gradient problem is inferred from the proposition below.
Proposition F.3 Let $y_n$ be the hidden states generated by the RNN (47). Under the assumption (53) and that $y_n^i = O\big(\sqrt{\tfrac{t_n}{\gamma}}\big)$ for all $1 \le i \le m$, the gradient for long-term dependencies satisfies,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = O\left(\frac{\hat{c}}{\gamma}\,\Delta t^{\frac{3}{2}}\right) + O\left(\frac{\hat{c}}{\gamma}\,\delta(1+\delta)\,\Delta t^{\frac{5}{2}}\right) + O\left(\Delta t^3\right), \qquad \hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right), \qquad k \ll n. \qquad (55)$$
The proof is a repetition of the steps of the proof of proposition 3.3, with suitable modifications for the structure of the RNN and non-unit ε, γ, and we omit the tedious calculations here. Note that (55) rules out the vanishing gradient problem for the coRNN version (47). | 1. What is the main contribution of the paper, and how does it address the problem of vanishing and exploding gradients in RNNs?
2. What are the strengths and weaknesses of the paper's mathematical analysis, particularly regarding the bounds on gradient norms?
3. How do the numerical results demonstrate the improved trainability and performance of CorNN, and what are the underlying reasons for its success?
4. What are the limitations of CorNN, and under what circumstances might it fail?
5. How might the insights gained from CorNN be applied to biological networks, and what further research is needed to explore this connection?
6. How could expressivity be quantified, and what metrics might be used to evaluate the performance of CorNN and other RNN architectures?
7. Are there any suggestions for improving the figures and plots in the paper, such as using logarithmic or semilogarithmic axes, or including additional tasks? | Review | Review
The paper proposes a novel RNN architecture (coRNN) to tackle the infamous problem of vanishing and exploding gradients in RNNs. The novel coRNN architecture is based on time-discretized, forced, coupled, damped nonlinear oscillators. For the gradient norm of coRNN, analytical lower and upper bounds are calculated, implying that coRNN avoids vanishing and exploding gradients. This is accompanied by numerical results (including code) that demonstrate the improved trainability of coRNN on permuted sequential MNIST, the adding task and noise-padded CIFAR-10 compared to some other RNN architectures (GRU, LSTM, antisymmetric RNN), as well as on IMDB sentiment analysis and a human activity recognition task.
In summary, the paper proposes a useful and mathematically transparent way of tackling the challenge of training RNN on tasks with long-time dependencies.
Weak points of the paper:
In the mathematical analysis the paper claims to "rigorously prove precise bounds on [...] gradients, enabling the solution of the exploding and vanishing gradient problem". If I am not misunderstanding, the mathematical bounds are actually only shown for the initial gradient norm (plus some number of steps afterward in section C4 of the Supplementary Material).
A more in-depth comparison of numerical gradient norms and their respective upper/lower analytical bounds would be desirable, as the lower bound on the gradient in (16) is only given as O(∆t^(3/2)) without any prefactors.
While the numerical results demonstrate improved trainability and performance on a number of tasks, the underlying reasons remain mostly unclear. Supplementary Material B gives some heuristics on how a superposition of forced coupled damped nonlinear oscillator can generate complex output, but it doesn't explain the avoidance of exploding and vanishing gradients and the superiority to other solutions (e.g. gated units like LSTM or GRU).
Missing: What are the limitations of CorNN, when would you expect them to fail?
The link to biological networks is not yet very convincing. While there exist without doubt many oscillations on different scales in the brain, it is not clear how the insights gained here could be applied to the brain.
Some smaller comments:
How could expressivity be quantified?
figure 1: Plotting MSE with a logarithmic or semilogarithmic axis would help to distinguish small errors.
figure 3: It would help to also plot this for other tasks.
figure 3 lines for other tasks would be helpful |
ICLR | Title
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Abstract
Circuits of biological neurons, such as in the functional parts of the brain can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.
1 INTRODUCTION
Recurrent neural networks (RNNs) have achieved tremendous success in a variety of tasks involving sequential (time series) inputs and outputs, ranging from speech recognition to computer vision and natural language processing, among others. However, it is well known that training RNNs to process inputs over long time scales (input sequences) is notoriously hard on account of the so-called exploding and vanishing gradient problem (EVGP) (Pascanu et al., 2013), which stems from the fact that the well-established BPTT algorithm for training RNNs requires computing products of gradients (Jacobians) of the underlying hidden states over very long time scales. Consequently, the overall gradient can grow (to infinity) or decay (to zero) exponentially fast with respect to the number of recurrent interactions.
A variety of approaches have been suggested to mitigate the exploding and vanishing gradient problem. These include adding gating mechanisms to the RNN in order to control the flow of information in the network, leading to architectures such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurring units (GRU) (Cho et al., 2014), that can overcome the vanishing gradient problem on account of the underlying additive structure. However, the gradients might still explode and learning very long term dependencies remains a challenge (Li et al., 2018). Another popular approach for handling the EVGP is to constrain the structure of underlying recurrent weight matrices by requiring them to be orthogonal (unitary), leading to the so-called orthogonal RNNs (Henaff et al., 2016; Arjovsky et al., 2016; Wisdom et al., 2016; Kerg et al., 2019) and references therein. By construction, the resulting Jacobians have eigen- and singular-spectra with unit norm, alleviating the EVGP. However as pointed out by Kerg et al. (2019), imposing such constraints on the recurrent matrices may lead to a significant loss of expressivity of the RNN resulting in inadequate performance on realistic tasks.
In this article, we adopt a different approach, based on the observation that coupled networks of controlled non-linear forced and damped oscillators, which arise in many physical, engineering and biological
systems, such as networks of biological neurons, do seem to ensure expressive representations while constraining the dynamics of state variables and their gradients. This motivates us to propose a novel architecture for RNNs, based on time-discretizations of second-order systems of non-linear ordinary differential equations (ODEs) (1) that model coupled oscillators. Under verifiable hypotheses, we are able to rigorously prove precise bounds on the hidden states of these RNNs and their gradients, enabling a possible solution of the exploding and vanishing gradient problem, while demonstrating through benchmark numerical experiments, that the resulting system still retains sufficient expressivity, i.e. ability to process complex inputs, with a competitive performance, with respect to the state of the art, on a variety of sequential learning tasks.
2 THE PROPOSED RNN
Our proposed RNN is based on the following second-order system of ODEs,
y′′ = σ(Wy + Wy′ + Vu + b) − γy − εy′. (1) Here, t ∈ [0, 1] is the (continuous) time variable, u = u(t) ∈ Rd is the time-dependent input signal, y = y(t) ∈ Rm is the hidden state of the RNN, W, W ∈ Rm×m and V ∈ Rm×d are weight matrices, b ∈ Rm is the bias vector and 0 < γ, ε are parameters, representing the oscillation frequency and the amount of damping (friction) in the system, respectively. σ : R → R is the activation function, set to σ(u) = tanh(u) here. By introducing the so-called velocity variable z = y′(t) ∈ Rm, we rewrite (1) as the first-order system:
y′ = z, z′ = σ(Wy + Wz + Vu + b) − γy − εz. (2) We fix a timestep 0 < ∆t < 1 and define our proposed RNN hidden states at time tn = n∆t ∈ [0, 1] (while omitting the affine output state) as the following IMEX (implicit-explicit) discretization of the first-order system (2):
yn = yn−1 + ∆tzn, zn = zn−1 + ∆tσ(Wyn−1 + Wzn−1 + Vun + b) − ∆tγyn−1 − ∆tεzn̄, (3)
with either n̄ = n or n̄ = n − 1. Note that the only difference between the two versions of the RNN (3) lies in the implicit (n̄ = n) or explicit (n̄ = n − 1) treatment of the damping term −εz in (2), whereas both versions retain the implicit treatment of the first equation in (2).
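To make the update rule concrete, the following minimal NumPy sketch implements one step of (3) with explicit damping (n̄ = n − 1); the function name, the random toy parameters and the default values of ∆t, γ, ε are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cornn_step(y, z, u, W, W_hat, V, b, dt=0.01, gamma=1.0, eps=1.0):
    """One coRNN update, cf. (3) with explicit damping (n_bar = n - 1).

    y, z : hidden state and its 'velocity', shape (m,)
    u    : input at the current step, shape (d,)
    """
    A = W @ y + W_hat @ z + V @ u + b            # pre-activation A_{n-1}
    z_new = z + dt * (np.tanh(A) - gamma * y - eps * z)
    y_new = y + dt * z_new                        # implicit in y: uses z_new
    return y_new, z_new

# toy usage with random (untrained) parameters
m, d = 8, 3
rng = np.random.default_rng(0)
W, W_hat = rng.normal(size=(m, m)), rng.normal(size=(m, m))
V, b = rng.normal(size=(m, d)), np.zeros(m)
y = z = np.zeros(m)
for _ in range(100):
    y, z = cornn_step(y, z, rng.normal(size=d), W, W_hat, V, b)
```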
Motivation and background. To see that the underlying ODE (2) models a coupled network of controlled forced and damped nonlinear oscillators, we start with the single neuron (scalar) case by setting d = m = 1 in (1) and assume an identity activation function σ(x) = x. Setting W = W = V = b = ε = 0 leads to the simple ODE, y′′ + γy = 0, which exactly models simple harmonic motion with frequency γ, for instance that of a mass attached to a spring (Guckenheimer & Holmes, 1990). Letting ε > 0 in (1) adds damping or friction to the system (Guckenheimer & Holmes, 1990). Then, by introducing non-zero V in (1), we drive the system with a driving force proportional to the input signal u(t). The parameters V, b modulate the effect of the driving force, W controls the frequency of oscillations and W the amount of damping in the system. Finally, the tanh activation mediates a non-linear response in the oscillator. In the coupled network (2) with m > 1, each neuron updates its hidden state based on the input signal as well as information from other neurons. The diagonal entries of W (and the scalar hyperparameter γ) control the frequency, whereas the diagonal entries of W (and the hyperparameter ε) determine the amount of damping for each neuron; the non-diagonal entries of these matrices modulate interactions between neurons. Hence, given this behavior of the underlying ODE (2), we term the RNN (3) a coupled oscillatory Recurrent Neural Network (coRNN).
The dynamics of the ODE (2) (and the RNN (3)) for a single neuron are relatively straightforward. As we illustrate in Fig. 6 of supplementary material SM§C, input signals drive the generation of (superpositions of) oscillatory wave-forms, whose amplitude and (multiple) frequencies are controlled by the tunable parameters W,W,V,b. Adding a tanh activation does not change these dynamics much. This is in contrast to truncating tanh to leading non-linear order by setting σ(x) = x− x3/3, which yields a Duffing type oscillator that is characterized by chaotic behavior (Guckenheimer & Holmes, 1990). Adding interactions between neurons leads to further accentuation of this generation of superposed wave forms (see Fig. 6 in SM§C) and even with very simple network topologies, one
sees the emergence of non-trivial non-oscillatory hidden states from oscillatory inputs. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics. Hence, we argue that the ability of a network of (forced, driven) oscillators to access a very rich set of output states may lead to high expressivity of the system, allowing it to approximate outputs from complicated sequential inputs.
Oscillator networks are ubiquitous in nature and in engineering systems (Guckenheimer & Holmes, 1990; Strogatz, 2015) with canonical examples being pendulums (classical mechanics), business cycles (economics), heartbeat (biology) for single oscillators and electrical circuits for networks of oscillators. Our motivating examples arise in neurobiology, where individual biological neurons can be viewed as oscillators with periodic spiking and firing of the action potential. Moreover, functional circuits of the brain, such as cortical columns and prefrontal-striatal-hippocampal circuits, are being increasingly interpreted by networks of oscillatory neurons, see Stiefel & Ermentrout (2016) for an overview. Following well-established paths in machine learning, such as for convolutional neural networks (LeCun et al., 2015), our focus here is to abstract the essence of functional brain circuits being networks of oscillators and design an RNN based on much simpler mechanistic systems, such as those modeled by (2), while ignoring the complicated biological details of neural function.
Related work. There is an increasing trend of basing RNN architectures on ODEs and dynamical systems. These approaches can roughly be classified into two branches, namely RNNs based on discretized ODEs and continuous-time RNNs. Examples of continuous-time approaches include neural ODEs (Chen et al., 2018) with ODE-RNNs (Rubanova et al., 2019) as its recurrent extension as well as E (2017) and references therein, to name just a few. We focus, however, in this article on an ODE-inspired discrete-time RNN, as the proposed coRNN is derived from a discretization of the ODE (1). A good example for a discrete-time ODE-based RNNs is the so-called anti-symmetric RNN of Chang et al. (2019), where the RNN architecture is based on a stable ODE resulting from a skew-symmetric hidden weight matrix, thus constraining the stable (gradient) dynamics of the network. This approach has much in common with previously mentioned unitary/orthogonal/nonnormal RNNs in constraining the structure of the hidden-to-hidden layer weight matrices. However, adding such strong constraints might reduce expressivity of the resulting RNN and might lead to inadequate performance on complex tasks. In contrast to these approaches, our proposed coRNN does not explicitly constrain the weight matrices but relies on the dynamics of the underlying ODE (and the IMEX discretization (3)), to provide gradient stability. Moreover, no gating mechanisms as in LSTMs/GRUs are used in the current version of coRNN. There is also an increasing interest in designing hybrid methods, which use a discretization of an ODE (in particular a Hamiltonian system) in order to learn the continuous representation of the data, see for instance Greydanus et al. (2019); Chen et al. (2020). Overall, our approach here differs from these papers in our use of networks of oscillators to build the RNN.
3 RIGOROUS ANALYSIS OF THE PROPOSED RNN
An attractive feature of the underlying ODE system (2) lies in the fact that the resulting hidden states (and their gradients) are bounded (see SM§D for precise statements and proofs). Hence, one can expect that a suitable discretization of the ODE (2) that preserves these bounds will not have exploding gradients. We claim that one such structure preserving discretization is given by the IMEX discretization that results in the RNN (3) and proceed to derive bounds on this RNN below.
Following standard practice, we set y(0) = z(0) = 0 and, purely for simplicity of exposition, we set the control parameters ε = γ = 1 and n̄ = n in (3), leading to,
yn = yn−1 + ∆tzn, zn = zn−1/(1 + ∆t) + (∆t/(1 + ∆t))σ(An−1) − (∆t/(1 + ∆t))yn−1, An−1 := Wyn−1 + Wzn−1 + Vun + b. (4)
Analogous results and proofs for the case where n̄ = n − 1 and for general values of ε, γ are provided in SM§F.
Bounds on the hidden states. As with the underlying ODE (2), the hidden states of the RNN (3) are bounded, i.e.
Proposition 3.1 Let yn, zn be the hidden states of the RNN (4) for 1 ≤ n ≤ N , then the hidden states satisfy the following (energy) bounds:
yn⊤yn + zn⊤zn ≤ nm∆t = mtn ≤ m. (5)
The proof of the energy bound (5) is provided in SM§E.1 and a straightforward variant of the proof (see SM§E.2) yields an estimate on the sensitivity of the hidden states to changing inputs. As with the underlying ODE (see SM§D) , this bound rules out chaotic behavior of hidden states.
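As a quick sanity check of the bound (5), the sketch below iterates the version (4) with ε = γ = 1, zero input and bias, and randomly chosen (untrained) weights, and asserts the energy bound at every step; all concrete sizes and the seed are arbitrary choices.

```python
import numpy as np

# Minimal check of the energy bound (5): y_n^T y_n + z_n^T z_n <= m * t_n,
# for the version (4) with eps = gamma = 1 and zero input/bias (an assumption).
m, dt, N = 16, 0.01, 200
rng = np.random.default_rng(1)
W, W_hat = rng.normal(size=(m, m)), rng.normal(size=(m, m))
y = z = np.zeros(m)
for n in range(1, N + 1):
    A = W @ y + W_hat @ z
    z = (z + dt * np.tanh(A) - dt * y) / (1.0 + dt)   # implicit damping, cf. (4)
    y = y + dt * z
    assert y @ y + z @ z <= m * n * dt + 1e-12        # energy bound (5)
```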
Bounds on hidden state gradients. We train the RNN (3) to minimize the loss function,
E := (1/N) Σ_{n=1}^{N} En, En = (1/2)‖yn − ȳn‖₂², (6)
with ȳ being the underlying ground truth (training data). During training, we compute gradients of the loss function (6) with respect to the weights and biases Θ = [W,W,V,b], i.e.
∂E/∂θ = (1/N) Σ_{n=1}^{N} ∂En/∂θ, ∀ θ ∈ Θ. (7)
Proposition 3.2 Let yn, zn be the hidden states generated by the RNN (4). We assume that the time step ∆t << 1 can be chosen such that,
max { ∆t(1 + ‖W‖∞)/(1 + ∆t), ∆t‖W‖∞/(1 + ∆t) } = η ≤ ∆t^r, 1/2 ≤ r ≤ 1. (8)
Denoting δ = 1/(1 + ∆t), the gradient of the loss function E (6) with respect to any parameter θ ∈ Θ is bounded as,
|∂E/∂θ| ≤ (3/2)(m + Ȳ√m), (9)
with Ȳ = max_{1≤n≤N} ‖ȳn‖∞ a bound on the underlying training data.
Sketch of the proof. Denoting Xn = [yn, zn], we can apply the chain rule repeatedly (for instance as in Pascanu et al. (2013)) to obtain,
∂En/∂θ = Σ_{1≤k≤n} (∂En/∂Xn)(∂Xn/∂Xk)(∂⁺Xk/∂θ), with the k-th summand denoted by ∂E^(k)_n/∂θ. (10)
Here, the notation ∂⁺Xk/∂θ refers to taking the partial derivative of Xk with respect to the parameter θ, while keeping the other arguments constant. This quantity can be readily calculated from the structure of the RNN (4) and is presented in the detailed proof provided in SM§E.3. From (6), we can directly compute that ∂En/∂Xn = [yn − ȳn, 0]. Repeated application of the chain rule and a direct calculation with (4) yields,
∂Xn/∂Xk = Π_{k<i≤n} ∂Xi/∂Xi−1, ∂Xi/∂Xi−1 = [ I + ∆tBi−1, ∆tCi−1 ; Bi−1, Ci−1 ], (11)
where I is the identity matrix and
Bi−1 = δ∆t (diag(σ′(Ai−1))W − I), Ci−1 = δ (I + ∆t diag(σ′(Ai−1))W). (12)
It is straightforward to calculate, using the assumption (8), that ‖Bi−1‖∞ < η and ‖Ci−1‖∞ ≤ η + δ. Using the definitions of matrix norms and (8), we obtain:
‖∂Xi/∂Xi−1‖∞ ≤ max(1 + ∆t(‖Bi−1‖∞ + ‖Ci−1‖∞), ‖Bi−1‖∞ + ‖Ci−1‖∞) ≤ max(1 + ∆t(δ + 2η), δ + 2η) ≤ 1 + 3∆t^r. (13)
Therefore, using (11), we have
‖∂Xn/∂Xk‖∞ ≤ Π_{k<i≤n} ‖∂Xi/∂Xi−1‖∞ ≤ (1 + 3∆t^r)^(n−k) ≈ 1 + 3(n − k)∆t^r. (14)
Note that we have used an expansion around 1 and neglected terms of O(∆t2r) as ∆t << 1. We remark that the bound (13) is the crux of our argument about gradient control as we see from the structure of the RNN that the recurrent matrices have close to unit norm. The detailed proof is presented in SM§E.3. As the entire gradient of the loss function (6), with respect to the weights and biases of the network, is bounded above in (9), the exploding gradient problem is mitigated for this RNN.
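The sketch below assembles the recurrent Jacobian from (11)-(12) for random weights rescaled so that assumption (8) holds with r = 1/2 (an illustrative choice), and compares its infinity norm with the bound (13); the helper names are ours.

```python
import numpy as np

m, dt, r = 8, 0.01, 0.5
rng = np.random.default_rng(2)

def with_inf_norm(M, target):
    """Rescale M so that its infinity (max absolute row-sum) norm equals `target`."""
    return M * (target / np.abs(M).sum(axis=1).max())

# targets chosen so that (8) holds: ||W|| <= (1+dt)*dt^(r-1) - 1, ||W_hat|| <= (1+dt)*dt^(r-1)
W     = with_inf_norm(rng.normal(size=(m, m)), 0.9 * ((1 + dt) * dt**(r - 1) - 1))
W_hat = with_inf_norm(rng.normal(size=(m, m)), 0.9 * (1 + dt) * dt**(r - 1))

delta = 1.0 / (1.0 + dt)
y, z = rng.normal(size=m), rng.normal(size=m)
Sp = np.diag(1.0 - np.tanh(W @ y + W_hat @ z) ** 2)       # diag(sigma'(A_{i-1}))
B = delta * dt * (Sp @ W - np.eye(m))                      # cf. (12)
C = delta * (np.eye(m) + dt * Sp @ W_hat)
J = np.block([[np.eye(m) + dt * B, dt * C], [B, C]])       # cf. (11)
print(np.abs(J).sum(axis=1).max(), 1 + 3 * dt**r)          # Jacobian norm vs. bound (13)
```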
On the vanishing gradient problem. The vanishing gradient problem (Pascanu et al., 2013) arises if |∂E^(k)_n/∂θ|, defined in (10), → 0 exponentially fast in k, for k << n (long-term dependencies). In that case, the RNN does not have long-term memory, as the contribution of the k-th hidden state to the error at time step tn is infinitesimally small. We already see from (14) that ‖∂Xn/∂Xk‖∞ ≈ 1 (independently of k). Thus, we should not expect the products in (10) to decay fast. In fact, we will provide a much more precise characterization of this gradient. To this end, we introduce the following order-notation,
β = O(α), for α, β ∈ R+, if there exist constants C, C̄ such that Cα ≤ β ≤ C̄α. M = O(α), for M ∈ R^(d1×d2), α ∈ R+, if there exists a constant C such that ‖M‖ ≤ Cα. (15)
For simplicity of notation, we will also set ȳn = un ≡ 0, for all n, b = 0 and r = 1 in (8) and we will only consider θ = Wi,j for some 1 ≤ i, j ≤ m in the following proposition.
Proposition 3.3 Let yn be the hidden states generated by the RNN (4). Under the assumption that y^i_n = O(√tn), for all 1 ≤ i ≤ m, and (8), the gradient for long-term dependencies satisfies,
∂E^(k)_n/∂θ = O(ĉδ∆t^(3/2)) + O(ĉδ(1 + δ)∆t^(5/2)) + O(∆t^3), ĉ = sech²(√(k∆t)(1 + ∆t)), k << n. (16)
This precise bound (16) on the gradient shows that although the gradient can be small, i.e. O(∆t^(3/2)), it is in fact independent of k, ensuring that long-term dependencies contribute to gradients at much later steps and mitigating the vanishing gradient problem. The detailed proof is presented in SM§E.5.
Summarizing, we see that the RNN (3) indeed satisfied similar bounds to the underlying ODE (2) that resulted in upper bounds on the hidden states and its gradients. However, the lower bound on the gradient (16) is due to the specific choice of this discretization and does not appear to have a continuous analogue, making the specific choice of discretization of (2) crucial for mitigating the vanishing gradient problem.
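A small numeric illustration of the prefactor in (16), under the reading of ĉ reconstructed above: since k∆t ≤ 1, ĉ stays O(1) for every k, so the long-term contribution scales like ∆t^(3/2) independently of the position k.

```python
import numpy as np

# c_hat = sech^2(sqrt(k*dt)*(1+dt)) is bounded away from zero for all k with k*dt <= 1,
# so the long-term gradient contribution in (16) scales like dt^(3/2), independent of k.
dt, N = 1.0 / 500, 500
k = np.arange(1, N + 1)
c_hat = 1.0 / np.cosh(np.sqrt(k * dt) * (1 + dt)) ** 2
print(c_hat.min(), c_hat.max())      # both O(1); the minimum is roughly 0.42
print(c_hat.min() * dt ** 1.5)       # order of the smallest long-term contribution
```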
4 EXPERIMENTS
We present results on a variety of learning tasks with coRNN (3) with n̄ = n − 1, as this version resulted in marginally better performance than the version with n̄ = n. Details of the training procedure for each experiment can be found in SM§B. We wish to clarify here that we use a straightforward hyperparameter tuning protocol based on a validation set and do not use additional performance enhancing tools, such as dropout (Srivastava et al., 2014), gradient clipping (Pascanu et al., 2013) or batch normalization (Ioffe & Szegedy, 2015), which might further improve the performance of coRNNs.
Adding problem. We start with the well-known adding problem (Hochreiter & Schmidhuber, 1997), proposed to test the ability of an RNN to learn (very) long-term dependencies. The input is a two-dimensional sequence of length T , with the first dimension consisting of random numbers drawn from U([0, 1]) and with two non-zero entries (both set to 1) in the second dimension, chosen at random locations, but one each in both halves of the sequence. The output is the sum of two numbers
Figure 1: Results of the adding problem for coRNN, expRNN, FastRNN, anti.sym. RNN and tanh RNN based on three different sequence lengths T, i.e. T = 500, T = 2000 and T = 5000 (test MSE versus training steps, in hundreds, with the 0.167 baseline shown for reference).
of the first dimension at positions, corresponding to the two 1 entries in the second dimension. We compare the proposed coRNN to three recently proposed RNNs, which were explicitly designed to learn LTDs, namely the FastRNN (Kusupati et al., 2018), the antisymmetric (anti.sym.) RNN (Chang et al., 2019) and the expRNN (Lezcano-Casado & Martínez-Rubio, 2019), and to a plain vanilla tanh RNN, with the goal of beating the baseline mean square error (MSE) of 0.167 (which stems from the variance of the baseline output 1). All methods have 128 hidden units (dimensionality of the hidden state y) and the same training protocol is used in all cases. Fig. 1 shows the results for different lengths T of the input sequences. We can see that while the tanh RNN is not able to beat the baseline for any sequence length, the other methods successfully learn the adding task for T = 500. However, in this case, coRNN converges significantly faster and reaches a lower test MSE than other tested methods. When setting the length to the much more challenging case of T = 2000, we see that only coRNN and the expRNN beat the baseline. However, the expRNN fails to reach a desired test MSE of 0.01 within training time. In order to further demonstrate the superiority of coRNN over recently proposed RNN architectures for learning LTDs, we consider the adding problem for T = 5000 and observe that coRNN converges very quickly even in this case, while expRNN fails to consistently beat the baseline. We thus conclude that the coRNN mitigates the vanishing/exploding gradient problem even for very long sequences.
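For reference, a minimal sketch of the adding-problem data generation described above; the helper name and batch size are illustrative assumptions.

```python
import numpy as np

def adding_problem_batch(batch_size, T, rng):
    """Generate (input, target) pairs for the adding problem of length T."""
    x = np.zeros((batch_size, T, 2))
    x[:, :, 0] = rng.uniform(0.0, 1.0, size=(batch_size, T))
    y = np.zeros(batch_size)
    for b in range(batch_size):
        i = rng.integers(0, T // 2)          # one marker in the first half
        j = rng.integers(T // 2, T)          # one marker in the second half
        x[b, i, 1] = x[b, j, 1] = 1.0
        y[b] = x[b, i, 0] + x[b, j, 0]       # target: sum of the two marked values
    return x, y

x, y = adding_problem_batch(32, 500, np.random.default_rng(0))
```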
Sequential (permuted) MNIST. Sequential MNIST (sMNIST) (Le et al., 2015) is a benchmark for RNNs, in which the model is required to classify an MNIST (LeCun et al., 1998) digit one pixel at a time leading to a classification task with a sequence length of T = 784. In permuted sequential MNIST (psMNIST), a fixed random permutation is applied in order to increase the time-delay between interdependent pixels and to make the problem harder. In Table 1, we compare the test accuracy for coRNN on sMNIST and psMNIST with recently published best case results for other recurrent models, which were explicitly designed to solve long-term dependencies together with baselines corresponding to gated and unitary RNNs. To the best of our knowledge the proposed coRNN outperforms all single-layer recurrent architectures, published in the literature, for both the sMNIST and psMNIST. Moreover in Fig. 2, we present the performance (with respect to number of epochs) of different RNN architectures for psMNIST with the same fixed random permutation and the
same number of hidden units, i.e. 128. As seen from this figure, coRNN clearly outperforms the other architectures, some of which were explicitly designed to learn LTDs, handily for this permutation.
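A minimal sketch of how the sMNIST/psMNIST sequences can be constructed; it assumes an `images` array of shape (num_images, 28, 28) loaded elsewhere, and the permutation seed is arbitrary.

```python
import numpy as np

def to_sequences(images, permutation=None):
    """Flatten 28x28 images into pixel sequences of length 784; optionally permute."""
    seq = images.reshape(len(images), 28 * 28, 1).astype(np.float32) / 255.0
    if permutation is not None:              # the same fixed permutation for every image
        seq = seq[:, permutation, :]
    return seq

perm = np.random.default_rng(42).permutation(28 * 28)
# sequences_smnist  = to_sequences(images)
# sequences_psmnist = to_sequences(images, perm)
```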
Noise padded CIFAR-10. Another challenging test problem for learning LTDs is the recently proposed noise padded CIFAR-10 experiment by Chang et al. (2019), in which CIFAR-10 data points (Krizhevsky et al., 2009) are fed to the RNN row-wise and flattened along the channels resulting in sequences of length 32. To test the long term memory, entries of uniform random numbers are added such that the resulting sequences have a length of 1000, i.e. the last 968 entries of each sequence are only noise to distract the network. Table 2 shows the result for coRNN together with other recently published best case results. We observe that coRNN readily outperforms other RNN architectures on this benchmark, while requiring only 128 hidden units.
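A minimal sketch of the noise-padded CIFAR-10 sequence construction described above; it assumes an `images` array of shape (N, 32, 32, 3) loaded elsewhere.

```python
import numpy as np

def noise_padded_sequences(images, total_len=1000, rng=None):
    """Read each image row-wise with channels flattened (32 steps of dim 96),
    then pad with uniform noise up to `total_len` steps."""
    rng = rng or np.random.default_rng(0)
    n = len(images)
    rows = images.reshape(n, 32, 32 * 3).astype(np.float32) / 255.0
    noise = rng.uniform(size=(n, total_len - 32, 32 * 3))
    return np.concatenate([rows, noise], axis=1)             # shape (N, 1000, 96)
```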
Human activity recognition. This experiment is based on the human activity recognition data set provided by Anguita et al. (2012). The data set is a collection of tracked human activities, which were measured by an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. Six activities were binarized to obtain two merged classes {Sitting, Laying, Walking_Upstairs} and {Standing, Walking, Walking_Downstairs}, leading to the HAR-2 data set, which was first proposed in Kusupati et al. (2018). Table 3 shows the result for coRNN together with other very recently published best case results on the same data set. We can see that coRNN readily outperforms all other methods. We also ran this experiment on a tiny coRNN with very few parameters, i.e. only 1k. We can see that even in this case, the tiny coRNN beats all baselines. We thus conclude that coRNN can efficiently be used on resource-constrained IoT micro-controllers.
IMDB sentiment analysis. The IMDB data set (Maas et al., 2011) is a collection of 50k movie reviews, where 25k reviews are used for training (with 7.5k of these reviews used for validating) and 25k reviews are used for testing. The aim of this binary sentiment classification task is to decide whether a movie review is positive or negative. We follow the standard procedure by initializing the word embedding with pretrained 100d GloVe (Pennington et al., 2014) vectors and restrict the
dictionary to 25k words. Table 4 shows the results for coRNN and other recently published models, which are trained similarly and have the same number of hidden units, i.e. 128. We can see that coRNN compares favorable with gated baselines (which are known to perform very well on this task), while at the same time requiring significantly less parameters.
Further experimental results. To shed further light on the performance of coRNN, we consider the following issues. First, the theory suggested that coRNN mitigates the exploding/vanishing gradient problem as long as the assumptions (8) on the time step ∆t and weight matrices W, W hold. Clearly one can choose a suitable ∆t to enforce (8) before training, but do these assumptions remain valid during training? In SM§E.4, we argue, based on worst-case estimates, that the assumptions will remain valid for possibly a large number of training steps. More pertinently, we can verify experimentally that (8) holds during training. This is demonstrated in Fig. 3, where we show that (8) holds for all LTD tasks during training. Thus, the presented theory applies and one can expect control over hidden state gradients with coRNN. Next, we recall that the frequency parameter γ and damping parameter ε play a role for coRNNs (see SM§F for the theoretical dependence and Table 8 for the best performing values of ε, γ for each numerical experiment within the range considered in Table 7). How sensitive is the performance of coRNN to the choice of these two parameters? To investigate this dependence, we focus on the noise padded CIFAR-10 experiment and show the results of an ablation study in Fig. 4, where the test accuracy for different coRNNs based on a two-dimensional hyperparameter grid (ε, γ) ∈ [0.8, 1.8] × [5.7, 17.7] (i.e., sufficiently large intervals around the best performing values of ε, γ from Table 8) is plotted. We observe from the figure that although there are reductions in test accuracy for non-optimal values of (ε, γ), there is no large variation and the performance is rather robust with respect to these hyperparameters. Finally, note that we follow standard practice and present the best reported results with coRNN as well as other competing RNNs in order to compare the relative performance. However, it is natural to investigate the dependence of these best results on the random initial (before training) values of the weight matrices. To this end, in Table 5 of the SM, we report the mean and standard deviation (over 10 retrainings) of the test accuracy with coRNN on various learning tasks and find that the mean value is comparable to the best reported value, with low standard deviations. This indicates further robustness of the performance of coRNNs.
5 DISCUSSION
Inspired by many models in physics, biology and engineering, we proposed a novel RNN architecture (3) based on a model (1) of a network of controlled forced and damped oscillators. For this RNN, we rigorously showed that under verifiable hypotheses on the time step and weight matrices, the hidden states are bounded (5) and obtained precise bounds on the gradients (Jacobians) of the hidden states, (9) and (16). Thus by design, this architecture can mitigate the exploding and vanishing gradient problem (EVGP) for RNNs. We present a series of numerical experiments that include sequential image classification, activity recognition and sentiment analysis, to demonstrate that the proposed coRNN keeps hidden states and their gradients under control, while retaining sufficient expressivity to perform complex tasks. Thus, we provide a novel and promising strategy for designing RNN architectures that are motivated by the functioning of natural systems, have rigorous bounds on hidden state gradients and are robust, accurate, straightforward to train and cheap to evaluate.
This work can be extended in different directions. For instance in this article, we have mainly focused on the learning of tasks with long-term dependencies and observed that coRNNs are comparable in performance to the best published results in the literature. Given that coRNNs are built with networks of oscillators, it is natural to expect that they will perform very well on tasks with oscillatory inputs/outputs, such as the time series analysis of high-resolution biomedical data, for instance EEG (electroencephalography) and EMG (electromyography) data and seismic activity data from geoscience. This will be pursued in a follow-up article. Similarly, applications of coRNN to language modeling will be covered in future work.
However, it is essential to point out that coRNNs might not be suitable for every learning task involving sequential inputs/outputs. As a concrete example, we consider the problem of predicting time series corresponding to a chaotic dynamical system. We recall that by construction, the underlying ODE (2) (and the discretization (3)) do not allow for super-linear (in time) separation of trajectories for nearby inputs. Thus, we cannot expect that coRNNs will be effective at predicting chaotic time series and it is indeed investigated and demonstrated for a Lorenz-96 ODE in SM§A, where we observe that the coRNN is outperformed by LSTMs in the chaotic regime.
Our main theoretical focus in this paper was to demonstrate the possible mitigation of the exploding and vanishing gradient problem. On the other hand, we only provided some heuristics and numerical evidence on why the proposed RNN still has sufficient expressivity. A priori, it is natural to think that the proposed RNN architecture might introduce a strong bias towards oscillatory functions. However, as we argue in SM§C, the proposed coRNN can be significantly more expressive, as the damping, forcing and coupling of several oscillators modulates nonlinear response to yield a very rich and diverse set of output states. This is also evidenced by the ability of coRNNs to deal with many tasks in our numerical experiments, which do not have an explicit oscillatory structure. This sets the stage for a rigorous investigation of universality of the proposed coRNN architecture, as in the case of echo state networks in Grigoryeva & Ortega (2018). A possible approach would be to leverage the ability of the proposed RNN to convert general inputs into a rich set of superpositions of harmonics (oscillatory wave forms). Moreover, the proposed RNN was based on the simplest model of coupled oscillators (1). Much more detailed models of oscillators are available, particularly those that arise in the modeling of biological neurons, Stiefel & Ermentrout (2016) and references therein. An interesting variant of our proposed RNN would be to base the RNN architecture on these more elaborate models, resulting in analogues of the spiking neurons model of Maass (2001) for RNNs.
Supplementary Material for:
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable
architecture for learning long time dependencies
A CHAOTIC TIME-SERIES PREDICTION.
According to proposition E.1, coRNN does not exhibit chaotic behavior by design. While this property is highly desirable for learning long-term dependencies (a slight perturbation of the input should not result in an unbounded perturbation of the prediction), it impairs the performance on tasks, where the network has to learn actual chaotic dynamics. To test this numerically, we consider the following version of the Lorenz 96 system: (Lorenz, 1996):
x′_j = (x_{j+1} − x_{j−2})x_{j−1} − x_j + F, (17)
where xj ∈ R for all j = 1, . . . , 5 and F is an external force controlling the level of chaos in the system. Fig. 5 shows a trajectory of the system (17) plotted on the x1x2-plane for a small external
force of F = 0.9 as well as a trajectory for a large external force of F = 8. We can see that while for F = 0.9 the system does not exhibit chaotic behavior, the dynamics for F = 8 is already highly chaotic.
Our task consists of predicting the 25-th next state of a trajectory of the system (17). We provide 128 trajectories of length 2000 for each of the training, validation and test sets. The trajectories are generated by numerically solving the system (17) and evaluating it at 2000 equidistantly distributed discrete time points with distance 0.01. The initial value for each trajectory is chosen uniform at random on [F − 1/2, F + 1/2]5 around the equilibrium point (F, . . . , F ) of the system (17). Since LSTMs are known to be able to produce chaotic dynamics, even in the autonomous (zero-entry) case (Laurent & von Brecht, 2017), we expect them to perform significantly better than coRNN if the underlying system exhibits strong chaotic behavior. Table 6 shows the normalized root mean square error (NRMSE) (RMSE divided by the root mean square of the target trajectory) on the test set for coRNN and LSTM. We can see that indeed for the non-chaotic case of using an external force of F = 0.9 LSTM and coRNN perform similarly. However, when the dynamics get chaotic (in this case using an external force of F = 8), the LSTM clearly outperforms coRNN.
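A minimal sketch of generating Lorenz-96 trajectories (17); the explicit Euler integrator is an assumption, as the text only states that (17) is solved numerically and sampled at time step 0.01.

```python
import numpy as np

def lorenz96_rhs(x, F):
    """Right-hand side of (17) for a 5-dimensional state, using cyclic indexing."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def trajectory(F, steps=2000, dt=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    x = F + rng.uniform(-0.5, 0.5, size=5)   # initial value around the equilibrium (F, ..., F)
    traj = np.empty((steps, 5))
    for t in range(steps):
        x = x + dt * lorenz96_rhs(x, F)      # explicit Euler step (an assumption)
        traj[t] = x
    return traj

chaotic = trajectory(F=8.0)
regular = trajectory(F=0.9)
```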
B TRAINING DETAILS
The IMDB task was conducted on an NVIDIA GeForce GTX 1080 Ti GPU, while all other experiments were run on an Intel Xeon E3-1585Lv5 CPU. The weights and biases of coRNN are randomly initialized according to U(−1/√nin, 1/√nin), where nin denotes the input dimension of each affine transformation. Instead of treating the parameters ∆t, γ and ε as fixed hyperparameters, we can also treat them as trainable network parameters by constraining ∆t to [0, 1] using a sigmoidal activation function and ε, γ > 0 by the use of ReLU, for instance. However, in this case no major difference in performance is obtained. The hyperparameters are optimized with a random search algorithm, where the results of the best performing coRNN (based on the validation set) are reported. The ranges of the hyperparameters for the random search algorithm are provided in Table 7. Table 8 shows the rounded hyperparameters of the best performing coRNN architecture resulting from the random search algorithm for each learning task. We used 100 training epochs for sMNIST, psMNIST and noise padded CIFAR-10, with additional 20 epochs in which the learning rate was reduced by a factor of 10. Additionally, we used 100 epochs for the IMDB task and 250 epochs for the HAR-2 task.
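A minimal sketch of the stated initialization U(−1/√nin, 1/√nin); the layer sizes in the usage example are illustrative.

```python
import numpy as np

def init_affine(n_out, n_in, rng):
    """Draw weights and biases from U(-1/sqrt(n_in), 1/sqrt(n_in))."""
    bound = 1.0 / np.sqrt(n_in)
    weight = rng.uniform(-bound, bound, size=(n_out, n_in))
    bias = rng.uniform(-bound, bound, size=n_out)
    return weight, bias

rng = np.random.default_rng(0)
W, b_W = init_affine(128, 128, rng)   # hidden-to-hidden map, 128 hidden units
V, b_V = init_affine(128, 96, rng)    # input-to-hidden map for 96-dimensional inputs
```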
C HEURISTICS OF NETWORK FUNCTION
At the level of a single neuron, the dynamics of the RNN is relatively straightforward. We start with the scalar case, i.e. m = d = 1 and illustrate different hidden states y as a function of time, for different input signals, in Fig. 6. In this figure, we consider two different input signals, one oscillatory signal given by u(t) = cos(4t) and another is a combination of step functions. First, we plot the solution y(t) of (1), with the parameters V,b,W,W, = 0 and γ = 1. This simply corresponds to the case of a simple harmonic oscillator (SHO) and the solution is described by a sine wave with the natural frequency of the oscillator. Next, we introduce forcing by the input signal by setting V = 1 and the activation function is the identity σ(x) = x, leading to a forced damped oscillator (FDO). As seen from Fig. 6, in the case of an oscillatory signal, this leads to a very minor change over the SHO,
whereas for the step function, the change is only in the amplitude of the wave. Next, we add damping by setting = 0.25 and see that the resulting forced damped oscillator (FDO), merely damps the amplitude of the waves, without changing their frequency. Then, we consider the case of controlled oscillator (CFDO) by setting W = −2,V = 2,b = 0.25,W = 0.75. As seen from Fig. 6, this leads to a significant change in the wave form in both cases. For the oscillatory input, the output is now a superposition of many different forms, with different amplitudes and frequencies (phases) whereas for the step function input, the phase is shifted. Already, we can see that for a linear controlled oscillator, the output can be very complicated with the superposition of different waves. This holds true when the activation function is set to σ(x) = tanh(x) (which is our proposed coRNN). For both inputs, the output is a modulated version of the one generated by CFDO, expressed as a superposition of waves. On the other hand, we also plot the solution with a Duffing type oscillator (DUFF) by setting the activation function as,
σ(x) = x − x³/3. (18)
In this case, the solution is very different from the CFDO and coRNN solutions and is heavily damped (either in the output or its derivative). On the other hand, given the chaotic nature of the dynamical system in this case, a slight change in the parameters led to the output blowing up. Thus, a bounded nonlinearity seems essential in this context.
Coupling neurons together further accentuates this generation of superpositions of different waveforms, as seen even with the simplest case of a network with two neurons, shown in Fig. 6 (Bottom row). For this figure, we consider two neurons, i.e m = 2 and two different network topologies. For the first, we only allow the first neuron to influence the second one and not vice versa. This is enforced with the weight matrices,
W = [ −2 0 3 −2 ] , W = [ 0.75 0 −1 0.75 ] .
We also set V = [2, 2]>,b = [0.25, 0.25]>. Note that in this case (we name as ORD (for ordered connections)), the output of the first neuron should be exactly the same as in the uncoupled (UC) case, whereas there is a distinct change in the output of the second neuron and we see that the first neuron has modulated a sharp change in the resulting output wave form. It is well illustrated by the emergence of an approximation to the step function (Bottom Right of Fig. 6), even though the input signal is oscillatory.
Next, we consider the case of fully connected (FC) neurons by setting the weight matrices as,
W = [ −2 1 3 −2 ] , W = [ 0.75 0.3 −1 0.75 ] .
The resulting outputs for the first neuron are now slightly different from the uncoupled case. On the the other hand, the approximation of step function output for the second neuron is further accentuated.
Even these simple examples illustrate the functioning of a network of controlled oscillators well. The input signal is converted into a superposition of waves with different frequencies and amplitudes, with these quantities being controlled by the weights and biases in (1). Thus, very complicated outputs can be generated by modulating the number, frequencies and amplitudes of the waves. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics, along the lines of emergence of synchronization or bistable heterogeneous behavior seen in systems of idealized oscillators and explained by their mean field limit, see H. Sakaguchi & Kuramoto (1987); Winfree (1967); Strogatz (2001). Thus, we argue that the ability of the network of (forced, driven) oscillators to access a very rich set of output states can lead to high expressivity of the system. The training process selects the weights that modulate frequencies, phases and amplitudes of individual neurons and their interaction to guide the system to its target output.
D BOUNDS ON THE DYNAMICS OF THE ORDINARY DIFFERENTIAL EQUATION
(1)
In this section, we present bounds that show how the continuous time dynamics of the ordinary differential equation (2), modeling non-linear damped and forced networks of oscillators, is constrained. We start with the following estimate on the energy of the solutions of the system (2).
Proposition D.1 Let y(t), z(t) be the solutions of the ODE system (2) at any time t ∈ [0, T] and assume that the damping parameter ε ≥ 1/2 and that the initial data for (2) is given by y(0) = z(0) ≡ 0. Then, the solutions are bounded as,
y(t)⊤y(t) ≤ mt/γ, z(t)⊤z(t) ≤ mt, ∀ t ∈ (0, T]. (19)
To prove this proposition, we multiply the first equation in (2) with y(t)⊤ and the second equation in (2) with (1/γ)z(t)⊤ to obtain,
d/dt ( y(t)⊤y(t)/2 + z(t)⊤z(t)/(2γ) ) = z(t)⊤σ(A(t))/γ − (ε/γ) z(t)⊤z(t), (20)
with A(t) = Wy(t) + Wz(t) + Vu(t) + b.
Using the elementary Cauchy’s inequality repeatedly in (20) results in,
d/dt ( y(t)⊤y(t)/2 + z(t)⊤z(t)/(2γ) ) ≤ σ(A)⊤σ(A)/(2γ) + (1/γ)(1/2 − ε) z⊤z ≤ m/(2γ) (as |σ| ≤ 1 and ε ≥ 1/2).
Integrating the above inequality over the time interval [0, t] and using the fact that the initial data are y(0) = z(0) ≡ 0, we obtain the bounds (19). The above proposition and estimate (19) clearly demonstrate that the dynamics of the network of coupled non-linear oscillators (1) is bounded. The fact that the nonlinear activation function σ = tanh is uniformly bounded in its arguments played a crucial role in deriving the energy bound (19). A straightforward adaptation of this argument leads to the following proposition about the sensitivity of the system to inputs,
Proposition D.2 Let y(t), z(t) be the solutions of the ODE system (2) with respect to the input signal u(t). Let ȳ(t), z̄(t) be the solutions of the ODE system (2), but with respect to the input signal ū(t). Assume that the damping parameter ε ≥ 1/2 and that the initial data are given by
y(0) = z(0) = ȳ(0) = z̄(0) ≡ 0. Then we have the following bound,
(y(t) − ȳ(t))⊤(y(t) − ȳ(t)) ≤ 4mt/γ, (z(t) − z̄(t))⊤(z(t) − z̄(t)) ≤ 4mt, ∀ t ∈ (0, T]. (21)
Thus, from the bound (21), there can be at most linear separation (in time) between the trajectories of the ODE (2) for different input signals. Hence, chaotic behavior, which is characterized by the (super-)exponential separation of trajectories, is ruled out by the structure of the ODE system (2). Note that this property of the ODE system was primarily a result of the uniform boundedness of the activation function σ. Using a different activation function such as ReLU might make it possible to obtain an exponential separation of trajectories, which is a prerequisite for a chaotic dynamical system.
D.1 GRADIENT DYNAMICS FOR THE ODE SYSTEM (2)
Let θ denote the i, j-th entry of the Weight matrices W,W,V or the i-th entry of the bias vector b. We are interested in finding out how the gradients of the hidden state y (and the auxiliary hidden state z) with respect to parameter θ, vary with time. Note that these gradients are precisely the objects of interest in the training of an RNN, based on a discretization of the ODE system (2). To this end, we differentiate (2) with respect to the parameter θ and denote
yθ(t) = ∂y/∂θ (t), zθ(t) = ∂z/∂θ (t),
to obtain,
y′θ = zθ, z′θ = diag(σ′(A)) [Wyθ + Wzθ] + Z^{i,j}_{m,m̄}(A)ρ − γyθ − εzθ. (22)
As introduced before, Zi,jm,m̄(A) ∈ Rm×m̄ is a matrix with all elements are zero except for the (i, j)-th entry which is set to σ′(A(t))i, i.e. the i-th entry of σ′(A), and we have,
ρ = y, m̄ = m, if θ = Wi,j ,
ρ = z, m̄ = m, if θ = Wi,j ,
ρ = u, m̄ = d, if θ = Vi,j ,
ρ = 1, m̄ = 1, if θ = bi.
We see from (22) that the ODEs governing the gradients with respect to the parameter θ also represent a system of oscillators but with additional coupling and forcing terms, proportional to the hidden states y, z or input signal u. As we have already proved with estimate (19) that the hidden states are always bounded and the input signal is assumed to be bounded, it is natural to expect that the gradients of the states with respect to θ are also bounded. We make this statement explicit in the following proposition, which for simplicity of exposition, we consider the case of θ = Wi,j , as the other values of θ are very similar in their behavior.
Proposition D.3 Let θ = Wi,j and y, z be the solutions of the ODE system (2). Assume that the weights and the damping parameter ε satisfy
‖W‖∞ + ‖W‖∞ ≤ ε; then we have the following bounds on the gradients,
yθ(t)⊤yθ(t) + (1/γ)(zθ(t)⊤zθ(t)) ≤ [yθ(0)⊤yθ(0) + (1/γ)(zθ(0)⊤zθ(0))] e^{Ct} + mt²/(2γ²), t ∈ (0, T],
C = max {‖W‖₁/γ, 1 + ‖W‖₁}. (23)
The proof of this proposition follows exactly along the same lines as the proof of proposition D.1 and we skip the details, while noting the crucial role played by the energy bound (19).
We remark that the bound (23) indicates that as long as the initial gradients with respect to θ are bounded and the weights are controlled by the damping parameter, the hidden state gradients remain bounded in time.
E SUPPLEMENT TO THE RIGOROUS ANALYSIS OF CORNN
In this section, we supplement the section on the rigorous analysis of the proposed RNN (4). We start with
E.1 PROOF OF PROPOSITION 3.1
We multiply (3) with (yn−1⊤, zn⊤) and use the elementary identities,
a⊤(a − b) = a⊤a/2 − b⊤b/2 + (a − b)⊤(a − b)/2, b⊤(a − b) = a⊤a/2 − b⊤b/2 − (a − b)⊤(a − b)/2,
to obtain the following,
(yn⊤yn + zn⊤zn)/2 = (yn−1⊤yn−1 + zn−1⊤zn−1)/2 + (yn − yn−1)⊤(yn − yn−1)/2 − (zn − zn−1)⊤(zn − zn−1)/2 + ∆t zn⊤σ(An−1) − ∆t zn⊤zn
≤ (yn−1⊤yn−1 + zn−1⊤zn−1)/2 + ∆t (1/2 + ∆t/2 − 1) zn⊤zn + (∆t/2) σ(An−1)⊤σ(An−1)
≤ (yn−1⊤yn−1 + zn−1⊤zn−1)/2 + m∆t/2 (as σ² ≤ 1 and ∆t < 1).
Iterating the above inequality n times leads to the energy bound,
yn⊤yn + zn⊤zn ≤ y0⊤y0 + z0⊤z0 + nm∆t = mtn, (24)
as y0 = z0 = 0.
E.2 SENSITIVITY TO INPUTS
Next, we examine how changes in the input signal u affect the dynamics. We have the following proposition:
Proposition E.1 Let yn, zn be the hidden states of the trained RNN (4) with respect to the input u = {un}_{n=1}^N and let ȳn, z̄n be the hidden states of the same RNN (4), but with respect to the input ū = {ūn}_{n=1}^N; then the differences in the hidden states are bounded by,
(yn − ȳn)⊤(yn − ȳn) + (zn − z̄n)⊤(zn − z̄n) ≤ 4mtn. (25)
The proof of this proposition is completely analogous to the proof of proposition 3.1: we subtract
ȳn = ȳn−1 + ∆tz̄n, z̄n = z̄n−1/(1 + ∆t) + (∆t/(1 + ∆t))σ(Ān−1) − (∆t/(1 + ∆t))ȳn−1, Ān−1 := Wȳn−1 + Wz̄n−1 + Vūn + b, (26)
from (4) and multiply the difference with ((yn − ȳn)⊤, (zn − z̄n)⊤). The estimate (25) follows identically to the proof of (5) (presented above) by realizing that |σ(An−1) − σ(Ān−1)| ≤ 2. Note that the bound (25) ensures that the hidden states can only separate linearly in time for changes in the input. Thus, chaotic behavior, such as for Duffing-type oscillators, characterized by at least exponential separation of trajectories, is ruled out for this proposed RNN, showing that it is stable with respect to changes in the input. This is largely on account of the fact that the activation function σ in (3) is globally bounded.
E.3 PROOF OF PROPOSITION 3.2
From (6), we readily calculate that,
∂En ∂Xn = [yn − ȳn, 0] . (27)
Similarly from (3), we calculate,
∂+Xk ∂θ = [( ∆t2 1+∆tZ i,j m,m(Ak−1)yk−1 )> , ( ∆t 1+∆tZ i,j m,m(Ak−1)yk−1 )>]> if θ = (i, j)−th entry of W,[( ∆t2 1+∆tZ i,j m,m(Ak−1)zk−1 )> , ( ∆t 1+∆tZ i,j m,m(Ak−1)zk−1 )>]> if θ = (i, j)−th entry of W,[( ∆t2 1+∆tZ i,j m,d(Ak−1)uk )> , ( ∆t 1+∆tZ i,j m,d(Ak−1)uk )>]> if θ = (i, j)−th entry of V,[( ∆t2
1+∆tZ i,1 m,1(Ak−1)
)> , (
∆t 1+∆tZ i,1 m,1(Ak−1)
)>]> if θ = i−th entry of b,
(28) where Zi,jm,m̄(Ak−1) ∈ Rm×m̄ is a matrix with all elements are zero except for the (i, j)-th entry which is set to σ′(Ak−1)i, i.e. the i-th entry of σ′(Ak−1). We easily see that ‖Zi,jm,m̄(Ak−1)‖∞ ≤ 1 for all i, j,m, m̄ and all choices of Ak−1.
Now, using definitions of matrix and vector norms and applying (14) in (10), together with (27) and (28), we obtain the following estimate on the norm:
∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ (‖yn‖∞ + ‖ȳn‖∞)(1 + 3(n− k)∆tr)δ∆t‖yk−1‖∞, if θ is entry of W, (‖yn‖∞ + ‖ȳn‖∞)(1 + 3(n− k)∆tr)δ∆t‖zk−1‖∞, if θ is entry of W, (‖yn‖∞ + ‖ȳn‖∞)(1 + 3(n− k)∆tr)δ∆t‖uk‖∞, if θ is entry of V, (‖yn‖∞ + ‖ȳn‖∞)(1 + 3(n− k)∆tr)δ∆t, if θ is entry of b. (29)
We will estimate the above term, just for the case of θ is an entry of W, the rest of the terms are very similar to estimate.
For simplicity of notation, we let k − 1 ≈ k and aim to estimate the term,∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ ‖yn‖∞‖yk‖∞(1 + 3(n− k)∆tr)δ∆t+ ‖ȳn‖∞‖yk‖∞(1 + 3(n− k)∆tr)δ∆t ≤ m √ nk∆t(1 + 3(n− k)∆tr)δ∆t+ ‖ȳn‖∞ √ mk √
∆t(1 + 3(n− k)∆tr)δ∆t (by (5)) ≤ m √ nkδ∆t2 + 3m √ nk(n− k)δ∆tr+2 + ‖ȳn‖∞ √ mk √
∆t(1 + 3(n− k)∆tr)δ∆t. (30)
To further analyze the above estimate, we recall that n∆t = tn ≤ 1 and consider two different regimes. Let us start by considering short-term dependencies by letting k ≈ n, i.e n− k = c with constant c ∼ O(1), independent of n, k. In this case, a straightforward application of the above assumptions in the bound (30) yields,∣∣∣∣∣∂E(k)n∂θ
∣∣∣∣∣ ≤ m√nkδ∆t2 + 3m√nk(n− k)δ∆tr+2 + ‖ȳn‖∞√m√tnδ∆t+ ‖ȳn‖∞√m√tncδ∆tr+1 ≤ mtnδ∆t+mctnδ∆tr+1 + ‖ȳn‖∞ √ m √ tnδ∆t+ ‖ȳn‖∞ √ m √ tncδ∆t r+1
≤ tnmδ∆t+ ‖ȳn‖∞ √ m √ tnδ∆t (for ∆t << 1 as r ≥ 1/2) ≤ mδ∆t+ ‖ȳn‖∞ √ mδ∆t.
(31)
Next, we consider long-term dependencies by setting k << n and estimating,∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ m√nkδ∆t2 + 3m√nk(n− k)δ∆tr+2 + ‖ȳn‖∞√mδ∆t 32 + 3‖ȳn‖∞√mnδ∆tr+ 32 ≤ m √ tnδ∆t 3 2 + 3mt 3 2 n δ∆t r+ 12 + ‖ȳn‖∞ √ mδ∆t 3 2 + 3‖ȳn‖∞ √ mtnδ∆t r+ 12
≤ mδ∆t 32 + 3mδ∆tr+ 12 + ‖ȳn‖∞ √ mδ∆t 3 2 + 3‖ȳn‖∞ √ mδ∆tr+ 1 2 (as tn < 1) ≤ 3mδ∆tr+ 12 + 3‖ȳn‖∞ √ mδ∆tr+ 1 2 (as r ≤ 1 and ∆t << 1).
(32) Thus, in all cases, we have that,∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ 3δ∆t (m+√m‖ȳn‖∞) (as r ≥ 1/2). (33) Applying the above estimate in (10) allows us to bound the gradient by,∣∣∣∣∂En∂θ ∣∣∣∣ ≤ ∑ 1≤k≤n ∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ 3δtn (m+√m‖ȳn‖∞) . (34)
Therefore, the gradient of the loss function (6) can be bounded as,∣∣∣∣∂E∂θ ∣∣∣∣ ≤ 1N N∑ n=1 ∣∣∣∣∂En∂θ ∣∣∣∣
≤ 3δ [ m∆t
N N∑ n=1 n+ √ m∆t N N∑ n=1
‖ȳn‖∞n ]
≤ 3δ [ m∆t
N N∑ n=1 n+ √ mȲ∆t N N∑ n=1 n
]
≤ 3 2 δ(N + 1)∆t
( m+ Ȳ √ m )
≤ 3 2 δ(tN + ∆t)
( m+ Ȳ √ m )
≤ 3 2 δ(1 + ∆t)
( m+ Ȳ √ m )
(as tN = 1)
≤ 3 2
( m+ Ȳ √ m ) ,
(35)
which is the desired estimate (9).
E.4 ON THE ASSUMPTION (8) AND TRAINING
Note that all the estimates were based on the fact that we were able to choose a time step ∆t in (3) that enforces the condition (8). For any fixed weights W, W, we can indeed choose such a value of ∆t to satisfy (8). However, we train the RNN to find the weights that minimize the loss function (6). Can we find a hyperparameter ∆t such that (8) is satisfied at every step of the stochastic gradient descent method for training?
To investigate this issue, we consider a simple gradient descent method of the form:
θℓ+1 = θℓ − ζ ∂E/∂θ(θℓ). (36)
Note that ζ is the constant (non-adapted) learning rate. We assume for simplicity that θ0 = 0 (other choices lead to the addition of a constant). Then, a straightforward estimate on the weight is given by,
|θ`+1| ≤ |θ`|+ ζ ∣∣∣∣∂E∂θ (θ`) ∣∣∣∣ ≤ |θ`|+ ζ 3
2
( m+ Ȳ √ m ) (by (35))
≤ |θ0|+ `ζ 3
2
( m+ Ȳ √ m ) = `ζ 3
2
( m+ Ȳ √ m ) .
(37)
In order to calculate the minimum number of steps L in the gradient descent method (36) such that the condition (8) is satisfied, we set ` = L in (37) and applying it to the condition (8) leads to the straightforward estimate,
L ≥ 1 / ( ζ (3/2)(m + Ȳ√m) m ∆t^(1−r) δ ). (38)
Note that the parameter δ < 1, while in general, the learning rate ζ << 1. Thus, as long as r ≤ 1, we see that the assumption (8) holds for a large number of steps of the gradient descent method. We remark that the above estimate (38) is a large underestimate on L. In the experiments presented in this article, we are able to take a very large number of training steps, while the gradients remain within a range (see Fig. 3).
E.5 PROOF OF PROPOSITION 3.3
We start with the following decomposition of the recurrent matrices:
∂Xi ∂Xi−1 = Mi−1 + ∆tM̃i−1,
Mi−1 :=
[ I ∆tCi−1
Bi−1 Ci−1
] , M̃i−1 := [ Bi−1 0
0 0
] ,
with B,C defined in (12). By the assumption (8), one can readily check that ‖M̃i−1‖∞ ≤ ∆t, for all k ≤ i ≤ n− 1. We will use an induction argument to show the following representation formula for the product of Jacobians,
∂Xn ∂Xk
= ∏
k<i≤n
∂Xi ∂Xi−1 = I ∆t n−1∑ j=k k∏ i=j Ci
Bn−1 + k∑
j=n−2 ( j+1∏ i=n−1 Ci ) Bj
k∏ i=n−1 Ci +O(∆t). (39) We start by the outermost product and calculate,
∂Xn ∂Xn−1 ∂Xn−1 ∂Xn−2
= ( Mn−1 + ∆tM̃n−1 )( Mn−2 + ∆tM̃n−2 ) = Mn−1Mn−2 + ∆t(M̃n−1Mn−2 +Mn−1M̃n−2) +O(∆t2).
By direct multiplication, we obtain,
Mn−1Mn−2 =
[ I ∆t (Cn−2 + Cn−1Cn−2)
Bn−1 + Cn−1Bn−2 Cn−1Cn−2 ] + ∆t [ Cn−1Bn−2 0
0 Bn−1Cn−2
] .
Using the definitions in (12) and (8), we can easily see that[ Cn−1Bn−2 0
0 Bn−1Cn−2
] = O(∆t).
Similarly, it is easy to show that
M̃n−1Mn−2,Mn−1M̃n−2 ∼ O(∆t).
Plugging all the above estimates yields,
∂Xn ∂Xn−1 ∂Xn−1 ∂Xn−2 =
[ I ∆t (Cn−2 + Cn−1Cn−2)
Bn−1 + Cn−1Bn−2 Cn−1Cn−2
] +O(∆t2),
which is exactly the form of the leading term (39).
Iterating the above calculations (n− k) times and realizing that (n− k)∆t2 ≈ n∆t2 = tn∆t yields the formula (39).
Recall that we have set θ = Wi,j , for some 1 ≤ i, j ≤ m in proposition 3.3. Directly calculating with (27), (28) and the representation formula (39) yields the formula,
∂E (k) n
∂θ = y>n∆t 2δZi,jm,m(Ak−1)yk−1 + y > n∆t 2δC∗Zi,jm,m(Ak−1)yk−1 +O(∆t3), (40)
with matrix C∗ defined as,
C∗ := n−1∑ j=k k∏ i=j Ci,
and Zi,jm,m(Ak−1) ∈ Rm×m is a matrix with all elements are zero except for the (i, j)-th entry which is set to σ′(aik−1), i.e. the i-th entry of σ ′(Ak−1).
Note that the formula (40) can be explicitly written as,
∂E (k) n
∂θ = δ∆t2σ′(aik−1)y i ny j k−1 + δ∆t 2σ′(aik−1) m∑ `=1 C∗`iy ` ny j k−1 +O(∆t3), (41)
with yjn denoting the j-th element of vector yn, and
aik−1 := m∑ `=1 Wi`y ` k−1 + m∑ `=1 Wi`z ` k−1. (42)
By the assumption (8), we can readily see that
‖W‖∞, ‖W‖∞ ≤ 1 + ∆t. Therefore by the fact that σ′ = sech2, the assumption yik = O( √ tk) and (42), we obtain,
ĉ = sech2( √ k∆t(1 + ∆t) ≤ σ′(ak−1i ) ≤ 1. (43)
Using (43) in (41), we obtain,
δ∆t2σ′(aik−1)y i ny j k−1 = O
( ĉδ∆t 5 2 ) . (44)
Using the definition of Ci, we can expand the product in C∗ and neglect terms of order O(∆t4), to obtain
k∏ i=j Ci = (O(1) +O((j − k + 1)δ∆t2))I.
Summing over j and using the fact that k << n, we obtain that
C∗ = (O(n) +O(δ∆t0))I. (45) Plugging (45) and (43) into (41) leads to,
δ∆t2σ′(aik−1) m∑ `=1 C∗`iy ` ny j k−1 = O ( ĉδ∆t 3 2 ) +O ( ĉδ2∆t 5 2 ) . (46)
Combining (44) and (46) yields the desired estimate (16).
Remark. A careful examination of the above proof reveals that the constants hidden in the prefactors of the leading term O ( ĉδ∆t 3 2 ) of (16) stem from the formula (46). Here, we have used the assumption that yik = O( √ tk). Note that this assumption implicitly assumes that the energy bound (5) is equidistributed among all the elements of the vector yk and results in the obfuscation of the constants in the leading term of (16). Given that the energy bound (5) is too coarse to allow for precise upper and lower bounds on each individual element of the hidden state vector yk, we do not see any other way of, in general, determining the distribution of energy among individual entries of the hidden state vector. Thus, assuming equidistribution seems reasonable. On the other hand, in practice, one has access to all the terms in formula (46) for each numerical experiment and if one is interested, then one can directly evaluate the precise bound on the leading term of the formula (16).
F RIGOROUS ESTIMATES FOR THE RNN (3) WITH n̄ = n− 1 AND GENERAL VALUES OF , γ
In this section, we will provide rigorous estimates, similar to that of propositions 3.1, E.1 and 3.2 for the version of coRNN (3) that results by setting n̄ = n− 1 in (3) leading to,
yn = yn−1 + ∆tzn, zn = zn−1 + ∆tσ(Wyn−1 + Wzn−1 + Vun + b) − ∆tγyn−1 − ∆tεzn−1. (47)
Note that (47) can be equivalently written as,
yn = yn−1 + ∆tzn, zn = (1 − ε∆t)zn−1 + ∆tσ(Wyn−1 + Wzn−1 + Vun + b) − ∆tγyn−1. (48)
We will also consider the case of non-unit values of the control parameters γ and ε below.
Bounds on Hidden states. We start the following bound on the hidden states of (47),
Proposition F.1 Let the damping parameter ε > 1/2 and the time step ∆t in the RNN (47) satisfy the following condition,
∆t < (2ε − 1)/(γ + ε²). (49)
Let yn, zn be the hidden states of the RNN (47) for 1 ≤ n ≤ N; then the hidden states satisfy the following (energy) bounds:
yn⊤yn + (1/γ) zn⊤zn ≤ mtn/γ. (50)
We set A_{n−1} = Wy_{n−1} + Wz_{n−1} + Vu_n + b and, as in the proof of proposition 3.1, we multiply (47) by (y_{n−1}^T, (1/γ) z_n^T), use elementary identities and rearrange terms to obtain,
y_n^T y_n/2 + z_n^T z_n/(2γ) = y_{n−1}^T y_{n−1}/2 + z_{n−1}^T z_{n−1}/(2γ) + (y_n − y_{n−1})^T (y_n − y_{n−1})/2 − (z_n − z_{n−1})^T (z_n − z_{n−1})/(2γ) + (∆t/γ) z_n^T σ(A_{n−1}) − (ε∆t/γ) z_n^T z_n + (ε∆t/γ) z_n^T (z_n − z_{n−1}).
We use a rescaled version of the well-known Cauchy’s inequality
ab ≤ c a²/2 + b²/(2c),
for a constant c > 0 to be determined, to rewrite the above identity as
y_n^T y_n/2 + z_n^T z_n/(2γ) ≤ y_{n−1}^T y_{n−1}/2 + z_{n−1}^T z_{n−1}/(2γ) + (y_n − y_{n−1})^T (y_n − y_{n−1})/2 + ( ε∆t/(2cγ) − 1/(2γ) ) (z_n − z_{n−1})^T (z_n − z_{n−1}) + (∆t/(2γ)) σ(A_{n−1})^T σ(A_{n−1}) + ( ∆t/(2γ) + cε∆t/(2γ) − ε∆t/γ ) z_n^T z_n.
Using the first equation in (47), the above inequality reduces to,
y_n^T y_n/2 + z_n^T z_n/(2γ) ≤ y_{n−1}^T y_{n−1}/2 + z_{n−1}^T z_{n−1}/(2γ) + ( ε∆t/(2cγ) − 1/(2γ) ) (z_n − z_{n−1})^T (z_n − z_{n−1}) + (∆t/(2γ)) σ(A_{n−1})^T σ(A_{n−1}) + ( ∆t²/2 + ∆t/(2γ) + cε∆t/(2γ) − ε∆t/γ ) z_n^T z_n.
As long as,
∆t ≤ min( c/ε , ((2 − c)ε − 1)/γ ), (51)
we can easily check that,
y_n^T y_n/2 + z_n^T z_n/(2γ) ≤ y_{n−1}^T y_{n−1}/2 + z_{n−1}^T z_{n−1}/(2γ) + (∆t/(2γ)) σ(A_{n−1})^T σ(A_{n−1}) ≤ y_{n−1}^T y_{n−1}/2 + z_{n−1}^T z_{n−1}/(2γ) + m∆t/(2γ)   (as σ² ≤ 1).
Iterating the above bound till n = 0 and using the zero initial data yields the desired (50) as long as we find a c such that the condition (51) is satisfied. To do so, we equalize the two terms on the right hand side of (51) to obtain,
c = ε(2ε − 1)/(γ + ε²).
From the assumption (49) and the fact that ε > 1/2, we see that such a c > 0 always exists for any value of γ > 0 and (51) is satisfied, which completes the proof.
We remark that the same bound on the hidden states is obtained for both versions of coRNN, i.e. (3) with n̄ = n and (47). However, the difference lies in the constraint on the time step ∆t. In contrast to (49), a careful examination of the proof of proposition 3.1 reveals that the condition on the time step for the stability of (3) with n̄ = n is given by,
∆t < (2ε − 1)/γ, (52)
and is clearly less stringent than the condition (51) for the stability of (47). For instance, in the prototypical case of γ = ε = 1, the stability of (3) with n̄ = n is ensured for any ∆t < 1. On the other hand, the stability of (47) is ensured as long as ∆t < 1/2. However, it is essential to recall that these conditions are only sufficient to ensure stability and are by no means necessary. Thus, in practice, the coRNN version (47) is found to be stable in the same range of time steps as the version (3) with n̄ = n.
On the exploding and vanishing gradient problems for coRNN (47). Next, we have the following upper bound on the hidden state gradients for the version (47) of coRNN.
Proposition F.2 Let yn, zn be the hidden states generated by the RNN (47). We assume that the damping parameter ε > 1/2 and that the time step ∆t can be chosen such that, in addition to (51), it also satisfies
max{ ∆t(γ + ‖W‖∞), ∆t‖W‖∞ } = η ≤ C̃ ∆t^r,  1/2 ≤ r ≤ 1, (53)
with the constant C̃ independent of the other parameters of the RNN (47). Then the gradient of the loss function E (6) with respect to any parameter θ ∈ Θ is bounded as
|∂E/∂θ| ≤ 3(C̃)(m + Ȳ√m)/(2γ), (54)
with the constant C̃ defined in (53) and Ȳ = max_{1≤n≤N} ‖ȳ_n‖∞ a bound on the underlying training data.
The proof of this proposition is completely analogous to the proof of proposition 3.2 and we omit the details here.
Note that the bound (54) enforces that hidden state gradients cannot explode for version (47) of coRNN. A similar statement for the vanishing gradient problem is inferred from the proposition below.
Proposition F.3 Let yn be the hidden states generated by the RNN (47). Under the assumption that y^i_n = O(√(t_n/γ)), for all 1 ≤ i ≤ m, and (53), the gradient for long-term dependencies satisfies
∂E_n^(k)/∂θ = O( (ĉ/γ) ∆t^{3/2} ) + O( (ĉ/γ) δ(1 + δ) ∆t^{5/2} ) + O(∆t³),  ĉ = sech²( √(k∆t) (1 + ∆t) ),  k << n. (55)
The proof is a repetition of the steps of the proof of proposition 3.3, with suitable modifications for the structure of the RNN and non-unit ε, γ, and we omit the tedious calculations here. Note that (55) rules out the vanishing gradient problem for the coRNN version (47). | 1. What is the novel contribution of the paper regarding recurrent neural networks?
2. What are the strengths of the paper, particularly in its proof and experiments?
3. What are the minor objections raised by the reviewer?
4. How does the reviewer assess the expressivity of the proposed architecture?
5. What is the purpose of the exponent r in Eq. 8?
6. Are the reported results representative of different initial seeds? | Review | Review
The paper introduces a novel recurrent neural network architecture which approximately preserves the norm of the gradient irrespective of the number of unroll steps. This complements a rapidly growing line of research that aims to better understand dynamical properties of RNNs and their gradients, thus potentially enabling training of models that capture long term dependencies while avoiding exploding and vanishing gradients.
The submission has the following strengths to it:
It offers a clear and succinct proof of the gradient stability as well as the stability of the forward dynamics.
It provides convincing experiments on relevant datasets, all while showing competitive results.
It's exceedingly well written, and readily understandable even without prior knowledge.
I firmly believe that, based on those, it should be accepted. It has all the hallmarks of a good paper with potentially wide application.
That being said, I do have some minor objections:
When presenting Eq. (1), the 'intuitive' interpretation of γ, ϵ should be given right away, rather than deferred to later sections of the paper, especially the appendix.
The motivation for the use of non-linear oscillators is well-written but perhaps should be de-emphasized. I would like to put-forward the following argument for it. It appears that the choice of the dynamics only constitutes half of the 'puzzle'. The choice of IMEX is mentioned en passant, but seems to be rather crucial to obtaining the theoretical guarantees viz. Proposition 3.2 and 3.3. I did not have time to check the derivation against other schemes, but I presume that the choice of IMEX was highly non-trivial in designing the new architecture. If that is indeed the case, then the role of the solver ought to be emphasized.
I appreciate the authors' comments regarding the expressivity of the proposed architecture, as well as the demonstrations in Appendix B. However, I would also appreciate a simple example of the kind of dynamics where the coRNN cell ought to break down -- given the corollary of Proposition 3.1, it would be interesting to show the potential break-down of the network on a task that involves approximating a chaotic dynamical system. In particular, it would be very interesting to see how coRNN fares against a similarly sized gated RNN (LSTM or GRU).
For my own understanding, is the exponent r in Eq. 8 there only to conveniently relate η to Δt? If not, does it admit some more intuitive interpretation?
Since the proposed architecture requires a sufficiently small step-size, are the resultant dynamics equivalent (in some suitable sense, be it topologically, or having the same invariant set) to the continuous-time dynamics?
Lastly, the authors report the hidden unit dimensionality, but from the main text it is entirely unclear whether that is the dimensionality of y or of [y⊤, z⊤]⊤. Having looked at the code, it appears to be the former.
Edit:
Out of curiosity I ran the submitted code for the permuted sequential MNIST task, and noticed the following:
The numbers that the authors report in the paper seem to be result of a single network realization. While granted, this is somewhat consistent with practices common in the community, it makes one question how representative they are of different initial seeds. In the current setting it's hard to disambiguate whether the random seed was chosen coincidentally or rather specifically because of the purported state-of-the-art outcome.
For this reason I suggest the authors compute additional iterates of the model and report some distributional information about the best loss/accuracy for all the tasks covered in the submission.
Attaining "state-of-the-art" results is notable, but is by no means pre-requisite for this to be considered a good submission. |
ICLR | Title
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Abstract
Circuits of biological neurons, such as in the functional parts of the brain can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.
1 INTRODUCTION
Recurrent neural networks (RNNs) have achieved tremendous success in a variety of tasks involving sequential (time series) inputs and outputs, ranging from speech recognition to computer vision and natural language processing, among others. However, it is well known that training RNNs to process inputs over long time scales (input sequences) is notoriously hard on account of the so-called exploding and vanishing gradient problem (EVGP) (Pascanu et al., 2013), which stems from the fact that the well-established BPTT algorithm for training RNNs requires computing products of gradients (Jacobians) of the underlying hidden states over very long time scales. Consequently, the overall gradient can grow (to infinity) or decay (to zero) exponentially fast with respect to the number of recurrent interactions.
A variety of approaches have been suggested to mitigate the exploding and vanishing gradient problem. These include adding gating mechanisms to the RNN in order to control the flow of information in the network, leading to architectures such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurring units (GRU) (Cho et al., 2014), that can overcome the vanishing gradient problem on account of the underlying additive structure. However, the gradients might still explode and learning very long term dependencies remains a challenge (Li et al., 2018). Another popular approach for handling the EVGP is to constrain the structure of underlying recurrent weight matrices by requiring them to be orthogonal (unitary), leading to the so-called orthogonal RNNs (Henaff et al., 2016; Arjovsky et al., 2016; Wisdom et al., 2016; Kerg et al., 2019) and references therein. By construction, the resulting Jacobians have eigen- and singular-spectra with unit norm, alleviating the EVGP. However as pointed out by Kerg et al. (2019), imposing such constraints on the recurrent matrices may lead to a significant loss of expressivity of the RNN resulting in inadequate performance on realistic tasks.
In this article, we adopt a different approach, based on observation that coupled networks of controlled non-linear forced and damped oscillators, that arise in many physical, engineering and biological
systems, such as networks of biological neurons, do seem to ensure expressive representations while constraining the dynamics of state variables and their gradients. This motivates us to propose a novel architecture for RNNs, based on time-discretizations of second-order systems of non-linear ordinary differential equations (ODEs) (1) that model coupled oscillators. Under verifiable hypotheses, we are able to rigorously prove precise bounds on the hidden states of these RNNs and their gradients, enabling a possible solution of the exploding and vanishing gradient problem, while demonstrating through benchmark numerical experiments, that the resulting system still retains sufficient expressivity, i.e. ability to process complex inputs, with a competitive performance, with respect to the state of the art, on a variety of sequential learning tasks.
2 THE PROPOSED RNN
Our proposed RNN is based on the following second-order system of ODEs,
y′′ = σ(Wy + Wy′ + Vu + b) − γy − εy′. (1) Here, t ∈ [0, 1] is the (continuous) time variable, u = u(t) ∈ Rd is the time-dependent input signal, y = y(t) ∈ Rm is the hidden state of the RNN, W, W ∈ Rm×m, V ∈ Rm×d are weight matrices, b ∈ Rm is the bias vector and 0 < γ, ε are parameters, representing the oscillation frequency and the amount of damping (friction) in the system, respectively. σ : R → R is the activation function, set to σ(u) = tanh(u) here. By introducing the so-called velocity variable z = y′(t) ∈ Rm, we rewrite (1) as the first-order system:
y′ = z, z′ = σ(Wy + Wz + Vu + b) − γy − εz. (2) We fix a timestep 0 < ∆t < 1 and define our proposed RNN hidden states at time tn = n∆t ∈ [0, 1] (while omitting the affine output state) as the following IMEX (implicit-explicit) discretization of the first-order system (2):
yn = yn−1 + ∆t zn, zn = zn−1 + ∆t σ(Wyn−1 + Wzn−1 + Vun + b) − ∆t γ yn−1 − ∆t ε zn̄, (3)
with either n̄ = n or n̄ = n − 1. Note that the only difference between the two versions of the RNN (3) lies in the implicit (n̄ = n) or explicit (n̄ = n − 1) treatment of the damping term −εz in (2), whereas both versions retain the implicit treatment of the first equation in (2).
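To make the discretization concrete, the following NumPy sketch unrolls (3) over an input sequence for either choice of n̄; it is a minimal illustration, not the authors' code, and `W_vel` stands in for the second hidden-to-hidden weight matrix:

```python
import numpy as np

def cornn_forward(inputs, W, W_vel, V, b, dt=0.01, gamma=1.0, eps=1.0, implicit=True):
    """Unroll coRNN (3). inputs: (T, d) array; returns hidden states y of shape (T, m).
    implicit=True is n_bar = n (implicit damping), implicit=False is n_bar = n-1."""
    m = W.shape[0]
    y, z, ys = np.zeros(m), np.zeros(m), []
    for u in inputs:
        A = W @ y + W_vel @ z + V @ u + b
        if implicit:
            z = (z + dt * np.tanh(A) - dt * gamma * y) / (1.0 + eps * dt)
        else:
            z = z + dt * np.tanh(A) - dt * gamma * y - dt * eps * z
        y = y + dt * z
        ys.append(y.copy())
    return np.stack(ys)
```

An affine output layer applied to y (not shown) would produce the prediction at each step.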
Motivation and background. To see that the underlying ODE (2) models a coupled network of controlled forced and damped nonlinear oscillators, we start with the single neuron (scalar) case by setting d = m = 1 in (1) and assume an identity activation function σ(x) = x. Setting W = W = V = b = ε = 0 leads to the simple ODE, y′′ + γy = 0, which exactly models simple harmonic motion with frequency γ, for instance that of a mass attached to a spring (Guckenheimer & Holmes, 1990). Letting ε > 0 in (1) adds damping or friction to the system (Guckenheimer & Holmes, 1990). Then, by introducing non-zero V in (1), we drive the system with a driving force proportional to the input signal u(t). The parameters V, b modulate the effect of the driving force, W controls the frequency of oscillations and W the amount of damping in the system. Finally, the tanh activation mediates a non-linear response in the oscillator. In the coupled network (2) with m > 1, each neuron updates its hidden state based on the input signal as well as information from other neurons. The diagonal entries of W (and the scalar hyperparameter γ) control the frequency whereas the diagonal entries of W (and the hyperparameter ε) determine the amount of damping for each neuron, respectively, whereas the non-diagonal entries of these matrices modulate interactions between neurons. Hence, given this behavior of the underlying ODE (2), we term the RNN (3) as a coupled oscillatory Recurrent Neural Network (coRNN).
The dynamics of the ODE (2) (and the RNN (3)) for a single neuron are relatively straightforward. As we illustrate in Fig. 6 of supplementary material SM§C, input signals drive the generation of (superpositions of) oscillatory wave-forms, whose amplitude and (multiple) frequencies are controlled by the tunable parameters W,W,V,b. Adding a tanh activation does not change these dynamics much. This is in contrast to truncating tanh to leading non-linear order by setting σ(x) = x− x3/3, which yields a Duffing type oscillator that is characterized by chaotic behavior (Guckenheimer & Holmes, 1990). Adding interactions between neurons leads to further accentuation of this generation of superposed wave forms (see Fig. 6 in SM§C) and even with very simple network topologies, one
sees the emergence of non-trivial non-oscillatory hidden states from oscillatory inputs. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics. Hence, we argue that the ability of a network of (forced, driven) oscillators to access a very rich set of output states may lead to high expressivity of the system, allowing it to approximate outputs from complicated sequential inputs.
Oscillator networks are ubiquitous in nature and in engineering systems (Guckenheimer & Holmes, 1990; Strogatz, 2015) with canonical examples being pendulums (classical mechanics), business cycles (economics), heartbeat (biology) for single oscillators and electrical circuits for networks of oscillators. Our motivating examples arise in neurobiology, where individual biological neurons can be viewed as oscillators with periodic spiking and firing of the action potential. Moreover, functional circuits of the brain, such as cortical columns and prefrontal-striatal-hippocampal circuits, are being increasingly interpreted by networks of oscillatory neurons, see Stiefel & Ermentrout (2016) for an overview. Following well-established paths in machine learning, such as for convolutional neural networks (LeCun et al., 2015), our focus here is to abstract the essence of functional brain circuits being networks of oscillators and design an RNN based on much simpler mechanistic systems, such as those modeled by (2), while ignoring the complicated biological details of neural function.
Related work. There is an increasing trend of basing RNN architectures on ODEs and dynamical systems. These approaches can roughly be classified into two branches, namely RNNs based on discretized ODEs and continuous-time RNNs. Examples of continuous-time approaches include neural ODEs (Chen et al., 2018) with ODE-RNNs (Rubanova et al., 2019) as its recurrent extension as well as E (2017) and references therein, to name just a few. We focus, however, in this article on an ODE-inspired discrete-time RNN, as the proposed coRNN is derived from a discretization of the ODE (1). A good example for a discrete-time ODE-based RNNs is the so-called anti-symmetric RNN of Chang et al. (2019), where the RNN architecture is based on a stable ODE resulting from a skew-symmetric hidden weight matrix, thus constraining the stable (gradient) dynamics of the network. This approach has much in common with previously mentioned unitary/orthogonal/nonnormal RNNs in constraining the structure of the hidden-to-hidden layer weight matrices. However, adding such strong constraints might reduce expressivity of the resulting RNN and might lead to inadequate performance on complex tasks. In contrast to these approaches, our proposed coRNN does not explicitly constrain the weight matrices but relies on the dynamics of the underlying ODE (and the IMEX discretization (3)), to provide gradient stability. Moreover, no gating mechanisms as in LSTMs/GRUs are used in the current version of coRNN. There is also an increasing interest in designing hybrid methods, which use a discretization of an ODE (in particular a Hamiltonian system) in order to learn the continuous representation of the data, see for instance Greydanus et al. (2019); Chen et al. (2020). Overall, our approach here differs from these papers in our use of networks of oscillators to build the RNN.
3 RIGOROUS ANALYSIS OF THE PROPOSED RNN
An attractive feature of the underlying ODE system (2) lies in the fact that the resulting hidden states (and their gradients) are bounded (see SM§D for precise statements and proofs). Hence, one can expect that a suitable discretization of the ODE (2) that preserves these bounds will not have exploding gradients. We claim that one such structure preserving discretization is given by the IMEX discretization that results in the RNN (3) and proceed to derive bounds on this RNN below.
Following standard practice, we set y(0) = z(0) = 0 and, purely for simplicity of exposition, we set the control parameters ε = γ = 1 and n̄ = n in (3), leading to,
yn = yn−1 + ∆t zn,  zn = zn−1/(1+∆t) + (∆t/(1+∆t)) σ(A_{n−1}) − (∆t/(1+∆t)) yn−1,  A_{n−1} := Wyn−1 + Wzn−1 + Vun + b. (4)
Analogous results and proofs for the case where n̄ = n − 1 and for general values of ε, γ are provided in SM§F.
Bounds on the hidden states. As with the underlying ODE (2), the hidden states of the RNN (3) are bounded, i.e.
Proposition 3.1 Let yn, zn be the hidden states of the RNN (4) for 1 ≤ n ≤ N , then the hidden states satisfy the following (energy) bounds:
y_n^T y_n + z_n^T z_n ≤ nm∆t = m t_n ≤ m. (5)
The proof of the energy bound (5) is provided in SM§E.1 and a straightforward variant of the proof (see SM§E.2) yields an estimate on the sensitivity of the hidden states to changing inputs. As with the underlying ODE (see SM§D) , this bound rules out chaotic behavior of hidden states.
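The energy bound (5) is easy to probe numerically; the sketch below (our own illustration, with randomly drawn weights, not the authors' code) checks it along a trajectory with ε = γ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, N, dt = 32, 2, 500, 1.0 / 500
W, Wv, V, b = (rng.uniform(-1, 1, s) for s in [(m, m), (m, m), (m, d), (m,)])
y, z = np.zeros(m), np.zeros(m)
for n in range(1, N + 1):
    u = rng.uniform(-1, 1, d)
    A = W @ y + Wv @ z + V @ u + b
    z = (z + dt * np.tanh(A) - dt * y) / (1.0 + dt)   # (4): implicit damping, eps = gamma = 1
    y = y + dt * z
    assert y @ y + z @ z <= m * n * dt + 1e-9         # energy bound (5): <= m * t_n
```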
Bounds on hidden state gradients. We train the RNN (3) to minimize the loss function,
E := (1/N) ∑_{n=1}^{N} E_n,  E_n = (1/2) ‖y_n − ȳ_n‖₂², (6)
with ȳ being the underlying ground truth (training data). During training, we compute gradients of the loss function (6) with respect to the weights and biases Θ = [W,W,V,b], i.e.
∂E/∂θ = (1/N) ∑_{n=1}^{N} ∂E_n/∂θ, ∀ θ ∈ Θ. (7)
Proposition 3.2 Let yn, zn be the hidden states generated by the RNN (4). We assume that the time step ∆t << 1 can be chosen such that,
max{ ∆t(1 + ‖W‖∞)/(1 + ∆t) , ∆t‖W‖∞/(1 + ∆t) } = η ≤ ∆t^r,  1/2 ≤ r ≤ 1. (8)
Denoting δ = 1/(1+∆t), the gradient of the loss function E (6) with respect to any parameter θ ∈ Θ is bounded as
|∂E/∂θ| ≤ (3/2)(m + Ȳ√m), (9)
with Ȳ = max_{1≤n≤N} ‖ȳ_n‖∞ a bound on the underlying training data.
Sketch of the proof. Denoting Xn = [yn, zn], we can apply the chain rule repeatedly (for instance as in Pascanu et al. (2013)) to obtain,
∂E_n/∂θ = ∑_{1≤k≤n} (∂E_n/∂X_n)(∂X_n/∂X_k)(∂⁺X_k/∂θ), where each summand is denoted by ∂E_n^(k)/∂θ. (10)
Here, the notation ∂⁺X_k/∂θ refers to taking the partial derivative of X_k with respect to the parameter θ, while keeping the other arguments constant. This quantity can be readily calculated from the structure of the RNN (4) and is presented in the detailed proof provided in SM§E.3. From (6), we can directly compute that ∂E_n/∂X_n = [y_n − ȳ_n, 0]. Repeated application of the chain rule and a direct calculation with (4) yields,
∂X_n/∂X_k = ∏_{k<i≤n} ∂X_i/∂X_{i−1},  ∂X_i/∂X_{i−1} = [ I + ∆tB_{i−1} , ∆tC_{i−1} ; B_{i−1} , C_{i−1} ], (11)
where I is the identity matrix and
Bi−1 = δ∆t (diag(σ ′(Ai−1))W − I) , Ci−1 = δ (I + ∆tdiag(σ′(Ai−1))W) . (12)
It is straightforward to calculate, using the assumption (8), that ‖B_{i−1}‖∞ < η and ‖C_{i−1}‖∞ ≤ η + δ. Using the definitions of matrix norms and (8), we obtain:
‖∂X_i/∂X_{i−1}‖∞ ≤ max(1 + ∆t(‖B_{i−1}‖∞ + ‖C_{i−1}‖∞), ‖B_{i−1}‖∞ + ‖C_{i−1}‖∞) ≤ max(1 + ∆t(δ + 2η), δ + 2η) ≤ 1 + 3∆t^r. (13)
Therefore, using (11), we have ‖∂X_n/∂X_k‖∞ ≤ ∏_{k<i≤n} ‖∂X_i/∂X_{i−1}‖∞ ≤ (1 + 3∆t^r)^{n−k} ≈ 1 + 3(n − k)∆t^r. (14)
Note that we have used an expansion around 1 and neglected terms of O(∆t2r) as ∆t << 1. We remark that the bound (13) is the crux of our argument about gradient control as we see from the structure of the RNN that the recurrent matrices have close to unit norm. The detailed proof is presented in SM§E.3. As the entire gradient of the loss function (6), with respect to the weights and biases of the network, is bounded above in (9), the exploding gradient problem is mitigated for this RNN.
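The quantities entering (8) and (13) can also be monitored directly on trained weights; the following is an illustrative sketch of such a check (names such as `check_gradient_bounds` and `Wv` for the second hidden weight matrix are placeholders, not part of the paper):

```python
import numpy as np

def check_gradient_bounds(W, Wv, A, dt, r=1.0):
    """Report whether assumption (8) holds and whether the Jacobian built from
    (11)-(12) at pre-activation A satisfies the norm bound (13)."""
    m = W.shape[0]
    delta = 1.0 / (1.0 + dt)
    eta = max(dt * (1 + np.linalg.norm(W, np.inf)) / (1 + dt),
              dt * np.linalg.norm(Wv, np.inf) / (1 + dt))
    D = np.diag(1.0 / np.cosh(A) ** 2)                 # diag(sigma'(A)) for sigma = tanh
    B = delta * dt * (D @ W - np.eye(m))
    C = delta * (np.eye(m) + dt * D @ Wv)
    J = np.block([[np.eye(m) + dt * B, dt * C], [B, C]])
    return eta <= dt ** r, np.linalg.norm(J, np.inf) <= 1 + 3 * dt ** r
```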
On the vanishing gradient problem. The vanishing gradient problem (Pascanu et al., 2013) arises if |∂E_n^(k)/∂θ|, defined in (10), → 0 exponentially fast in k, for k << n (long-term dependencies). In that case, the RNN does not have long-term memory, as the contribution of the k-th hidden state to the error at time step tn is infinitesimally small. We already see from (14) that ‖∂X_n/∂X_k‖∞ ≈ 1 (independently of k). Thus, we should not expect the products in (10) to decay fast. In fact, we will provide a much more precise characterization of this gradient. To this end, we introduce the following order-notation,
β = O(α), for α, β ∈ R⁺, if there exist constants C, C̄ such that Cα ≤ β ≤ C̄α; M = O(α), for M ∈ R^{d1×d2}, α ∈ R⁺, if there exists a constant C such that ‖M‖ ≤ Cα. (15)
For simplicity of notation, we will also set ȳn = un ≡ 0, for all n, b = 0 and r = 1 in (8) and we will only consider θ = Wi,j for some 1 ≤ i, j ≤ m in the following proposition.
Proposition 3.3 Let yn be the hidden states generated by the RNN (4). Under the assumption that y^i_n = O(√t_n), for all 1 ≤ i ≤ m, and (8), the gradient for long-term dependencies satisfies
∂E_n^(k)/∂θ = O( ĉ δ ∆t^{3/2} ) + O( ĉ δ(1 + δ) ∆t^{5/2} ) + O(∆t³),  ĉ = sech²( √(k∆t) (1 + ∆t) ),  k << n. (16)
This precise bound (16) on the gradient shows that although the gradient can be small, i.e. O(∆t^{3/2}), it is in fact independent of k, ensuring that long-term dependencies contribute to gradients at much later steps and mitigating the vanishing gradient problem. The detailed proof is presented in SM§E.5.
Summarizing, we see that the RNN (3) indeed satisfies similar bounds to the underlying ODE (2) that resulted in upper bounds on the hidden states and its gradients. However, the lower bound on the gradient (16) is due to the specific choice of this discretization and does not appear to have a continuous analogue, making the specific choice of discretization of (2) crucial for mitigating the vanishing gradient problem.
4 EXPERIMENTS
We present results on a variety of learning tasks with coRNN (3) with n̄ = n − 1, as this version resulted in marginally better performance than the version with n̄ = n. Details of the training procedure for each experiment can be found in SM§B. We wish to clarify here that we use a straightforward hyperparameter tuning protocol based on a validation set and do not use additional performance enhancing tools, such as dropout (Srivastava et al., 2014), gradient clipping (Pascanu et al., 2013) or batch normalization (Ioffe & Szegedy, 2015), which might further improve the performance of coRNNs.
Adding problem. We start with the well-known adding problem (Hochreiter & Schmidhuber, 1997), proposed to test the ability of an RNN to learn (very) long-term dependencies. The input is a two-dimensional sequence of length T , with the first dimension consisting of random numbers drawn from U([0, 1]) and with two non-zero entries (both set to 1) in the second dimension, chosen at random locations, but one each in both halves of the sequence. The output is the sum of two numbers
Figure 1: Results of the adding problem for coRNN, expRNN, FastRNN, anti.sym. RNN and tanh RNN based on three different sequence lengths T, i.e. T = 500, T = 2000 and T = 5000 (each panel plots test MSE against training steps, in hundreds, together with the baseline MSE).
of the first dimension at positions, corresponding to the two 1 entries in the second dimension. We compare the proposed coRNN to three recently proposed RNNs, which were explicitly designed to learn LTDs, namely the FastRNN (Kusupati et al., 2018), the antisymmetric (anti.sym.) RNN (Chang et al., 2019) and the expRNN (Lezcano-Casado & Martínez-Rubio, 2019), and to a plain vanilla tanh RNN, with the goal of beating the baseline mean square error (MSE) of 0.167 (which stems from the variance of the baseline output 1). All methods have 128 hidden units (dimensionality of the hidden state y) and the same training protocol is used in all cases. Fig. 1 shows the results for different lengths T of the input sequences. We can see that while the tanh RNN is not able to beat the baseline for any sequence length, the other methods successfully learn the adding task for T = 500. However, in this case, coRNN converges significantly faster and reaches a lower test MSE than other tested methods. When setting the length to the much more challenging case of T = 2000, we see that only coRNN and the expRNN beat the baseline. However, the expRNN fails to reach a desired test MSE of 0.01 within training time. In order to further demonstrate the superiority of coRNN over recently proposed RNN architectures for learning LTDs, we consider the adding problem for T = 5000 and observe that coRNN converges very quickly even in this case, while expRNN fails to consistently beat the baseline. We thus conclude that the coRNN mitigates the vanishing/exploding gradient problem even for very long sequences.
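For reference, the adding-problem data described above can be generated along the following lines (a hedged sketch; function and argument names are placeholders, not the authors' data pipeline):

```python
import numpy as np

def adding_problem_batch(batch_size, T, rng=np.random.default_rng(0)):
    """Inputs of shape (batch, T, 2) and scalar targets: channel 0 holds U([0,1]) values,
    channel 1 marks one position in each half of the sequence; the target is their sum."""
    x = np.zeros((batch_size, T, 2))
    x[:, :, 0] = rng.uniform(0.0, 1.0, (batch_size, T))
    y = np.zeros(batch_size)
    for i in range(batch_size):
        a, b = rng.integers(0, T // 2), rng.integers(T // 2, T)
        x[i, [a, b], 1] = 1.0
        y[i] = x[i, a, 0] + x[i, b, 0]
    return x, y
```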
Sequential (permuted) MNIST. Sequential MNIST (sMNIST) (Le et al., 2015) is a benchmark for RNNs, in which the model is required to classify an MNIST (LeCun et al., 1998) digit one pixel at a time leading to a classification task with a sequence length of T = 784. In permuted sequential MNIST (psMNIST), a fixed random permutation is applied in order to increase the time-delay between interdependent pixels and to make the problem harder. In Table 1, we compare the test accuracy for coRNN on sMNIST and psMNIST with recently published best case results for other recurrent models, which were explicitly designed to solve long-term dependencies together with baselines corresponding to gated and unitary RNNs. To the best of our knowledge the proposed coRNN outperforms all single-layer recurrent architectures, published in the literature, for both the sMNIST and psMNIST. Moreover in Fig. 2, we present the performance (with respect to number of epochs) of different RNN architectures for psMNIST with the same fixed random permutation and the
same number of hidden units, i.e. 128. As seen from this figure, coRNN clearly outperforms the other architectures, some of which were explicitly designed to learn LTDs, handily for this permutation.
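The sMNIST/psMNIST sequences can be constructed as follows (an illustrative sketch; `images` is assumed to be an array of shape (num_images, 28, 28) with values in [0, 1]):

```python
import numpy as np

def to_pixel_sequences(images, permute=True, seed=0):
    seqs = images.reshape(len(images), 784, 1)           # sMNIST: one pixel per time step
    if permute:
        perm = np.random.default_rng(seed).permutation(784)
        seqs = seqs[:, perm, :]                           # psMNIST: one fixed random permutation
    return seqs
```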
Noise padded CIFAR-10. Another challenging test problem for learning LTDs is the recently proposed noise padded CIFAR-10 experiment by Chang et al. (2019), in which CIFAR-10 data points (Krizhevsky et al., 2009) are fed to the RNN row-wise and flattened along the channels resulting in sequences of length 32. To test the long term memory, entries of uniform random numbers are added such that the resulting sequences have a length of 1000, i.e. the last 968 entries of each sequence are only noise to distract the network. Table 2 shows the result for coRNN together with other recently published best case results. We observe that coRNN readily outperforms other RNN architectures on this benchmark, while requiring only 128 hidden units.
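The noise-padding construction can be sketched as follows (our own illustration of the procedure described above, not the reference implementation):

```python
import numpy as np

def noise_padded_cifar(images, total_len=1000, rng=np.random.default_rng(0)):
    """images: (n, 32, 32, 3) array; returns sequences of shape (n, total_len, 96),
    i.e. 32 informative row-steps followed by uniform-noise distractor steps."""
    n = len(images)
    rows = images.reshape(n, 32, 96)                      # each row: 32 pixels x 3 channels
    noise = rng.uniform(size=(n, total_len - 32, 96))
    return np.concatenate([rows, noise], axis=1)
```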
Human activity recognition. This experiment is based on the human activity recognition data set provided by Anguita et al. (2012). The data set is a collection of tracked human activities, which were measured by an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. Six activities were binarized to obtain two merged classes {Sitting, Laying, Walking_Upstairs} and {Standing, Walking, Walking_Downstairs}, leading to the HAR-2 data set, which was first proposed in Kusupati et al. (2018). Table 3 shows the result for coRNN together with other very recently published best case results on the same data set. We can see that coRNN readily outperforms all other methods. We also ran this experiment on a tiny coRNN with very few parameters, i.e. only 1k. We can see that even in this case, the tiny coRNN beats all baselines. We thus conclude that coRNN can efficiently be used on resource-constrained IoT micro-controllers.
IMDB sentiment analysis. The IMDB data set (Maas et al., 2011) is a collection of 50k movie reviews, where 25k reviews are used for training (with 7.5k of these reviews used for validating) and 25k reviews are used for testing. The aim of this binary sentiment classification task is to decide whether a movie review is positive or negative. We follow the standard procedure by initializing the word embedding with pretrained 100d GloVe (Pennington et al., 2014) vectors and restrict the
dictionary to 25k words. Table 4 shows the results for coRNN and other recently published models, which are trained similarly and have the same number of hidden units, i.e. 128. We can see that coRNN compares favorably with gated baselines (which are known to perform very well on this task), while at the same time requiring significantly fewer parameters.
Further experimental results. To shed further light on the performance of coRNN, we consider the following issues. First, the theory suggested that coRNN mitigates the exploding/vanishing gradient problem as long as the assumptions (8) on the time step ∆t and weight matrices W, W hold. Clearly one can choose a suitable ∆t to enforce (8) before training, but do these assumptions remain valid during training? In SM§E.4, we argue, based on worst-case estimates, that the assumptions will remain valid for possibly a large number of training steps. More pertinently, we can verify experimentally that (8) holds during training. This is demonstrated in Fig. 3, where we show that (8) holds for all LTD tasks during training. Thus, the presented theory applies and one can expect control over hidden state gradients with coRNN. Next, we recall that the frequency parameter γ and damping parameter ε play a role for coRNNs (see SM§F for the theoretical dependence and Table 8 for best performing values of ε, γ for each numerical experiment within the range considered in Table 7). How sensitive is the performance of coRNN to the choice of these 2 parameters? To investigate this dependence, we focus on the noise padded CIFAR-10 experiment and show the results of an ablation study in Fig. 4, where the test accuracy for different coRNNs based on a two dimensional hyperparameter grid (ε, γ) ∈ [0.8, 1.8] × [5.7, 17.7] (i.e., sufficiently large intervals around the best performing values of ε, γ from Table 8) is plotted. We observe from the figure that although there are reductions in test accuracy for non-optimal values of (ε, γ), there is no large variation and the performance is rather robust with respect to these hyperparameters. Finally, note that we follow standard practice and present best reported results with coRNN as well as other competing RNNs in order to compare the relative performance. However, it is natural to investigate the dependence of these best results on the random initial (before training) values of the weight matrices. To this end, in Table 5 of SM, we report the mean and standard deviation (over 10 retrainings) of the test accuracy with coRNN on various learning tasks and find that the mean value is comparable to the best reported value, with low standard deviations. This indicates further robustness of the performance of coRNNs.
5 DISCUSSION
Inspired by many models in physics, biology and engineering, we proposed a novel RNN architecture (3) based on a model (1) of a network of controlled forced and damped oscillators. For this RNN, we rigorously showed that under verifiable hypotheses on the time step and weight matrices, the hidden states are bounded (5) and obtained precise bounds on the gradients (Jacobians) of the hidden states, (9) and (16). Thus by design, this architecture can mitigate the exploding and vanishing gradient problem (EVGP) for RNNs. We present a series of numerical experiments that include sequential image classification, activity recognition and sentiment analysis, to demonstrate that the proposed coRNN keeps hidden states and their gradients under control, while retaining sufficient expressivity to perform complex tasks. Thus, we provide a novel and promising strategy for designing RNN architectures that are motivated by the functioning of natural systems, have rigorous bounds on hidden state gradients and are robust, accurate, straightforward to train and cheap to evaluate.
This work can be extended in different directions. For instance in this article, we have mainly focused on the learning of tasks with long-term dependencies and observed that coRNNs are comparable in performance to the best published results in the literature. Given that coRNNs are built with networks of oscillators, it is natural to expect that they will perform very well on tasks with oscillatory inputs/outputs, such as the time series analysis of high-resolution biomedical data, for instance EEG (electroencephalography) and EMG (electromyography) data and seismic activity data from geoscience. This will be pursued in a follow-up article. Similarly, applications of coRNN to language modeling will be covered in future work.
However, it is essential to point out that coRNNs might not be suitable for every learning task involving sequential inputs/outputs. As a concrete example, we consider the problem of predicting time series corresponding to a chaotic dynamical system. We recall that by construction, the underlying ODE (2) (and the discretization (3)) do not allow for super-linear (in time) separation of trajectories for nearby inputs. Thus, we cannot expect that coRNNs will be effective at predicting chaotic time series and it is indeed investigated and demonstrated for a Lorenz-96 ODE in SM§A, where we observe that the coRNN is outperformed by LSTMs in the chaotic regime.
Our main theoretical focus in this paper was to demonstrate the possible mitigation of the exploding and vanishing gradient problem. On the other hand, we only provided some heuristics and numerical evidence on why the proposed RNN still has sufficient expressivity. A priori, it is natural to think that the proposed RNN architecture might introduce a strong bias towards oscillatory functions. However, as we argue in SM§C, the proposed coRNN can be significantly more expressive, as the damping, forcing and coupling of several oscillators modulates nonlinear response to yield a very rich and diverse set of output states. This is also evidenced by the ability of coRNNs to deal with many tasks in our numerical experiments, which do not have an explicit oscillatory structure. This sets the stage for a rigorous investigation of universality of the proposed coRNN architecture, as in the case of echo state networks in Grigoryeva & Ortega (2018). A possible approach would be to leverage the ability of the proposed RNN to convert general inputs into a rich set of superpositions of harmonics (oscillatory wave forms). Moreover, the proposed RNN was based on the simplest model of coupled oscillators (1). Much more detailed models of oscillators are available, particularly those that arise in the modeling of biological neurons, Stiefel & Ermentrout (2016) and references therein. An interesting variant of our proposed RNN would be to base the RNN architecture on these more elaborate models, resulting in analogues of the spiking neurons model of Maass (2001) for RNNs.
Supplementary Material for:
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable
architecture for learning long time dependencies
A CHAOTIC TIME-SERIES PREDICTION.
According to proposition E.1, coRNN does not exhibit chaotic behavior by design. While this property is highly desirable for learning long-term dependencies (a slight perturbation of the input should not result in an unbounded perturbation of the prediction), it impairs the performance on tasks, where the network has to learn actual chaotic dynamics. To test this numerically, we consider the following version of the Lorenz 96 system: (Lorenz, 1996):
x′_j = (x_{j+1} − x_{j−2}) x_{j−1} − x_j + F, (17)
where xj ∈ R for all j = 1, . . . , 5 and F is an external force controlling the level of chaos in the system. Fig. 5 shows a trajectory of the system (17) plotted on the x1x2-plane for a small external
force of F = 0.9 as well as a trajectory for a large external force of F = 8. We can see that while for F = 0.9 the system does not exhibit chaotic behavior, the dynamics for F = 8 is already highly chaotic.
Our task consists of predicting the 25-th next state of a trajectory of the system (17). We provide 128 trajectories of length 2000 for each of the training, validation and test sets. The trajectories are generated by numerically solving the system (17) and evaluating it at 2000 equidistantly distributed discrete time points with distance 0.01. The initial value for each trajectory is chosen uniform at random on [F − 1/2, F + 1/2]5 around the equilibrium point (F, . . . , F ) of the system (17). Since LSTMs are known to be able to produce chaotic dynamics, even in the autonomous (zero-entry) case (Laurent & von Brecht, 2017), we expect them to perform significantly better than coRNN if the underlying system exhibits strong chaotic behavior. Table 6 shows the normalized root mean square error (NRMSE) (RMSE divided by the root mean square of the target trajectory) on the test set for coRNN and LSTM. We can see that indeed for the non-chaotic case of using an external force of F = 0.9 LSTM and coRNN perform similarly. However, when the dynamics get chaotic (in this case using an external force of F = 8), the LSTM clearly outperforms coRNN.
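A trajectory of (17) with the sampling described above can be generated, for instance, with SciPy (an assumption-laden sketch, not the authors' data generator):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96_trajectory(F, length=2000, dt=0.01, dim=5, rng=np.random.default_rng(0)):
    def rhs(t, x):
        # (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F with periodic indices
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    x0 = rng.uniform(F - 0.5, F + 0.5, dim)               # uniform around the equilibrium (F,...,F)
    t_eval = np.arange(length) * dt
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), x0, t_eval=t_eval, rtol=1e-8, atol=1e-8)
    return sol.y.T                                         # shape (length, dim)
```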
B TRAINING DETAILS
The IMDB task was conducted on an NVIDIA GeForce GTX 1080 Ti GPU, while all other experiments were run on an Intel Xeon E3-1585Lv5 CPU. The weights and biases of coRNN are randomly initialized according to U(−1/√n_in, 1/√n_in), where n_in denotes the input dimension of each affine transformation. Instead of treating the parameters ∆t, γ and ε as fixed hyperparameters, we can also treat them as trainable network parameters by constraining ∆t to [0, 1] by using a sigmoidal activation function and ε, γ > 0 by the use of ReLU, for instance. However, in this case no major difference in performance is obtained. The hyperparameters are optimized with a random search algorithm, where the results of the best performing coRNN (based on the validation set) are reported. The ranges of the hyperparameters for the random search algorithm are provided in Table 7. Table 8 shows the rounded hyperparameters of the best performing coRNN architecture resulting from the random search algorithm for each learning task. We used 100 training epochs for sMNIST, psMNIST and noise padded CIFAR-10 with additional 20 epochs in which the learning rate was reduced by a factor of 10. Additionally, we used 100 epochs for the IMDB task and 250 epochs for the HAR-2 task.
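The initialization rule U(−1/√n_in, 1/√n_in) amounts to the following (a minimal sketch; the helper name and the W/Wv/V/b grouping are placeholders):

```python
import numpy as np

def init_cornn_params(n_inp, n_hid, rng=np.random.default_rng(0)):
    def uniform(shape, n_in):
        bound = 1.0 / np.sqrt(n_in)
        return rng.uniform(-bound, bound, shape)
    W  = uniform((n_hid, n_hid), n_hid)   # hidden-to-hidden weights acting on y
    Wv = uniform((n_hid, n_hid), n_hid)   # hidden-to-hidden weights acting on z
    V  = uniform((n_hid, n_inp), n_inp)   # input-to-hidden weights
    b  = uniform((n_hid,), n_hid)         # bias
    return W, Wv, V, b
```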
C HEURISTICS OF NETWORK FUNCTION
At the level of a single neuron, the dynamics of the RNN is relatively straightforward. We start with the scalar case, i.e. m = d = 1 and illustrate different hidden states y as a function of time, for different input signals, in Fig. 6. In this figure, we consider two different input signals, one oscillatory signal given by u(t) = cos(4t) and another is a combination of step functions. First, we plot the solution y(t) of (1), with the parameters V,b,W,W, = 0 and γ = 1. This simply corresponds to the case of a simple harmonic oscillator (SHO) and the solution is described by a sine wave with the natural frequency of the oscillator. Next, we introduce forcing by the input signal by setting V = 1 and the activation function is the identity σ(x) = x, leading to a forced damped oscillator (FDO). As seen from Fig. 6, in the case of an oscillatory signal, this leads to a very minor change over the SHO,
whereas for the step function, the change is only in the amplitude of the wave. Next, we add damping by setting = 0.25 and see that the resulting forced damped oscillator (FDO), merely damps the amplitude of the waves, without changing their frequency. Then, we consider the case of controlled oscillator (CFDO) by setting W = −2,V = 2,b = 0.25,W = 0.75. As seen from Fig. 6, this leads to a significant change in the wave form in both cases. For the oscillatory input, the output is now a superposition of many different forms, with different amplitudes and frequencies (phases) whereas for the step function input, the phase is shifted. Already, we can see that for a linear controlled oscillator, the output can be very complicated with the superposition of different waves. This holds true when the activation function is set to σ(x) = tanh(x) (which is our proposed coRNN). For both inputs, the output is a modulated version of the one generated by CFDO, expressed as a superposition of waves. On the other hand, we also plot the solution with a Duffing type oscillator (DUFF) by setting the activation function as,
σ(x) = x − x³/3. (18)
In this case, the solution is very different from the CFDO and coRNN solutions and is heavily damped (either in the output or its derivative). On the other hand, given the chaotic nature of the dynamical system in this case, a slight change in the parameters led to the output blowing up. Thus, a bounded nonlinearity seems essential in this context.
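The single-neuron experiments of this section can be reproduced qualitatively with a few lines of NumPy (a sketch under the parameter values quoted above; `Wv` plays the role of the second scalar weight):

```python
import numpy as np

def single_oscillator(u, sigma, W=-2.0, Wv=0.75, V=2.0, b=0.25, eps=0.25, gamma=1.0,
                      dt=0.01, steps=2000):
    """Integrate the scalar version of (2) with a small explicit time step."""
    y, z, ys = 0.0, 0.0, []
    for n in range(steps):
        t = n * dt
        z += dt * (sigma(W * y + Wv * z + V * u(t) + b) - gamma * y - eps * z)
        y += dt * z
        ys.append(y)
    return np.array(ys)

# e.g. the coRNN response to the oscillatory input u(t) = cos(4t):
# trajectory = single_oscillator(lambda t: np.cos(4 * t), np.tanh)
```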
Coupling neurons together further accentuates this generation of superpositions of different waveforms, as seen even with the simplest case of a network with two neurons, shown in Fig. 6 (Bottom row). For this figure, we consider two neurons, i.e m = 2 and two different network topologies. For the first, we only allow the first neuron to influence the second one and not vice versa. This is enforced with the weight matrices,
W = [ −2 , 0 ; 3 , −2 ],  W = [ 0.75 , 0 ; −1 , 0.75 ].
We also set V = [2, 2]>,b = [0.25, 0.25]>. Note that in this case (we name as ORD (for ordered connections)), the output of the first neuron should be exactly the same as in the uncoupled (UC) case, whereas there is a distinct change in the output of the second neuron and we see that the first neuron has modulated a sharp change in the resulting output wave form. It is well illustrated by the emergence of an approximation to the step function (Bottom Right of Fig. 6), even though the input signal is oscillatory.
Next, we consider the case of fully connected (FC) neurons by setting the weight matrices as,
W = [ −2 , 1 ; 3 , −2 ],  W = [ 0.75 , 0.3 ; −1 , 0.75 ].
The resulting outputs for the first neuron are now slightly different from the uncoupled case. On the other hand, the approximation of the step function output for the second neuron is further accentuated.
Even these simple examples illustrate the functioning of a network of controlled oscillators well. The input signal is converted into a superposition of waves with different frequencies and amplitudes, with these quantities being controlled by the weights and biases in (1). Thus, very complicated outputs can be generated by modulating the number, frequencies and amplitudes of the waves. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics, along the lines of emergence of synchronization or bistable heterogeneous behavior seen in systems of idealized oscillators and explained by their mean field limit, see H. Sakaguchi & Kuramoto (1987); Winfree (1967); Strogatz (2001). Thus, we argue that the ability of the network of (forced, driven) oscillators to access a very rich set of output states can lead to high expressivity of the system. The training process selects the weights that modulate frequencies, phases and amplitudes of individual neurons and their interaction to guide the system to its target output.
D BOUNDS ON THE DYNAMICS OF THE ORDINARY DIFFERENTIAL EQUATION
(1)
In this section, we present bounds that show how the continuous time dynamics of the ordinary differential equation (2), modeling non-linear damped and forced networks of oscillators, is constrained. We start with the following estimate on the energy of the solutions of the system (2).
Proposition D.1 Let y(t), z(t) be the solutions of the ODE system (2) at any time t ∈ [0, T], assume that the damping parameter ε ≥ 1/2 and that the initial data for (2) are given by y(0) = z(0) ≡ 0. Then, the solutions are bounded as,
y(t)^T y(t) ≤ mt/γ,  z(t)^T z(t) ≤ mt,  ∀ t ∈ (0, T]. (19)
To prove this proposition, we multiply the first equation in (2) with y(t)^T and the second equation in (2) with (1/γ) z(t)^T to obtain,
d/dt ( y(t)^T y(t)/2 + z(t)^T z(t)/(2γ) ) = z(t)^T σ(A(t))/γ − (ε/γ) z(t)^T z(t), (20)
with A(t) = Wy(t) + Wz(t) + Vu(t) + b.
Using the elementary Cauchy’s inequality repeatedly in (20) results in,
d/dt ( y(t)^T y(t)/2 + z(t)^T z(t)/(2γ) ) ≤ σ(A)^T σ(A)/(2γ) + (1/γ)(1/2 − ε) z^T z ≤ m/(2γ)   (as |σ| ≤ 1 and ε ≥ 1/2).
Integrating the above inequality over the time interval [0, t] and using the fact that the initial data are y(0) = z(0) ≡ 0, we obtain the bounds (19). The above proposition and estimate (19) clearly demonstrate that the dynamics of the network of coupled non-linear oscillators (1) is bounded. The fact that the nonlinear activation function σ = tanh is uniformly bounded in its arguments played a crucial role in deriving the energy bound (19). A straightforward adaptation of this argument leads to the following proposition about the sensitivity of the system to inputs,
Proposition D.2 Let y(t), z(t) be the solutions of the ODE system (2) with respect to the input signal u(t). Let ȳ(t), z̄(t) be the solutions of the ODE system (2), but with respect to the input signal ū(t). Assume that the damping parameter ≥ 12 and the initial data are given by,
y(0) = z(0) = ȳ(0) = z̄(0) ≡ 0. Then we have the following bound,
(y(t) − ȳ(t))^T (y(t) − ȳ(t)) ≤ 4mt/γ,  (z(t) − z̄(t))^T (z(t) − z̄(t)) ≤ 4mt,  ∀ t ∈ (0, T]. (21)
Thus, from the bound (21), there can be at most linear separation (in time) between the trajectories of the ODE (2) for different input signals. Hence, chaotic behavior, which is characterized by the (super-)exponential separation of trajectories, is ruled out by the structure of the ODE system (2). Note that this property of the ODE system was primarily a result of the uniform boundedness of the activation function σ. Using a different activation function such as ReLU might enable an exponential separation of trajectories, which is a prerequisite for a chaotic dynamical system.
D.1 GRADIENT DYNAMICS FOR THE ODE SYSTEM (2)
Let θ denote the i, j-th entry of the Weight matrices W,W,V or the i-th entry of the bias vector b. We are interested in finding out how the gradients of the hidden state y (and the auxiliary hidden state z) with respect to parameter θ, vary with time. Note that these gradients are precisely the objects of interest in the training of an RNN, based on a discretization of the ODE system (2). To this end, we differentiate (2) with respect to the parameter θ and denote
y_θ(t) = ∂y/∂θ (t),  z_θ(t) = ∂z/∂θ (t),
to obtain,
y′_θ = z_θ,  z′_θ = diag(σ′(A)) [W y_θ + W z_θ] + Z^{i,j}_{m,m̄}(A) ρ − γ y_θ − ε z_θ. (22)
As introduced before, Z^{i,j}_{m,m̄}(A) ∈ R^{m×m̄} is a matrix whose elements are all zero except for the (i, j)-th entry, which is set to σ′(A(t))_i, i.e. the i-th entry of σ′(A), and we have,
ρ = y, m̄ = m, if θ = Wi,j ,
ρ = z, m̄ = m, if θ = Wi,j ,
ρ = u, m̄ = d, if θ = Vi,j ,
ρ = 1, m̄ = 1, if θ = bi.
We see from (22) that the ODEs governing the gradients with respect to the parameter θ also represent a system of oscillators but with additional coupling and forcing terms, proportional to the hidden states y, z or input signal u. As we have already proved with estimate (19) that the hidden states are always bounded and the input signal is assumed to be bounded, it is natural to expect that the gradients of the states with respect to θ are also bounded. We make this statement explicit in the following proposition, which for simplicity of exposition, we consider the case of θ = Wi,j , as the other values of θ are very similar in their behavior.
Proposition D.3 Let θ = Wi,j and y, z be the solutions of the ODE system (2). Assume that the weights and the damping parameter satisfy,
‖W‖∞ + ‖W‖∞ ≤ ε, then we have the following bounds on the gradients,
y_θ(t)^T y_θ(t) + (1/γ)( z_θ(t)^T z_θ(t) ) ≤ [ y_θ(0)^T y_θ(0) + (1/γ)( z_θ(0)^T z_θ(0) ) ] e^{Ct} + m t²/(2γ²),  t ∈ (0, T],
C = max{ ‖W‖₁/γ , 1 + ‖W‖₁ }. (23)
The proof of this proposition follows exactly along the same lines as the proof of proposition D.1 and we skip the details, while noting the crucial role played by the energy bound (19).
We remark that the bound (23) indicates that as long as the initial gradients with respect to θ are bounded and the weights are controlled by the damping parameter, the hidden state gradients remain bounded in time.
E SUPPLEMENT TO THE RIGOROUS ANALYSIS OF CORNN
In this section, we supplement the section on the rigorous analysis of the proposed RNN (4). We start with
E.1 PROOF OF PROPOSITION 3.1
We multiply (3) by (y_{n−1}^T, z_n^T) and use the elementary identities
a^T(a − b) = a^T a/2 − b^T b/2 + (1/2)(a − b)^T (a − b),  b^T(a − b) = a^T a/2 − b^T b/2 − (1/2)(a − b)^T (a − b),
to obtain the following,
(y_n^T y_n + z_n^T z_n)/2 = (y_{n−1}^T y_{n−1} + z_{n−1}^T z_{n−1})/2 + (y_n − y_{n−1})^T (y_n − y_{n−1})/2 − (z_n − z_{n−1})^T (z_n − z_{n−1})/2 + ∆t z_n^T σ(A_{n−1}) − ∆t z_n^T z_n
≤ (y_{n−1}^T y_{n−1} + z_{n−1}^T z_{n−1})/2 + ∆t (1/2 + ∆t/2 − 1) z_n^T z_n + (∆t/2) σ(A_{n−1})^T σ(A_{n−1})
≤ (y_{n−1}^T y_{n−1} + z_{n−1}^T z_{n−1})/2 + m∆t/2   (as σ² ≤ 1 and ∆t << 1).
Iterating the above inequality n times leads to the energy bound,
y_n^T y_n + z_n^T z_n ≤ y_0^T y_0 + z_0^T z_0 + nm∆t = m t_n, (24)
as y0 = z0 = 0.
E.2 SENSITIVITY TO INPUTS
Next, we examine how changes in the input signal u affect the dynamics. We have the following proposition:
Proposition E.1 Let y_n, z_n be the hidden states of the trained RNN (4) with respect to the input u = {u_n}_{n=1}^N and let ȳ_n, z̄_n be the hidden states of the same RNN (4), but with respect to the input ū = {ū_n}_{n=1}^N; then the differences in the hidden states are bounded by,
(y_n − ȳ_n)^T (y_n − ȳ_n) + (z_n − z̄_n)^T (z_n − z̄_n) ≤ 4m t_n. (25)
The proof of this proposition is completely analogous to the proof of proposition 3.1: we subtract
ȳ_n = ȳ_{n−1} + ∆t z̄_n,  z̄_n = z̄_{n−1}/(1+∆t) + (∆t/(1+∆t)) σ(Ā_{n−1}) − (∆t/(1+∆t)) ȳ_{n−1},  Ā_{n−1} := W ȳ_{n−1} + W z̄_{n−1} + V ū_n + b, (26)
from (4) and multiply the difference by ((y_n − ȳ_n)^T, (z_n − z̄_n)^T). The estimate (25) follows identically to the proof of (5) (presented above) by realizing that |σ(A_{n−1}) − σ(Ā_{n−1})| ≤ 2. Note that the bound (25) ensures that the hidden states can only separate linearly in time for changes in the input. Thus, chaotic behavior, such as for Duffing type oscillators, characterized by at least exponential separation of trajectories, is ruled out for this proposed RNN, showing that it is stable with respect to changes in the input. This is largely on account of the fact that the activation function σ in (3) is globally bounded.
E.3 PROOF OF PROPOSITION 3.2
From (6), we readily calculate that,
∂E_n/∂X_n = [y_n − ȳ_n, 0]. (27)
Similarly from (3), we calculate,
∂⁺X_k/∂θ =
[ ((∆t²/(1+∆t)) Z^{i,j}_{m,m}(A_{k−1}) y_{k−1})^T , ((∆t/(1+∆t)) Z^{i,j}_{m,m}(A_{k−1}) y_{k−1})^T ]^T, if θ = (i, j)-th entry of W,
[ ((∆t²/(1+∆t)) Z^{i,j}_{m,m}(A_{k−1}) z_{k−1})^T , ((∆t/(1+∆t)) Z^{i,j}_{m,m}(A_{k−1}) z_{k−1})^T ]^T, if θ = (i, j)-th entry of W,
[ ((∆t²/(1+∆t)) Z^{i,j}_{m,d}(A_{k−1}) u_k)^T , ((∆t/(1+∆t)) Z^{i,j}_{m,d}(A_{k−1}) u_k)^T ]^T, if θ = (i, j)-th entry of V,
[ ((∆t²/(1+∆t)) Z^{i,1}_{m,1}(A_{k−1}))^T , ((∆t/(1+∆t)) Z^{i,1}_{m,1}(A_{k−1}))^T ]^T, if θ = i-th entry of b, (28)
where Z^{i,j}_{m,m̄}(A_{k−1}) ∈ R^{m×m̄} is a matrix whose elements are all zero except for the (i, j)-th entry, which is set to σ′(A_{k−1})_i, i.e. the i-th entry of σ′(A_{k−1}). We easily see that ‖Z^{i,j}_{m,m̄}(A_{k−1})‖∞ ≤ 1 for all i, j, m, m̄ and all choices of A_{k−1}.
Now, using definitions of matrix and vector norms and applying (14) in (10), together with (27) and (28), we obtain the following estimate on the norm:
|∂E_n^(k)/∂θ| ≤
(‖y_n‖∞ + ‖ȳ_n‖∞)(1 + 3(n − k)∆t^r) δ∆t ‖y_{k−1}‖∞, if θ is an entry of W,
(‖y_n‖∞ + ‖ȳ_n‖∞)(1 + 3(n − k)∆t^r) δ∆t ‖z_{k−1}‖∞, if θ is an entry of W,
(‖y_n‖∞ + ‖ȳ_n‖∞)(1 + 3(n − k)∆t^r) δ∆t ‖u_k‖∞, if θ is an entry of V,
(‖y_n‖∞ + ‖ȳ_n‖∞)(1 + 3(n − k)∆t^r) δ∆t, if θ is an entry of b. (29)
We will estimate the above term just for the case where θ is an entry of W; the remaining cases are estimated very similarly.
For simplicity of notation, we let k − 1 ≈ k and aim to estimate the term,∣∣∣∣∣∂E(k)n∂θ ∣∣∣∣∣ ≤ ‖yn‖∞‖yk‖∞(1 + 3(n− k)∆tr)δ∆t+ ‖ȳn‖∞‖yk‖∞(1 + 3(n− k)∆tr)δ∆t ≤ m √ nk∆t(1 + 3(n− k)∆tr)δ∆t+ ‖ȳn‖∞ √ mk √
∆t(1 + 3(n− k)∆tr)δ∆t (by (5)) ≤ m √ nkδ∆t2 + 3m √ nk(n− k)δ∆tr+2 + ‖ȳn‖∞ √ mk √
∆t(1 + 3(n− k)∆tr)δ∆t. (30)
To further analyze the above estimate, we recall that n∆t = tn ≤ 1 and consider two different regimes. Let us start by considering short-term dependencies by letting k ≈ n, i.e n− k = c with constant c ∼ O(1), independent of n, k. In this case, a straightforward application of the above assumptions in the bound (30) yields,∣∣∣∣∣∂E(k)n∂θ
∣∣∣∣∣ ≤ m√nkδ∆t2 + 3m√nk(n− k)δ∆tr+2 + ‖ȳn‖∞√m√tnδ∆t+ ‖ȳn‖∞√m√tncδ∆tr+1 ≤ mtnδ∆t+mctnδ∆tr+1 + ‖ȳn‖∞ √ m √ tnδ∆t+ ‖ȳn‖∞ √ m √ tncδ∆t r+1
≤ tnmδ∆t+ ‖ȳn‖∞ √ m √ tnδ∆t (for ∆t << 1 as r ≥ 1/2) ≤ mδ∆t+ ‖ȳn‖∞ √ mδ∆t.
(31)
Next, we consider long-term dependencies by setting $k \ll n$ and estimating,
$$
\begin{aligned}
\left|\frac{\partial E^{(k)}_n}{\partial\theta}\right| &\le m\sqrt{nk}\,\delta\Delta t^2 + 3m\sqrt{nk}(n-k)\,\delta\Delta t^{r+2} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,n\,\delta\Delta t^{r+\frac{3}{2}}\\
&\le m\sqrt{t_n}\,\delta\Delta t^{\frac{3}{2}} + 3m\, t_n^{\frac{3}{2}}\,\delta\Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,t_n\,\delta\Delta t^{r+\frac{1}{2}}\\
&\le m\delta\Delta t^{\frac{3}{2}} + 3m\delta\Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{r+\frac{1}{2}} \quad (\text{as } t_n < 1)\\
&\le 3m\delta\Delta t^{r+\frac{1}{2}} + 3\|\bar{y}_n\|_\infty\sqrt{m}\,\delta\Delta t^{r+\frac{1}{2}} \quad (\text{as } r \le 1 \text{ and } \Delta t \ll 1).
\end{aligned} \qquad (32)
$$
Thus, in all cases, we have that,
$$
\left|\frac{\partial E^{(k)}_n}{\partial\theta}\right| \le 3\delta\Delta t\left(m + \sqrt{m}\,\|\bar{y}_n\|_\infty\right) \quad (\text{as } r \ge 1/2). \qquad (33)
$$
Applying the above estimate in (10) allows us to bound the gradient by,
$$
\left|\frac{\partial E_n}{\partial\theta}\right| \le \sum_{1\le k\le n}\left|\frac{\partial E^{(k)}_n}{\partial\theta}\right| \le 3\delta t_n\left(m + \sqrt{m}\,\|\bar{y}_n\|_\infty\right). \qquad (34)
$$
Therefore, the gradient of the loss function (6) can be bounded as,
$$
\begin{aligned}
\left|\frac{\partial E}{\partial\theta}\right| &\le \frac{1}{N}\sum_{n=1}^{N}\left|\frac{\partial E_n}{\partial\theta}\right|
\le 3\delta\left[\frac{m\Delta t}{N}\sum_{n=1}^{N} n + \frac{\sqrt{m}\,\Delta t}{N}\sum_{n=1}^{N}\|\bar{y}_n\|_\infty\, n\right]
\le 3\delta\left[\frac{m\Delta t}{N}\sum_{n=1}^{N} n + \frac{\sqrt{m}\,\bar{Y}\Delta t}{N}\sum_{n=1}^{N} n\right]\\
&\le \frac{3}{2}\,\delta\,(N+1)\Delta t\left(m + \bar{Y}\sqrt{m}\right)
\le \frac{3}{2}\,\delta\,(t_N + \Delta t)\left(m + \bar{Y}\sqrt{m}\right)
\le \frac{3}{2}\,\delta\,(1 + \Delta t)\left(m + \bar{Y}\sqrt{m}\right) \quad (\text{as } t_N = 1)\\
&\le \frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right),
\end{aligned} \qquad (35)
$$
which is the desired estimate (9).
E.4 ON THE ASSUMPTION (8) AND TRAINING
Note that all of the above estimates were based on the fact that we were able to choose a time step $\Delta t$ in (3) that enforces the condition (8). For any fixed weights $W, \mathcal{W}$, we can indeed choose such a value of $\Delta t$ to satisfy (8). However, we train the RNN to find the weights that minimize the loss function (6). Can we find a hyperparameter $\Delta t$ such that (8) is satisfied at every step of the stochastic gradient descent method used for training?
To investigate this issue, we consider a simple gradient descent method of the form:
$$
\theta^{\ell+1} = \theta^{\ell} - \zeta\,\frac{\partial E}{\partial\theta}(\theta^{\ell}). \qquad (36)
$$
Note that ζ is the constant (non-adapted) learning rate. We assume for simplicity that θ0 = 0 (other choices lead to the addition of a constant). Then, a straightforward estimate on the weight is given by,
$$
\begin{aligned}
|\theta^{\ell+1}| &\le |\theta^{\ell}| + \zeta\left|\frac{\partial E}{\partial\theta}(\theta^{\ell})\right| \le |\theta^{\ell}| + \zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right) \quad (\text{by } (35))\\
&\le |\theta^{0}| + \ell\,\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right) = \ell\,\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right).
\end{aligned} \qquad (37)
$$
In order to calculate the minimum number of steps $L$ in the gradient descent method (36) such that the condition (8) is satisfied, we set $\ell = L$ in (37); applying it to the condition (8) leads to the straightforward estimate,
$$
L \ge \frac{1}{\zeta\,\frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right)\, m\,\Delta t^{1-r}\,\delta}. \qquad (38)
$$
Note that the parameter δ < 1, while in general, the learning rate ζ << 1. Thus, as long as r ≤ 1, we see that the assumption (8) holds for a large number of steps of the gradient descent method. We remark that the above estimate (38) is a large underestimate on L. In the experiments presented in this article, we are able to take a very large number of training steps, while the gradients remain within a range (see Fig. 3).
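For concreteness, the condition (8) is easy to monitor numerically during training. The following minimal numpy sketch evaluates the left-hand side of (8) for given weight matrices W (multiplying y) and 𝒲 (multiplying z); the function names and the default choice of r are our own illustrative assumptions, not part of the original method.

```python
import numpy as np

def eta(W, W_z, dt):
    """Left-hand side of condition (8): the larger of the two scaled weight norms."""
    norm_W = np.abs(W).sum(axis=1).max()     # infinity (max row-sum) norm of W
    norm_Wz = np.abs(W_z).sum(axis=1).max()  # infinity norm of the z-weight matrix
    return max(dt * (1.0 + norm_W) / (1.0 + dt), dt * norm_Wz / (1.0 + dt))

def condition_8_holds(W, W_z, dt, r=1.0):
    """Check eta <= dt**r, as assumed in proposition 3.2, for the current weights."""
    return eta(W, W_z, dt) <= dt ** r
```

In practice, such a check can be evaluated every few training steps to confirm that the trained weights stay within the regime covered by the theory, as reported in Fig. 3.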
E.5 PROOF OF PROPOSITION 3.3
We start with the following decomposition of the recurrent matrices:
$$
\frac{\partial X_i}{\partial X_{i-1}} = M_{i-1} + \Delta t\,\tilde{M}_{i-1}, \qquad
M_{i-1} := \begin{bmatrix} I & \Delta t\, C_{i-1}\\ B_{i-1} & C_{i-1}\end{bmatrix}, \qquad
\tilde{M}_{i-1} := \begin{bmatrix} B_{i-1} & 0\\ 0 & 0\end{bmatrix},
$$
with B,C defined in (12). By the assumption (8), one can readily check that ‖M̃i−1‖∞ ≤ ∆t, for all k ≤ i ≤ n− 1. We will use an induction argument to show the following representation formula for the product of Jacobians,
$$
\frac{\partial X_n}{\partial X_k} = \prod_{k<i\le n}\frac{\partial X_i}{\partial X_{i-1}} =
\begin{bmatrix}
I & \Delta t\sum\limits_{j=k}^{n-1}\prod\limits_{i=j}^{k} C_i\\[1.5ex]
B_{n-1} + \sum\limits_{j=n-2}^{k}\left(\prod\limits_{i=n-1}^{j+1} C_i\right) B_j & \prod\limits_{i=n-1}^{k} C_i
\end{bmatrix} + \mathcal{O}(\Delta t). \qquad (39)
$$
We start with the outermost product and calculate,
$$
\frac{\partial X_n}{\partial X_{n-1}}\frac{\partial X_{n-1}}{\partial X_{n-2}} = \left(M_{n-1} + \Delta t\tilde{M}_{n-1}\right)\left(M_{n-2} + \Delta t\tilde{M}_{n-2}\right) = M_{n-1}M_{n-2} + \Delta t\left(\tilde{M}_{n-1}M_{n-2} + M_{n-1}\tilde{M}_{n-2}\right) + \mathcal{O}(\Delta t^2).
$$
By direct multiplication, we obtain,
$$
M_{n-1}M_{n-2} =
\begin{bmatrix}
I & \Delta t\left(C_{n-2} + C_{n-1}C_{n-2}\right)\\
B_{n-1} + C_{n-1}B_{n-2} & C_{n-1}C_{n-2}
\end{bmatrix}
+ \Delta t
\begin{bmatrix}
C_{n-1}B_{n-2} & 0\\
0 & B_{n-1}C_{n-2}
\end{bmatrix}.
$$
Using the definitions in (12) and (8), we can easily see that
$$
\begin{bmatrix}
C_{n-1}B_{n-2} & 0\\
0 & B_{n-1}C_{n-2}
\end{bmatrix} = \mathcal{O}(\Delta t).
$$
Similarly, it is easy to show that
$$
\tilde{M}_{n-1}M_{n-2},\; M_{n-1}\tilde{M}_{n-2} \sim \mathcal{O}(\Delta t).
$$
Plugging all the above estimates yields,
$$
\frac{\partial X_n}{\partial X_{n-1}}\frac{\partial X_{n-1}}{\partial X_{n-2}} =
\begin{bmatrix}
I & \Delta t\left(C_{n-2} + C_{n-1}C_{n-2}\right)\\
B_{n-1} + C_{n-1}B_{n-2} & C_{n-1}C_{n-2}
\end{bmatrix} + \mathcal{O}(\Delta t^2),
$$
which is exactly the form of the leading term (39).
Iterating the above calculations (n− k) times and realizing that (n− k)∆t2 ≈ n∆t2 = tn∆t yields the formula (39).
Recall that we have set θ = Wi,j , for some 1 ≤ i, j ≤ m in proposition 3.3. Directly calculating with (27), (28) and the representation formula (39) yields the formula,
$$
\frac{\partial E^{(k)}_n}{\partial\theta} = y_n^\top\,\Delta t^2\delta\, Z^{i,j}_{m,m}(A_{k-1})\,y_{k-1} + y_n^\top\,\Delta t^2\delta\, C^{*}\, Z^{i,j}_{m,m}(A_{k-1})\,y_{k-1} + \mathcal{O}(\Delta t^3), \qquad (40)
$$
with matrix C∗ defined as,
$$
C^{*} := \sum_{j=k}^{n-1}\prod_{i=j}^{k} C_i,
$$
and $Z^{i,j}_{m,m}(A_{k-1}) \in \mathbb{R}^{m\times m}$ is a matrix whose elements are all zero except for the $(i,j)$-th entry, which is set to $\sigma'(a^i_{k-1})$, i.e. the $i$-th entry of $\sigma'(A_{k-1})$.
Note that the formula (40) can be explicitly written as,
$$
\frac{\partial E^{(k)}_n}{\partial\theta} = \delta\Delta t^2\sigma'(a^i_{k-1})\,y^i_n\,y^j_{k-1} + \delta\Delta t^2\sigma'(a^i_{k-1})\sum_{\ell=1}^{m} C^{*}_{\ell i}\, y^\ell_n\, y^j_{k-1} + \mathcal{O}(\Delta t^3), \qquad (41)
$$
with $y^j_n$ denoting the $j$-th element of the vector $y_n$, and
$$
a^i_{k-1} := \sum_{\ell=1}^{m} W_{i\ell}\, y^\ell_{k-1} + \sum_{\ell=1}^{m} \mathcal{W}_{i\ell}\, z^\ell_{k-1}. \qquad (42)
$$
By the assumption (8), we can readily see that
$\|W\|_\infty,\ \|\mathcal{W}\|_\infty \le 1 + \Delta t$. Therefore, by the fact that $\sigma' = \mathrm{sech}^2$, the assumption $y^i_k = \mathcal{O}(\sqrt{t_k})$ and (42), we obtain,
$$
\hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right) \le \sigma'(a^i_{k-1}) \le 1. \qquad (43)
$$
Using (43) in (41), we obtain,
$$
\delta\Delta t^2\sigma'(a^i_{k-1})\,y^i_n\,y^j_{k-1} = \mathcal{O}\!\left(\hat{c}\,\delta\,\Delta t^{\frac{5}{2}}\right). \qquad (44)
$$
Using the definition of Ci, we can expand the product in C∗ and neglect terms of order O(∆t4), to obtain
$$
\prod_{i=j}^{k} C_i = \left(\mathcal{O}(1) + \mathcal{O}\!\left((j-k+1)\,\delta\Delta t^2\right)\right) I.
$$
Summing over j and using the fact that k << n, we obtain that
$$
C^{*} = \left(\mathcal{O}(n) + \mathcal{O}(\delta)\right) I. \qquad (45)
$$
Plugging (45) and (43) into (41) leads to,
$$
\delta\Delta t^2\sigma'(a^i_{k-1})\sum_{\ell=1}^{m} C^{*}_{\ell i}\, y^\ell_n\, y^j_{k-1} = \mathcal{O}\!\left(\hat{c}\,\delta\,\Delta t^{\frac{3}{2}}\right) + \mathcal{O}\!\left(\hat{c}\,\delta^2\Delta t^{\frac{5}{2}}\right). \qquad (46)
$$
Combining (44) and (46) yields the desired estimate (16).
Remark. A careful examination of the above proof reveals that the constants hidden in the prefactors of the leading term O ( ĉδ∆t 3 2 ) of (16) stem from the formula (46). Here, we have used the assumption that yik = O( √ tk). Note that this assumption implicitly assumes that the energy bound (5) is equidistributed among all the elements of the vector yk and results in the obfuscation of the constants in the leading term of (16). Given that the energy bound (5) is too coarse to allow for precise upper and lower bounds on each individual element of the hidden state vector yk, we do not see any other way of, in general, determining the distribution of energy among individual entries of the hidden state vector. Thus, assuming equidistribution seems reasonable. On the other hand, in practice, one has access to all the terms in formula (46) for each numerical experiment and if one is interested, then one can directly evaluate the precise bound on the leading term of the formula (16).
F RIGOROUS ESTIMATES FOR THE RNN (3) WITH n̄ = n − 1 AND GENERAL VALUES OF ε, γ
In this section, we will provide rigorous estimates, similar to those of propositions 3.1, E.1 and 3.2, for the version of coRNN (3) that results from setting $\bar{n} = n-1$ in (3), leading to,
$$
y_n = y_{n-1} + \Delta t z_n, \quad z_n = z_{n-1} + \Delta t\,\sigma\!\left(W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b\right) - \Delta t\gamma y_{n-1} - \Delta t\epsilon z_{n-1}. \qquad (47)
$$
Note that (47) can be equivalently written as,
$$
y_n = y_{n-1} + \Delta t z_n, \quad z_n = \left(1 - \epsilon\Delta t\right) z_{n-1} + \Delta t\,\sigma\!\left(W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b\right) - \Delta t\gamma y_{n-1}. \qquad (48)
$$
We will also consider the case of non-unit values of the control parameters $\gamma$ and $\epsilon$ below.
Bounds on hidden states. We start with the following bound on the hidden states of (47),
Proposition F.1 Let the damping parameter $\epsilon > \frac{1}{2}$ and let the time step $\Delta t$ in the RNN (47) satisfy the following condition,
$$
\Delta t < \frac{2\epsilon - 1}{\gamma + \epsilon^2}. \qquad (49)
$$
Let yn, zn be the hidden states of the RNN (47) for 1 ≤ n ≤ N , then the hidden states satisfy the following (energy) bounds:
$$
y_n^\top y_n + \frac{1}{\gamma}\, z_n^\top z_n \le \frac{m t_n}{\gamma}. \qquad (50)
$$
We set $A_{n-1} = W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b$ and, as in the proof of proposition 3.1, we multiply (47) by $\left(y_{n-1}^\top, \frac{1}{\gamma} z_n^\top\right)$, use elementary identities and rearrange terms to obtain,
$$
\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} = {}& \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2} - \frac{(z_n - z_{n-1})^\top(z_n - z_{n-1})}{2\gamma}\\
& + \frac{\Delta t}{\gamma}\, z_n^\top\sigma(A_{n-1}) - \frac{\epsilon\Delta t}{\gamma}\, z_n^\top z_n + \frac{\epsilon\Delta t}{\gamma}\, z_n^\top\left(z_n - z_{n-1}\right).
\end{aligned}
$$
We use a rescaled version of the well-known Cauchy’s inequality
$$
ab \le \frac{c\,a^2}{2} + \frac{b^2}{2c},
$$
for a constant $c > 0$ to be determined, to rewrite the above identity as,
$$
\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le {}& \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2}\\
& + \left(\frac{\epsilon\Delta t}{2c\gamma} - \frac{1}{2\gamma}\right)(z_n - z_{n-1})^\top(z_n - z_{n-1}) + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1})\\
& + \left(\frac{\Delta t}{2\gamma} + \frac{c\,\epsilon\Delta t}{2\gamma} - \frac{\epsilon\Delta t}{\gamma}\right) z_n^\top z_n.
\end{aligned}
$$
Using the first equation in (47), the above inequality reduces to,
$$
\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le {}& \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma}\\
& + \left(\frac{\epsilon\Delta t}{2c\gamma} - \frac{1}{2\gamma}\right)(z_n - z_{n-1})^\top(z_n - z_{n-1}) + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1})\\
& + \left(\frac{\Delta t^2}{2} + \frac{\Delta t}{2\gamma} + \frac{c\,\epsilon\Delta t}{2\gamma} - \frac{\epsilon\Delta t}{\gamma}\right) z_n^\top z_n.
\end{aligned}
$$
As long as,
$$
\Delta t \le \min\left(\frac{c}{\epsilon},\ \frac{(2-c)\epsilon - 1}{\gamma}\right), \qquad (51)
$$
we can easily check that,
$$
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{\Delta t}{2\gamma}\,\sigma(A_{n-1})^\top\sigma(A_{n-1}) \le \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{m\Delta t}{2\gamma} \quad (|\sigma| \le 1).
$$
Iterating the above bound down to $n = 0$ and using the zero initial data yields the desired bound (50), as long as we can find a $c$ such that the condition (51) is satisfied. To do so, we equalize the two terms on the right hand side of (51) to obtain,
$$
c = \frac{\epsilon(2\epsilon - 1)}{\gamma + \epsilon^2}.
$$
From the assumption (49) and the fact that $\epsilon > \frac{1}{2}$, we see that such a $c > 0$ always exists for any value of $\gamma > 0$ and (51) is satisfied, which completes the proof.
We remark that the same bound on the hidden states is obtained for both versions of coRNN, i.e. (3) with n̄ = n and (47). However, the difference lies in the constraint on the time step ∆t. In contrast to (49), a careful examination of the proof of proposition 3.1 reveals that the condition on the time step for the stability of (3) with n̄ = n is given by,
$$
\Delta t < \frac{2\epsilon - 1}{\gamma}, \qquad (52)
$$
and is clearly less stringent than the condition (51) for the stability of (47). For instance, in the prototypical case of $\gamma = \epsilon = 1$, the stability of (3) with $\bar{n} = n$ is ensured for any $\Delta t < 1$. On the other hand, the stability of (47) is ensured as long as $\Delta t < \frac{1}{2}$. However, it is essential to recall that these conditions are only sufficient to ensure stability and are by no means necessary. Thus, in practice, the coRNN version (47) is found to be stable over the same range of time steps as the version (3) with $\bar{n} = n$.
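As a small illustration, the two sufficient time-step bounds can be evaluated side by side; the sketch below (the function name is our own) simply computes the right-hand sides of (52) and (49) for given ε > 1/2 and γ > 0.

```python
def sufficient_dt_bounds(eps, gamma):
    """Sufficient time-step bounds: (2*eps - 1)/gamma for the n_bar = n version (52),
    and (2*eps - 1)/(gamma + eps**2) for the n_bar = n - 1 version (49)."""
    assert eps > 0.5 and gamma > 0
    return (2 * eps - 1) / gamma, (2 * eps - 1) / (gamma + eps ** 2)

# e.g. sufficient_dt_bounds(1.0, 1.0) returns (1.0, 0.5), matching the discussion above.
```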
On the exploding and vanishing gradient problems for coRNN (47). Next, we have the following upper bound on the hidden state gradients for the version (47) of coRNN,
Proposition F.2 Let $y_n, z_n$ be the hidden states generated by the RNN (47). We assume that the damping parameter $\epsilon > \frac{1}{2}$ and that the time step $\Delta t$ can be chosen such that, in addition to (51), it also satisfies,
$$
\max\left\{\Delta t\left(\gamma + \|W\|_\infty\right),\ \Delta t\,\|\mathcal{W}\|_\infty\right\} = \eta \le \tilde{C}\Delta t^r, \qquad \frac{1}{2} \le r \le 1, \qquad (53)
$$
with the constant $\tilde{C}$ independent of the other parameters of the RNN (47). Then the gradient of the loss function $E$ (6) with respect to any parameter $\theta \in \Theta$ is bounded as,
$$
\left|\frac{\partial E}{\partial\theta}\right| \le \frac{3\tilde{C}\left(m + \bar{Y}\sqrt{m}\right)}{2\gamma}, \qquad (54)
$$
with the constant $\tilde{C}$ defined in (53) and $\bar{Y} = \max_{1\le n\le N}\|\bar{y}_n\|_\infty$ a bound on the underlying training data.
The proof of this proposition is completely analogous to the proof of proposition 3.2 and we omit the details here.
Note that the bound (54) enforces that hidden state gradients cannot explode for version (47) of coRNN. A similar statement for the vanishing gradient problem is inferred from the proposition below.
Proposition F.3 Let $y_n$ be the hidden states generated by the RNN (47). Under the assumption that $y^i_n = \mathcal{O}\!\left(\sqrt{\tfrac{t_n}{\gamma}}\right)$, for all $1 \le i \le m$, and (53), the gradient for long-term dependencies satisfies,
$$
\frac{\partial E^{(k)}_n}{\partial\theta} = \mathcal{O}\!\left(\frac{\hat{c}}{\gamma}\,\Delta t^{\frac{3}{2}}\right) + \mathcal{O}\!\left(\frac{\hat{c}}{\gamma}\,\delta(1+\delta)\,\Delta t^{\frac{5}{2}}\right) + \mathcal{O}(\Delta t^3), \qquad \hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right), \quad k \ll n. \qquad (55)
$$
The proof is a repetition of the steps of the proof of proposition 3.3, with suitable modifications for the structure of the RNN and non-unit $\epsilon, \gamma$, and we omit the tedious calculations here. Note that (55) rules out the vanishing gradient problem for the coRNN version (47). | 1. What is the main contribution of the paper regarding recurrent units?
2. What are the strengths and weaknesses of the proposed approach, particularly in its connection to coupled oscillators and its ability to mitigate vanishing and exploding gradients?
3. Why did the authors choose the IMEX scheme in Equation (3), and do they use an explicit scheme in practice? How does this choice affect the analysis and training process?
4. How effective is Section 3 in conveying the value of the sketch of proofs, and what changes could improve the discussion of the analysis and properties of the proposed unit?
5. Can the authors provide more context on why they want to rule out chaotic behavior and discuss the effect of dampening and forcing on the unit's performance?
6. Are there any concerns about the reported performance on the sMNIST task, and can the authors double-check their experiments?
7. How sensitive is the RNN to choices of gamma and epsilon, and can the authors provide ablation studies or insight into how difficult it is to tune the unit?
8. Would including related work or results for a language modeling task enhance the paper?
9. Is there a mistake in the citation for the Fast RNN in Table 2? | Review | Review
This paper proposes a new continuous-time formulation for modeling recurrent units. The particular form of the recurrent unit is motivated by a system of coupled oscillators. These systems are well studied and widely used in the physical, engineering and biological sciences. Establishing this connection has the potential to motivate interesting future works. The performance of the proposed recurrent unit is state of the art.
Reasons for my score: Overall, I vote for marginally above acceptance threshold. I like very much the proposed approach for modeling recurrent units. Further, the presented results are intriguing, and the paper is well written. However, I have some concerns (see below). I am happy to increase my score if the authors can address my concerns in the rebuttal period.
Pros:
Second-order systems of ODEs seem to be a promising approach for modeling recurrent units, and this approach has not received much attention for this task before. Indeed, this paper impressively demonstrates that a unit motivated by a system of coupled oscillators is able to achieve state of the art performance on a range of benchmark tasks.
The analysis shows that the particular form of the proposed continuous-time unit mitigates the vanishing and exploding gradients problem by design, which is very appealing. The analysis is mathematically sound.
Code is provided!
Cons:
In Eq. (3) the authors advocate the IMEX scheme to obtain a discretization of (2). I was very curious to see how the authors implement this scheme in practice, however, the provided implementation revealed that the authors use an explicit scheme in practice. Please, comment why you chose the IMEX scheme here. Does the analysis also hold if you use an explicit discretization in (3), and if, why do you mask the fact that you are using an explicit scheme in practice. I feel, it would be relevant to discuss how you train the unit in practice.
Section 3 is no pleasure to read. It is not clear to me what the value of the sketch of the proofs are, since the proofs are pretty standard. Instead, the space could be better used for an extended qualitative discussion of the analysis and the nice properties of the proposed recurrent unit. For instance, you can extend the discussion around proposition 3.1 and provide some context on why you want to rule out chaotic behavior (this might not be obvious for everyone); further it would be nice to see a better discussion on the effect of dampening and forcing on the performance of the recurrent unit. Also, I would like to suggest to move parts of Appendix B into the main text, since this discussion actually helps to build some intuition for the proposed unit.
The extremely good performance on the sMNIST task is slightly surprising, since my intuition would not suggest that the particular form of the unit has the ability to substantially improve the expressivity as compared to some other recently proposed units. I used the provided code to evaluate the coRNN (N=256 and 128) on the sMNIST task and the highest accuracy that I was able to obtain (out of 8 runs on 4 different GPUs) was 99.2% on the test set. These results are still very good, but they do not match the reported results. (Note, that the code is printing out the accuracy for a smaller validation set which indicates a higher accuracy than is actually obtained on the test set.) This said, I would like to ask the authors to double check the experiments on sMNIST. (Also, I assume that the model can be trained in less time if the learning rate is decayed much earlier, e.g., around epoch 30 and a second time around epoch 60.)
It is not clear to me how sensitive the RNN is to the particular choices of \gamma and \epsilon. It would be good to provide some form of ablation study that studies how the performance varies for different values of \gamma (and \epsilon) while keeping all other tuning parameters fixed (I assume that you have all these results handy since you have performed an extensive hyperparamter search). This would help to gain some better intuition for how difficult it is to tune the proposed unit. In other words, I would like to see how sharp the performance drop is if you perturb the tuning parameters slightly (i.e., plot the test accuracy as a function of \gamma and \epsilon).
Minor comments:
Given additional space, it would be nice to see an extended related work section.
It would be nice to see results for a language modeling task.
In Table 2, the citation for the Fast RNN is incorrect. |
ICLR | Title
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Abstract
Circuits of biological neurons, such as in the functional parts of the brain can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.
1 INTRODUCTION
Recurrent neural networks (RNNs) have achieved tremendous success in a variety of tasks involving sequential (time series) inputs and outputs, ranging from speech recognition to computer vision and natural language processing, among others. However, it is well known that training RNNs to process inputs over long time scales (input sequences) is notoriously hard on account of the so-called exploding and vanishing gradient problem (EVGP) (Pascanu et al., 2013), which stems from the fact that the well-established BPTT algorithm for training RNNs requires computing products of gradients (Jacobians) of the underlying hidden states over very long time scales. Consequently, the overall gradient can grow (to infinity) or decay (to zero) exponentially fast with respect to the number of recurrent interactions.
A variety of approaches have been suggested to mitigate the exploding and vanishing gradient problem. These include adding gating mechanisms to the RNN in order to control the flow of information in the network, leading to architectures such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurring units (GRU) (Cho et al., 2014), that can overcome the vanishing gradient problem on account of the underlying additive structure. However, the gradients might still explode and learning very long term dependencies remains a challenge (Li et al., 2018). Another popular approach for handling the EVGP is to constrain the structure of underlying recurrent weight matrices by requiring them to be orthogonal (unitary), leading to the so-called orthogonal RNNs (Henaff et al., 2016; Arjovsky et al., 2016; Wisdom et al., 2016; Kerg et al., 2019) and references therein. By construction, the resulting Jacobians have eigen- and singular-spectra with unit norm, alleviating the EVGP. However as pointed out by Kerg et al. (2019), imposing such constraints on the recurrent matrices may lead to a significant loss of expressivity of the RNN resulting in inadequate performance on realistic tasks.
In this article, we adopt a different approach, based on observation that coupled networks of controlled non-linear forced and damped oscillators, that arise in many physical, engineering and biological
systems, such as networks of biological neurons, do seem to ensure expressive representations while constraining the dynamics of state variables and their gradients. This motivates us to propose a novel architecture for RNNs, based on time-discretizations of second-order systems of non-linear ordinary differential equations (ODEs) (1) that model coupled oscillators. Under verifiable hypotheses, we are able to rigorously prove precise bounds on the hidden states of these RNNs and their gradients, enabling a possible solution of the exploding and vanishing gradient problem, while demonstrating through benchmark numerical experiments, that the resulting system still retains sufficient expressivity, i.e. ability to process complex inputs, with a competitive performance, with respect to the state of the art, on a variety of sequential learning tasks.
2 THE PROPOSED RNN
Our proposed RNN is based on the following second-order system of ODEs,
$$
y'' = \sigma\!\left(W y + \mathcal{W} y' + V u + b\right) - \gamma y - \epsilon y'. \qquad (1)
$$
Here, $t \in [0,1]$ is the (continuous) time variable, $u = u(t) \in \mathbb{R}^d$ is the time-dependent input signal, $y = y(t) \in \mathbb{R}^m$ is the hidden state of the RNN, $W, \mathcal{W} \in \mathbb{R}^{m\times m}$, $V \in \mathbb{R}^{m\times d}$ are weight matrices, $b \in \mathbb{R}^m$ is the bias vector and $0 < \gamma, \epsilon$ are parameters, representing the oscillation frequency and the amount of damping (friction) in the system, respectively. $\sigma: \mathbb{R} \mapsto \mathbb{R}$ is the activation function, set to $\sigma(u) = \tanh(u)$ here. By introducing the so-called velocity variable $z = y'(t) \in \mathbb{R}^m$, we rewrite (1) as the first-order system:
$$
y' = z, \quad z' = \sigma\!\left(W y + \mathcal{W} z + V u + b\right) - \gamma y - \epsilon z. \qquad (2)
$$
We fix a timestep $0 < \Delta t < 1$ and define our proposed RNN hidden states at time $t_n = n\Delta t \in [0,1]$ (while omitting the affine output state) as the following IMEX (implicit-explicit) discretization of the first-order system (2):
$$
y_n = y_{n-1} + \Delta t z_n, \quad z_n = z_{n-1} + \Delta t\,\sigma\!\left(W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b\right) - \Delta t\gamma y_{n-1} - \Delta t\epsilon z_{\bar{n}}, \qquad (3)
$$
with either $\bar{n} = n$ or $\bar{n} = n-1$. Note that the only difference between the two versions of the RNN (3) lies in the implicit ($\bar{n} = n$) or explicit ($\bar{n} = n-1$) treatment of the damping term $-\epsilon z$ in (2), whereas both versions retain the implicit treatment of the first equation in (2).
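A minimal numpy sketch of a single update of (3) is given below; variable names are our own illustrative assumptions, and the flag implicit_damping switches between the two treatments of the damping term (for n̄ = n, solving for z_n amounts to dividing by 1 + ε∆t).

```python
import numpy as np

def cornn_step(y, z, u, W, W_z, V, b, dt, gamma=1.0, eps=1.0, implicit_damping=False):
    """One step of the coRNN update (3).

    y, z : hidden state and velocity, shape (m,); u : input at the current step, shape (d,)
    W, W_z : recurrent weights multiplying y and z; V : input weights; b : bias.
    """
    A = W @ y + W_z @ z + V @ u + b
    if implicit_damping:   # n_bar = n: damping treated implicitly
        z = (z + dt * np.tanh(A) - dt * gamma * y) / (1.0 + dt * eps)
    else:                  # n_bar = n - 1: damping treated explicitly
        z = z + dt * (np.tanh(A) - gamma * y - eps * z)
    y = y + dt * z         # the y-update always uses the new velocity z_n
    return y, z
```

With gamma = eps = 1 and implicit_damping=True, this reduces to the simplified form (4) analyzed in Section 3.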
Motivation and background. To see that the underlying ODE (2) models a coupled network of controlled forced and damped nonlinear oscillators, we start with the single neuron (scalar) case by setting $d = m = 1$ in (1) and assume an identity activation function $\sigma(x) = x$. Setting $W = \mathcal{W} = V = b = \epsilon = 0$ leads to the simple ODE, $y'' + \gamma y = 0$, which exactly models simple harmonic motion with frequency $\gamma$, for instance that of a mass attached to a spring (Guckenheimer & Holmes, 1990). Letting $\epsilon > 0$ in (1) adds damping or friction to the system (Guckenheimer & Holmes, 1990). Then, by introducing a non-zero $V$ in (1), we drive the system with a driving force proportional to the input signal $u(t)$. The parameters $V, b$ modulate the effect of the driving force, $W$ controls the frequency of oscillations and $\mathcal{W}$ the amount of damping in the system. Finally, the tanh activation mediates a non-linear response in the oscillator. In the coupled network (2) with $m > 1$, each neuron updates its hidden state based on the input signal as well as information from other neurons. The diagonal entries of $W$ (and the scalar hyperparameter $\gamma$) control the frequency, whereas the diagonal entries of $\mathcal{W}$ (and the hyperparameter $\epsilon$) determine the amount of damping for each neuron, and the non-diagonal entries of these matrices modulate interactions between neurons. Hence, given this behavior of the underlying ODE (2), we term the RNN (3) a coupled oscillatory Recurrent Neural Network (coRNN).
The dynamics of the ODE (2) (and the RNN (3)) for a single neuron are relatively straightforward. As we illustrate in Fig. 6 of supplementary material SM§C, input signals drive the generation of (superpositions of) oscillatory wave-forms, whose amplitude and (multiple) frequencies are controlled by the tunable parameters W,W,V,b. Adding a tanh activation does not change these dynamics much. This is in contrast to truncating tanh to leading non-linear order by setting σ(x) = x− x3/3, which yields a Duffing type oscillator that is characterized by chaotic behavior (Guckenheimer & Holmes, 1990). Adding interactions between neurons leads to further accentuation of this generation of superposed wave forms (see Fig. 6 in SM§C) and even with very simple network topologies, one
sees the emergence of non-trivial non-oscillatory hidden states from oscillatory inputs. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics. Hence, we argue that the ability of a network of (forced, driven) oscillators to access a very rich set of output states may lead to high expressivity of the system, allowing it to approximate outputs from complicated sequential inputs.
Oscillator networks are ubiquitous in nature and in engineering systems (Guckenheimer & Holmes, 1990; Strogatz, 2015) with canonical examples being pendulums (classical mechanics), business cycles (economics), heartbeat (biology) for single oscillators and electrical circuits for networks of oscillators. Our motivating examples arise in neurobiology, where individual biological neurons can be viewed as oscillators with periodic spiking and firing of the action potential. Moreover, functional circuits of the brain, such as cortical columns and prefrontal-striatal-hippocampal circuits, are being increasingly interpreted by networks of oscillatory neurons, see Stiefel & Ermentrout (2016) for an overview. Following well-established paths in machine learning, such as for convolutional neural networks (LeCun et al., 2015), our focus here is to abstract the essence of functional brain circuits being networks of oscillators and design an RNN based on much simpler mechanistic systems, such as those modeled by (2), while ignoring the complicated biological details of neural function.
Related work. There is an increasing trend of basing RNN architectures on ODEs and dynamical systems. These approaches can roughly be classified into two branches, namely RNNs based on discretized ODEs and continuous-time RNNs. Examples of continuous-time approaches include neural ODEs (Chen et al., 2018) with ODE-RNNs (Rubanova et al., 2019) as its recurrent extension as well as E (2017) and references therein, to name just a few. We focus, however, in this article on an ODE-inspired discrete-time RNN, as the proposed coRNN is derived from a discretization of the ODE (1). A good example for a discrete-time ODE-based RNNs is the so-called anti-symmetric RNN of Chang et al. (2019), where the RNN architecture is based on a stable ODE resulting from a skew-symmetric hidden weight matrix, thus constraining the stable (gradient) dynamics of the network. This approach has much in common with previously mentioned unitary/orthogonal/nonnormal RNNs in constraining the structure of the hidden-to-hidden layer weight matrices. However, adding such strong constraints might reduce expressivity of the resulting RNN and might lead to inadequate performance on complex tasks. In contrast to these approaches, our proposed coRNN does not explicitly constrain the weight matrices but relies on the dynamics of the underlying ODE (and the IMEX discretization (3)), to provide gradient stability. Moreover, no gating mechanisms as in LSTMs/GRUs are used in the current version of coRNN. There is also an increasing interest in designing hybrid methods, which use a discretization of an ODE (in particular a Hamiltonian system) in order to learn the continuous representation of the data, see for instance Greydanus et al. (2019); Chen et al. (2020). Overall, our approach here differs from these papers in our use of networks of oscillators to build the RNN.
3 RIGOROUS ANALYSIS OF THE PROPOSED RNN
An attractive feature of the underlying ODE system (2) lies in the fact that the resulting hidden states (and their gradients) are bounded (see SM§D for precise statements and proofs). Hence, one can expect that a suitable discretization of the ODE (2) that preserves these bounds will not have exploding gradients. We claim that one such structure preserving discretization is given by the IMEX discretization that results in the RNN (3) and proceed to derive bounds on this RNN below.
Following standard practice, we set $y(0) = z(0) = 0$ and, purely for simplicity of exposition, we set the control parameters $\epsilon = \gamma = 1$ and $\bar{n} = n$ in (3), leading to,
$$
y_n = y_{n-1} + \Delta t z_n, \quad z_n = \frac{z_{n-1}}{1+\Delta t} + \frac{\Delta t}{1+\Delta t}\,\sigma(A_{n-1}) - \frac{\Delta t}{1+\Delta t}\, y_{n-1}, \quad A_{n-1} := W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b. \qquad (4)
$$
Analogous results and proofs for the case where $\bar{n} = n-1$ and for general values of $\epsilon, \gamma$ are provided in SM§F.
Bounds on the hidden states. As with the underlying ODE (2), the hidden states of the RNN (3) are bounded, i.e.
Proposition 3.1 Let yn, zn be the hidden states of the RNN (4) for 1 ≤ n ≤ N , then the hidden states satisfy the following (energy) bounds:
$$
y_n^\top y_n + z_n^\top z_n \le n m\Delta t = m t_n \le m. \qquad (5)
$$
The proof of the energy bound (5) is provided in SM§E.1 and a straightforward variant of the proof (see SM§E.2) yields an estimate on the sensitivity of the hidden states to changing inputs. As with the underlying ODE (see SM§D) , this bound rules out chaotic behavior of hidden states.
Bounds on hidden state gradients. We train the RNN (3) to minimize the loss function,
$$
E := \frac{1}{N}\sum_{n=1}^{N} E_n, \qquad E_n = \frac{1}{2}\,\|y_n - \bar{y}_n\|_2^2, \qquad (6)
$$
with $\bar{y}$ being the underlying ground truth (training data). During training, we compute gradients of the loss function (6) with respect to the weights and biases $\Theta = [W, \mathcal{W}, V, b]$, i.e.
$$
\frac{\partial E}{\partial\theta} = \frac{1}{N}\sum_{n=1}^{N}\frac{\partial E_n}{\partial\theta}, \quad \forall\, \theta \in \Theta. \qquad (7)
$$
Proposition 3.2 Let yn, zn be the hidden states generated by the RNN (4). We assume that the time step ∆t << 1 can be chosen such that,
$$
\max\left\{\frac{\Delta t\left(1 + \|W\|_\infty\right)}{1 + \Delta t},\ \frac{\Delta t\,\|\mathcal{W}\|_\infty}{1 + \Delta t}\right\} = \eta \le \Delta t^r, \qquad \frac{1}{2} \le r \le 1. \qquad (8)
$$
Denoting $\delta = \frac{1}{1+\Delta t}$, the gradient of the loss function $E$ (6) with respect to any parameter $\theta \in \Theta$ is bounded as,
$$
\left|\frac{\partial E}{\partial\theta}\right| \le \frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right), \qquad (9)
$$
with $\bar{Y} = \max_{1\le n\le N}\|\bar{y}_n\|_\infty$ a bound on the underlying training data.
Sketch of the proof. Denoting Xn = [yn, zn], we can apply the chain rule repeatedly (for instance as in Pascanu et al. (2013)) to obtain,
$$
\frac{\partial E_n}{\partial\theta} = \sum_{1\le k\le n}\underbrace{\frac{\partial E_n}{\partial X_n}\,\frac{\partial X_n}{\partial X_k}\,\frac{\partial^{+} X_k}{\partial\theta}}_{\textstyle \frac{\partial E^{(k)}_n}{\partial\theta}}. \qquad (10)
$$
Here, the notation ∂ +Xk ∂θ refers to taking the partial derivative of Xk with respect to the parameter θ, while keeping the other arguments constant. This quantity can be readily calculated from the structure of the RNN (4) and is presented in the detailed proof provided in SM§E.3. From (6), we can directly compute that ∂En∂Xn = [yn − ȳn, 0] . Repeated application of the chain rule and a direct calculation with (4) yields,
$$
\frac{\partial X_n}{\partial X_k} = \prod_{k<i\le n}\frac{\partial X_i}{\partial X_{i-1}}, \qquad
\frac{\partial X_i}{\partial X_{i-1}} = \begin{bmatrix} I + \Delta t B_{i-1} & \Delta t\, C_{i-1}\\ B_{i-1} & C_{i-1}\end{bmatrix}, \qquad (11)
$$
where $I$ is the identity matrix and
$$
B_{i-1} = \delta\Delta t\left(\mathrm{diag}(\sigma'(A_{i-1}))\,W - I\right), \qquad C_{i-1} = \delta\left(I + \Delta t\,\mathrm{diag}(\sigma'(A_{i-1}))\,\mathcal{W}\right). \qquad (12)
$$
It is straightforward to calculate, using the assumption (8), that $\|B_{i-1}\|_\infty < \eta$ and $\|C_{i-1}\|_\infty \le \eta + \delta$. Using the definitions of matrix norms and (8), we obtain:
$$
\left\|\frac{\partial X_i}{\partial X_{i-1}}\right\|_\infty \le \max\left(1 + \Delta t\left(\|B_{i-1}\|_\infty + \|C_{i-1}\|_\infty\right),\ \|B_{i-1}\|_\infty + \|C_{i-1}\|_\infty\right) \le \max\left(1 + \Delta t(\delta + 2\eta),\ \delta + 2\eta\right) \le 1 + 3\Delta t^r. \qquad (13)
$$
Therefore, using (11), we have
$$
\left\|\frac{\partial X_n}{\partial X_k}\right\|_\infty \le \prod_{k<i\le n}\left\|\frac{\partial X_i}{\partial X_{i-1}}\right\|_\infty \le \left(1 + 3\Delta t^r\right)^{n-k} \approx 1 + 3(n-k)\Delta t^r. \qquad (14)
$$
Note that we have used an expansion around 1 and neglected terms of O(∆t2r) as ∆t << 1. We remark that the bound (13) is the crux of our argument about gradient control as we see from the structure of the RNN that the recurrent matrices have close to unit norm. The detailed proof is presented in SM§E.3. As the entire gradient of the loss function (6), with respect to the weights and biases of the network, is bounded above in (9), the exploding gradient problem is mitigated for this RNN.
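The bound (13) can also be checked numerically. The sketch below (names are ours) assembles B_{i-1}, C_{i-1} from (12) for a given pre-activation A_{i-1} and returns the ∞-norm of the recurrent Jacobian in (11), which can then be compared against 1 + 3∆t^r.

```python
import numpy as np

def jacobian_inf_norm(W, W_z, A_prev, dt):
    """Infinity norm of dX_i/dX_{i-1} from (11)-(12) for a given pre-activation A_{i-1}."""
    m = W.shape[0]
    delta = 1.0 / (1.0 + dt)
    D = np.diag(1.0 / np.cosh(A_prev) ** 2)          # diag(sigma'(A_{i-1})) for sigma = tanh
    B = delta * dt * (D @ W - np.eye(m))
    C = delta * (np.eye(m) + dt * D @ W_z)
    J = np.block([[np.eye(m) + dt * B, dt * C], [B, C]])
    return np.abs(J).sum(axis=1).max()               # max row-sum = infinity norm

# usage: jacobian_inf_norm(W, W_z, A_prev, dt) <= 1 + 3 * dt**r whenever (8) holds.
```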
On the vanishing gradient problem. The vanishing gradient problem (Pascanu et al., 2013) arises if $\left|\frac{\partial E^{(k)}_n}{\partial\theta}\right|$, defined in (10), $\to 0$ exponentially fast in $k$, for $k \ll n$ (long-term dependencies). In that case, the RNN does not have long-term memory, as the contribution of the $k$-th hidden state to the error at time step $t_n$ is infinitesimally small. We already see from (14) that
$\left\|\frac{\partial X_n}{\partial X_k}\right\|_\infty \approx 1$ (independently of $k$). Thus, we should not expect the products in (10) to decay fast. In fact, we will provide a much more precise characterization of this gradient. To this end, we introduce the following order-notation,
$$
\begin{aligned}
&\beta = \mathcal{O}(\alpha), \ \text{for } \alpha, \beta \in \mathbb{R}_+, \ \text{if there exist constants } \underline{C}, \overline{C} \text{ such that } \underline{C}\alpha \le \beta \le \overline{C}\alpha,\\
&M = \mathcal{O}(\alpha), \ \text{for } M \in \mathbb{R}^{d_1\times d_2},\ \alpha \in \mathbb{R}_+, \ \text{if there exists a constant } C \text{ such that } \|M\| \le C\alpha.
\end{aligned} \qquad (15)
$$
For simplicity of notation, we will also set ȳn = un ≡ 0, for all n, b = 0 and r = 1 in (8) and we will only consider θ = Wi,j for some 1 ≤ i, j ≤ m in the following proposition.
Proposition 3.3 Let $y_n$ be the hidden states generated by the RNN (4). Under the assumption that $y^i_n = \mathcal{O}(\sqrt{t_n})$, for all $1 \le i \le m$, and (8), the gradient for long-term dependencies satisfies,
$$
\frac{\partial E^{(k)}_n}{\partial\theta} = \mathcal{O}\!\left(\hat{c}\,\delta\,\Delta t^{\frac{3}{2}}\right) + \mathcal{O}\!\left(\hat{c}\,\delta(1+\delta)\,\Delta t^{\frac{5}{2}}\right) + \mathcal{O}(\Delta t^3), \qquad \hat{c} = \mathrm{sech}^2\!\left(\sqrt{k\Delta t}\,(1+\Delta t)\right), \quad k \ll n. \qquad (16)
$$
This precise bound (16) on the gradient shows that although the gradient can be small, i.e. $\mathcal{O}(\Delta t^{\frac{3}{2}})$, it is in fact independent of $k$, ensuring that long-term dependencies contribute to gradients at much later steps and mitigating the vanishing gradient problem. The detailed proof is presented in SM§E.5.
Summarizing, we see that the RNN (3) indeed satisfied similar bounds to the underlying ODE (2) that resulted in upper bounds on the hidden states and its gradients. However, the lower bound on the gradient (16) is due to the specific choice of this discretization and does not appear to have a continuous analogue, making the specific choice of discretization of (2) crucial for mitigating the vanishing gradient problem.
4 EXPERIMENTS
We present results on a variety of learning tasks with coRNN (3) with n̄ = n − 1, as this version resulted in marginally better performance than the version with n̄ = n. Details of the training procedure for each experiment can be found in SM§B. We wish to clarify here that we use a straightforward hyperparameter tuning protocol based on a validation set and do not use additional performance enhancing tools, such as dropout (Srivastava et al., 2014), gradient clipping (Pascanu et al., 2013) or batch normalization (Ioffe & Szegedy, 2015), which might further improve the performance of coRNNs.
Adding problem. We start with the well-known adding problem (Hochreiter & Schmidhuber, 1997), proposed to test the ability of an RNN to learn (very) long-term dependencies. The input is a two-dimensional sequence of length T , with the first dimension consisting of random numbers drawn from U([0, 1]) and with two non-zero entries (both set to 1) in the second dimension, chosen at random locations, but one each in both halves of the sequence. The output is the sum of two numbers
Figure 1: Results of the adding problem for coRNN, expRNN, FastRNN, anti.sym. RNN and tanh RNN based on three different sequence lengths T, i.e. T = 500, T = 2000 and T = 5000 (test MSE vs. training steps in hundreds).
of the first dimension at positions, corresponding to the two 1 entries in the second dimension. We compare the proposed coRNN to three recently proposed RNNs, which were explicitly designed to learn LTDs, namely the FastRNN (Kusupati et al., 2018), the antisymmetric (anti.sym.) RNN (Chang et al., 2019) and the expRNN (Lezcano-Casado & Martínez-Rubio, 2019), and to a plain vanilla tanh RNN, with the goal of beating the baseline mean square error (MSE) of 0.167 (which stems from the variance of the baseline output 1). All methods have 128 hidden units (dimensionality of the hidden state y) and the same training protocol is used in all cases. Fig. 1 shows the results for different lengths T of the input sequences. We can see that while the tanh RNN is not able to beat the baseline for any sequence length, the other methods successfully learn the adding task for T = 500. However, in this case, coRNN converges significantly faster and reaches a lower test MSE than other tested methods. When setting the length to the much more challenging case of T = 2000, we see that only coRNN and the expRNN beat the baseline. However, the expRNN fails to reach a desired test MSE of 0.01 within training time. In order to further demonstrate the superiority of coRNN over recently proposed RNN architectures for learning LTDs, we consider the adding problem for T = 5000 and observe that coRNN converges very quickly even in this case, while expRNN fails to consistently beat the baseline. We thus conclude that the coRNN mitigates the vanishing/exploding gradient problem even for very long sequences.
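For reference, a minimal numpy sketch that generates adding-problem data of the kind described above (the helper name and batch layout are our assumptions):

```python
import numpy as np

def adding_problem_batch(batch_size, T, seed=0):
    """Inputs of shape (batch, T, 2); target is the sum of the two marked first-dimension entries."""
    rng = np.random.default_rng(seed)
    x = np.zeros((batch_size, T, 2))
    x[..., 0] = rng.uniform(0.0, 1.0, size=(batch_size, T))
    y = np.zeros(batch_size)
    for i in range(batch_size):
        i1 = rng.integers(0, T // 2)   # one marker in the first half of the sequence
        i2 = rng.integers(T // 2, T)   # one marker in the second half
        x[i, i1, 1] = x[i, i2, 1] = 1.0
        y[i] = x[i, i1, 0] + x[i, i2, 0]
    return x, y
```

Predicting the constant value 1 for every sequence yields the baseline MSE of roughly 0.167 quoted above.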
Sequential (permuted) MNIST. Sequential MNIST (sMNIST) (Le et al., 2015) is a benchmark for RNNs, in which the model is required to classify an MNIST (LeCun et al., 1998) digit one pixel at a time leading to a classification task with a sequence length of T = 784. In permuted sequential MNIST (psMNIST), a fixed random permutation is applied in order to increase the time-delay between interdependent pixels and to make the problem harder. In Table 1, we compare the test accuracy for coRNN on sMNIST and psMNIST with recently published best case results for other recurrent models, which were explicitly designed to solve long-term dependencies together with baselines corresponding to gated and unitary RNNs. To the best of our knowledge the proposed coRNN outperforms all single-layer recurrent architectures, published in the literature, for both the sMNIST and psMNIST. Moreover in Fig. 2, we present the performance (with respect to number of epochs) of different RNN architectures for psMNIST with the same fixed random permutation and the
same number of hidden units, i.e. 128. As seen from this figure, coRNN clearly outperforms the other architectures, some of which were explicitly designed to learn LTDs, handily for this permutation.
Noise padded CIFAR-10. Another challenging test problem for learning LTDs is the recently proposed noise padded CIFAR-10 experiment by Chang et al. (2019), in which CIFAR-10 data points (Krizhevsky et al., 2009) are fed to the RNN row-wise and flattened along the channels resulting in sequences of length 32. To test the long term memory, entries of uniform random numbers are added such that the resulting sequences have a length of 1000, i.e. the last 968 entries of each sequence are only noise to distract the network. Table 2 shows the result for coRNN together with other recently published best case results. We observe that coRNN readily outperforms other RNN architectures on this benchmark, while requiring only 128 hidden units.
Human activity recognition. This experiment is based on the human activity recognition data set provided by Anguita et al. (2012). The data set is a collection of tracked human activities, which were measured by an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. Six activities were binarized to obtain two merged classes {Sitting, Laying, Walking_Upstairs} and {Standing, Walking, Walking_Downstairs}, leading to the HAR-2 data set, which was first proposed in Kusupati et al. (2018). Table 3 shows the result for coRNN together with other very recently published best case results on the same data set. We can see that coRNN readily outperforms all other methods. We also ran this experiment on a tiny coRNN with very few parameters, i.e. only 1k. We can see that even in this case, the tiny coRNN beats all baselines. We thus conclude that coRNN can efficiently be used on resource-constrained IoT micro-controllers.
IMDB sentiment analysis. The IMDB data set (Maas et al., 2011) is a collection of 50k movie reviews, where 25k reviews are used for training (with 7.5k of these reviews used for validating) and 25k reviews are used for testing. The aim of this binary sentiment classification task is to decide whether a movie review is positive or negative. We follow the standard procedure by initializing the word embedding with pretrained 100d GloVe (Pennington et al., 2014) vectors and restrict the
dictionary to 25k words. Table 4 shows the results for coRNN and other recently published models, which are trained similarly and have the same number of hidden units, i.e. 128. We can see that coRNN compares favorably with gated baselines (which are known to perform very well on this task), while at the same time requiring significantly fewer parameters.
Further experimental results. To shed further light on the performance of coRNN, we consider the following issues. First, the theory suggested that coRNN mitigates the exploding/vanishing gradient problem as long as the assumptions (8) on the time step $\Delta t$ and weight matrices $W, \mathcal{W}$ hold. Clearly one can choose a suitable $\Delta t$ to enforce (8) before training, but do these assumptions remain valid during training? In SM§E.4, we argue, based on worst-case estimates, that the assumptions will remain valid for possibly a large number of training steps. More pertinently, we can verify experimentally that (8) holds during training. This is demonstrated in Fig. 3, where we show that (8) holds for all LTD tasks during training. Thus, the presented theory applies and one can expect control over hidden state gradients with coRNN. Next, we recall that the frequency parameter $\gamma$ and damping parameter $\epsilon$ play a role for coRNNs (see SM§F for the theoretical dependence and Table 8 for the best performing values of $\epsilon, \gamma$ for each numerical experiment within the range considered in Table 7). How sensitive is the performance of coRNN to the choice of these two parameters? To investigate this dependence, we focus on the noise padded CIFAR-10 experiment and show the results of an ablation study in Fig. 4, where the test accuracy for different coRNNs based on a two-dimensional hyperparameter grid $(\epsilon, \gamma) \in [0.8, 1.8] \times [5.7, 17.7]$ (i.e., sufficiently large intervals around the best performing values of $\epsilon, \gamma$ from Table 8) is plotted. We observe from the figure that although there are reductions in test accuracy for non-optimal values of $(\epsilon, \gamma)$, there is no large variation and the performance is rather robust with respect to these hyperparameters. Finally, note that we follow standard practice and present best reported results with coRNN as well as other competing RNNs in order to compare the relative performance. However, it is natural to investigate the dependence of these best results on the random initial (before training) values of the weight matrices. To this end, in Table 5 of SM, we report the mean and standard deviation (over 10 retrainings) of the test accuracy with coRNN on various learning tasks and find that the mean value is comparable to the best reported value, with low standard deviations. This indicates further robustness of the performance of coRNNs.
5 DISCUSSION
Inspired by many models in physics, biology and engineering, we proposed a novel RNN architecture (3) based on a model (1) of a network of controlled forced and damped oscillators. For this RNN, we rigorously showed that under verifiable hypotheses on the time step and weight matrices, the hidden states are bounded (5) and obtained precise bounds on the gradients (Jacobians) of the hidden states, (9) and (16). Thus by design, this architecture can mitigate the exploding and vanishing gradient problem (EVGP) for RNNs. We present a series of numerical experiments that include sequential image classification, activity recognition and sentiment analysis, to demonstrate that the proposed coRNN keeps hidden states and their gradients under control, while retaining sufficient expressivity to perform complex tasks. Thus, we provide a novel and promising strategy for designing RNN architectures that are motivated by the functioning of natural systems, have rigorous bounds on hidden state gradients and are robust, accurate, straightforward to train and cheap to evaluate.
This work can be extended in different directions. For instance in this article, we have mainly focused on the learning of tasks with long-term dependencies and observed that coRNNs are comparable in performance to the best published results in the literature. Given that coRNNs are built with networks of oscillators, it is natural to expect that they will perform very well on tasks with oscillatory inputs/outputs, such as the time series analysis of high-resolution biomedical data, for instance EEG (electroencephalography) and EMG (electromyography) data and seismic activity data from geoscience. This will be pursued in a follow-up article. Similarly, applications of coRNN to language modeling will be covered in future work.
However, it is essential to point out that coRNNs might not be suitable for every learning task involving sequential inputs/outputs. As a concrete example, we consider the problem of predicting time series corresponding to a chaotic dynamical system. We recall that by construction, the underlying ODE (2) (and the discretization (3)) do not allow for super-linear (in time) separation of trajectories for nearby inputs. Thus, we cannot expect that coRNNs will be effective at predicting chaotic time series and it is indeed investigated and demonstrated for a Lorenz-96 ODE in SM§A, where we observe that the coRNN is outperformed by LSTMs in the chaotic regime.
Our main theoretical focus in this paper was to demonstrate the possible mitigation of the exploding and vanishing gradient problem. On the other hand, we only provided some heuristics and numerical evidence on why the proposed RNN still has sufficient expressivity. A priori, it is natural to think that the proposed RNN architecture might introduce a strong bias towards oscillatory functions. However, as we argue in SM§C, the proposed coRNN can be significantly more expressive, as the damping, forcing and coupling of several oscillators modulates nonlinear response to yield a very rich and diverse set of output states. This is also evidenced by the ability of coRNNs to deal with many tasks in our numerical experiments, which do not have an explicit oscillatory structure. This sets the stage for a rigorous investigation of universality of the proposed coRNN architecture, as in the case of echo state networks in Grigoryeva & Ortega (2018). A possible approach would be to leverage the ability of the proposed RNN to convert general inputs into a rich set of superpositions of harmonics (oscillatory wave forms). Moreover, the proposed RNN was based on the simplest model of coupled oscillators (1). Much more detailed models of oscillators are available, particularly those that arise in the modeling of biological neurons, Stiefel & Ermentrout (2016) and references therein. An interesting variant of our proposed RNN would be to base the RNN architecture on these more elaborate models, resulting in analogues of the spiking neurons model of Maass (2001) for RNNs.
Supplementary Material for:
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable
architecture for learning long time dependencies
A CHAOTIC TIME-SERIES PREDICTION.
According to proposition E.1, coRNN does not exhibit chaotic behavior by design. While this property is highly desirable for learning long-term dependencies (a slight perturbation of the input should not result in an unbounded perturbation of the prediction), it impairs the performance on tasks, where the network has to learn actual chaotic dynamics. To test this numerically, we consider the following version of the Lorenz 96 system: (Lorenz, 1996):
$$
x_j' = \left(x_{j+1} - x_{j-2}\right) x_{j-1} - x_j + F, \qquad (17)
$$
where $x_j \in \mathbb{R}$ for all $j = 1, \dots, 5$ (with cyclic indexing) and $F$ is an external force controlling the level of chaos in the system. Fig. 5 shows a trajectory of the system (17) plotted on the $x_1 x_2$-plane for a small external
force of F = 0.9 as well as a trajectory for a large external force of F = 8. We can see that while for F = 0.9 the system does not exhibit chaotic behavior, the dynamics for F = 8 is already highly chaotic.
Our task consists of predicting the 25-th next state of a trajectory of the system (17). We provide 128 trajectories of length 2000 for each of the training, validation and test sets. The trajectories are generated by numerically solving the system (17) and evaluating it at 2000 equidistantly distributed discrete time points with distance 0.01. The initial value for each trajectory is chosen uniform at random on [F − 1/2, F + 1/2]5 around the equilibrium point (F, . . . , F ) of the system (17). Since LSTMs are known to be able to produce chaotic dynamics, even in the autonomous (zero-entry) case (Laurent & von Brecht, 2017), we expect them to perform significantly better than coRNN if the underlying system exhibits strong chaotic behavior. Table 6 shows the normalized root mean square error (NRMSE) (RMSE divided by the root mean square of the target trajectory) on the test set for coRNN and LSTM. We can see that indeed for the non-chaotic case of using an external force of F = 0.9 LSTM and coRNN perform similarly. However, when the dynamics get chaotic (in this case using an external force of F = 8), the LSTM clearly outperforms coRNN.
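A minimal sketch of how such trajectories can be generated is given below; the text only states that (17) is solved numerically with step size 0.01, so the explicit Euler integrator and function names here are our own illustrative choices.

```python
import numpy as np

def lorenz96_rhs(x, F):
    """Right-hand side of (17) with cyclic indexing, here for 5 variables."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def lorenz96_trajectory(F, n_steps=2000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = F + rng.uniform(-0.5, 0.5, size=5)   # start near the equilibrium (F, ..., F)
    traj = np.empty((n_steps, 5))
    for n in range(n_steps):
        x = x + dt * lorenz96_rhs(x, F)      # explicit Euler step (illustrative choice)
        traj[n] = x
    return traj
```

Setting F = 0.9 produces the non-chaotic regime and F = 8 the strongly chaotic regime discussed above.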
B TRAINING DETAILS
The IMDB task was conducted on an NVIDIA GeForce GTX 1080 Ti GPU, while all other experiments were run on an Intel Xeon E3-1585Lv5 CPU. The weights and biases of coRNN are randomly initialized according to $\mathcal{U}\!\left(-\frac{1}{\sqrt{n_{\mathrm{in}}}}, \frac{1}{\sqrt{n_{\mathrm{in}}}}\right)$, where $n_{\mathrm{in}}$ denotes the input dimension of each affine transformation. Instead of treating the parameters $\Delta t, \gamma$ and $\epsilon$ as fixed hyperparameters, we can also treat them as trainable network parameters, by constraining $\Delta t$ to $[0, 1]$ using a sigmoidal activation function and $\epsilon, \gamma > 0$ by the use of ReLU, for instance. However, in this case no major difference in performance is obtained. The hyperparameters are optimized with a random search algorithm, where the results of the best performing coRNN (based on the validation set) are reported. The ranges of the hyperparameters for the random search algorithm are provided in Table 7. Table 8 shows the rounded hyperparameters of the best performing coRNN architecture resulting from the random search algorithm for each learning task. We used 100 training epochs for sMNIST, psMNIST and noise padded CIFAR-10, with additional 20 epochs in which the learning rate was reduced by a factor of 10. Additionally, we used 100 epochs for the IMDB task and 250 epochs for the HAR-2 task.
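The stated initialization can be written down directly; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def init_uniform(n_out, n_in, rng=None):
    """Initialize a weight matrix of shape (n_out, n_in) as U(-1/sqrt(n_in), 1/sqrt(n_in))."""
    rng = rng if rng is not None else np.random.default_rng(0)
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_out, n_in))
```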
C HEURISTICS OF NETWORK FUNCTION
At the level of a single neuron, the dynamics of the RNN is relatively straightforward. We start with the scalar case, i.e. $m = d = 1$, and illustrate different hidden states $y$ as a function of time, for different input signals, in Fig. 6. In this figure, we consider two different input signals: one oscillatory signal given by $u(t) = \cos(4t)$ and the other a combination of step functions. First, we plot the solution $y(t)$ of (1), with the parameters $V, b, W, \mathcal{W}, \epsilon = 0$ and $\gamma = 1$. This simply corresponds to the case of a simple harmonic oscillator (SHO) and the solution is described by a sine wave with the natural frequency of the oscillator. Next, we introduce forcing by the input signal by setting $V = 1$ with the identity activation function $\sigma(x) = x$, leading to a forced damped oscillator (FDO). As seen from Fig. 6, in the case of an oscillatory signal, this leads to a very minor change over the SHO,
whereas for the step function, the change is only in the amplitude of the wave. Next, we add damping by setting $\epsilon = 0.25$ and see that the resulting forced damped oscillator (FDO) merely damps the amplitude of the waves, without changing their frequency. Then, we consider the case of a controlled oscillator (CFDO) by setting $W = -2$, $V = 2$, $b = 0.25$, $\mathcal{W} = 0.75$. As seen from Fig. 6, this leads to a significant change in the wave form in both cases. For the oscillatory input, the output is now a superposition of many different forms, with different amplitudes and frequencies (phases), whereas for the step function input, the phase is shifted. Already, we can see that for a linear controlled oscillator, the output can be very complicated with the superposition of different waves. This holds true when the activation function is set to $\sigma(x) = \tanh(x)$ (which is our proposed coRNN). For both inputs, the output is a modulated version of the one generated by CFDO, expressed as a superposition of waves. On the other hand, we also plot the solution with a Duffing-type oscillator (DUFF) by setting the activation function as,
$$
\sigma(x) = x - \frac{x^3}{3}. \qquad (18)
$$
In this case, the solution is very different from the CFDO and coRNN solutions and is heavily damped (either in the output or its derivative). On the other hand, given the chaotic nature of the dynamical system in this case, a slight change in the parameters led to the output blowing up. Thus, a bounded nonlinearity seems essential in this context.
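These scalar experiments are straightforward to reproduce qualitatively by integrating (1) for m = d = 1 with the explicit update from (3); the sketch below uses the parameter values quoted above for the controlled oscillator and the oscillatory input u(t) = cos(4t) (the function name and defaults are our assumptions). Passing sigma = lambda x: x recovers the linear CFDO case.

```python
import numpy as np

def scalar_oscillator(T=20.0, dt=0.01, W=-2.0, W_z=0.75, V=2.0, b=0.25,
                      gamma=1.0, eps=0.25, sigma=np.tanh):
    """Trajectory y(t) of a single controlled oscillator (1) driven by u(t) = cos(4t)."""
    n = int(T / dt)
    y, z = 0.0, 0.0
    ys = np.empty(n)
    for k in range(n):
        u = np.cos(4.0 * k * dt)
        z = z + dt * (sigma(W * y + W_z * z + V * u + b) - gamma * y - eps * z)
        y = y + dt * z
        ys[k] = y
    return ys
```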
Coupling neurons together further accentuates this generation of superpositions of different waveforms, as seen even with the simplest case of a network with two neurons, shown in Fig. 6 (Bottom row). For this figure, we consider two neurons, i.e m = 2 and two different network topologies. For the first, we only allow the first neuron to influence the second one and not vice versa. This is enforced with the weight matrices,
$$
W = \begin{bmatrix} -2 & 0\\ 3 & -2\end{bmatrix}, \qquad \mathcal{W} = \begin{bmatrix} 0.75 & 0\\ -1 & 0.75\end{bmatrix}.
$$
We also set $V = [2, 2]^\top$, $b = [0.25, 0.25]^\top$. Note that in this case (which we name ORD, for ordered connections), the output of the first neuron is exactly the same as in the uncoupled (UC) case, whereas there is a distinct change in the output of the second neuron: the first neuron has induced a sharp change in the resulting output wave form. This is well illustrated by the emergence of an approximation to the step function (Bottom Right of Fig. 6), even though the input signal is oscillatory.
Next, we consider the case of fully connected (FC) neurons by setting the weight matrices as,
$$
W = \begin{bmatrix} -2 & 1\\ 3 & -2\end{bmatrix}, \qquad \mathcal{W} = \begin{bmatrix} 0.75 & 0.3\\ -1 & 0.75\end{bmatrix}.
$$
The resulting outputs for the first neuron are now slightly different from the uncoupled case. On the the other hand, the approximation of step function output for the second neuron is further accentuated.
Even these simple examples illustrate the functioning of a network of controlled oscillators well. The input signal is converted into a superposition of waves with different frequencies and amplitudes, with these quantities being controlled by the weights and biases in (1). Thus, very complicated outputs can be generated by modulating the number, frequencies and amplitudes of the waves. In practice, a network of a large number of neurons is used and can lead to extremely rich global dynamics, along the lines of emergence of synchronization or bistable heterogeneous behavior seen in systems of idealized oscillators and explained by their mean field limit, see H. Sakaguchi & Kuramoto (1987); Winfree (1967); Strogatz (2001). Thus, we argue that the ability of the network of (forced, driven) oscillators to access a very rich set of output states can lead to high expressivity of the system. The training process selects the weights that modulate frequencies, phases and amplitudes of individual neurons and their interaction to guide the system to its target output.
D BOUNDS ON THE DYNAMICS OF THE ORDINARY DIFFERENTIAL EQUATION (1)
In this section, we present bounds that show how the continuous time dynamics of the ordinary differential equation (2), modeling non-linear damped and forced networks of oscillators, is constrained. We start with the following estimate on the energy of the solutions of the system (2).
Proposition D.1 Let $y(t), z(t)$ be the solutions of the ODE system (2) at any time $t \in [0, T]$, assume that the damping parameter $\epsilon \ge \frac{1}{2}$ and that the initial data for (2) are given by $y(0) = z(0) \equiv 0$. Then, the solutions are bounded as,
$$
y(t)^\top y(t) \le \frac{m t}{\gamma}, \qquad z(t)^\top z(t) \le m t, \qquad \forall\, t \in (0, T]. \qquad (19)
$$
To prove this proposition, we multiply the first equation in (2) with $y(t)^\top$ and the second equation in (2) with $\frac{1}{\gamma}\, z(t)^\top$ to obtain,
$$
\frac{d}{dt}\left(\frac{y(t)^\top y(t)}{2} + \frac{z(t)^\top z(t)}{2\gamma}\right) = \frac{z(t)^\top\sigma(A(t))}{\gamma} - \frac{\epsilon}{\gamma}\, z(t)^\top z(t), \qquad (20)
$$
with $A(t) = W y(t) + \mathcal{W} z(t) + V u(t) + b$.
Using the elementary Cauchy’s inequality repeatedly in (20) results in,
$$
\frac{d}{dt}\left(\frac{y(t)^\top y(t)}{2} + \frac{z(t)^\top z(t)}{2\gamma}\right) \le \frac{\sigma(A)^\top\sigma(A)}{2\gamma} + \frac{1}{\gamma}\left(\frac{1}{2} - \epsilon\right) z^\top z \le \frac{m}{2\gamma} \quad \left(\text{as } |\sigma| \le 1 \text{ and } \epsilon \ge \tfrac{1}{2}\right).
$$
Integrating the above inequality over the time interval [0, t] and using the fact that the initial data are y(0) = z(0) ≡ 0, we obtain the bounds (19). The above proposition and estimate (19) clearly demonstrate that the dynamics of the network of coupled non-linear oscillators (1) is bounded. The fact that the nonlinear activation function σ = tanh is uniformly bounded in its arguments played a crucial role in deriving the energy bound (19). A straightforward adaptation of this argument leads to the following proposition about the sensitivity of the system to inputs,
Proposition D.2 Let y(t), z(t) be the solutions of the ODE system (2) with respect to the input signal u(t). Let ȳ(t), z̄(t) be the solutions of the ODE system (2), but with respect to the input signal ū(t). Assume that the damping parameter $\epsilon \geq \frac{1}{2}$ and that the initial data are given by y(0) = z(0) = ȳ(0) = z̄(0) ≡ 0. Then we have the following bound,
$$(y(t) - \bar{y}(t))^\top (y(t) - \bar{y}(t)) \leq \frac{4mt}{\gamma}, \qquad (z(t) - \bar{z}(t))^\top (z(t) - \bar{z}(t)) \leq 4mt, \qquad \forall t \in (0, T]. \qquad (21)$$
Thus, from the bound (21), there can be at most linear separation (in time) between the trajectories of the ODE (2) for different input signals. Hence, chaotic behavior, which is characterized by the (super-)exponential separation of trajectories, is ruled out by the structure of the ODE system (2). Note that this property of the ODE system is primarily a result of the uniform boundedness of the activation function σ. Using a different activation function such as ReLU might make it possible to obtain an exponential separation of trajectories, which is a prerequisite for a chaotic dynamical system.
D.1 GRADIENT DYNAMICS FOR THE ODE SYSTEM (2)
Let θ denote the (i, j)-th entry of one of the weight matrices $W, \mathcal{W}, V$ or the i-th entry of the bias vector b. We are interested in how the gradients of the hidden state y (and the auxiliary hidden state z) with respect to the parameter θ vary with time. Note that these gradients are precisely the objects of interest in the training of an RNN based on a discretization of the ODE system (2). To this end, we differentiate (2) with respect to the parameter θ and denote
$$y_\theta(t) = \frac{\partial y}{\partial \theta}(t), \qquad z_\theta(t) = \frac{\partial z}{\partial \theta}(t),$$
to obtain,
$$y_\theta' = z_\theta, \qquad z_\theta' = \mathrm{diag}(\sigma'(A))\left[W y_\theta + \mathcal{W} z_\theta\right] + Z^{i,j}_{m,\bar{m}}(A)\,\rho - \gamma\, y_\theta - \epsilon\, z_\theta. \qquad (22)$$
As introduced before, $Z^{i,j}_{m,\bar{m}}(A) \in \mathbb{R}^{m \times \bar{m}}$ is a matrix whose elements are all zero except for the (i, j)-th entry, which is set to $\sigma'(A(t))_i$, i.e. the i-th entry of $\sigma'(A)$, and we have,
$\rho = y,\ \bar{m} = m$, if $\theta = W_{i,j}$,
$\rho = z,\ \bar{m} = m$, if $\theta = \mathcal{W}_{i,j}$,
$\rho = u,\ \bar{m} = d$, if $\theta = V_{i,j}$,
$\rho = 1,\ \bar{m} = 1$, if $\theta = b_i$.
We see from (22) that the ODEs governing the gradients with respect to the parameter θ also represent a system of oscillators, but with additional coupling and forcing terms proportional to the hidden states y, z or the input signal u. As we have already proved with estimate (19) that the hidden states are always bounded, and the input signal is assumed to be bounded, it is natural to expect that the gradients of the states with respect to θ are also bounded. We make this statement explicit in the following proposition, in which, for simplicity of exposition, we consider the case $\theta = W_{i,j}$, as the other choices of θ behave very similarly.
Proposition D.3 Let $\theta = W_{i,j}$ and let y, z be the solutions of the ODE system (2). Assume that the weights and the damping parameter satisfy
$$\|W\|_\infty + \|\mathcal{W}\|_\infty \leq \epsilon,$$
then we have the following bounds on the gradients,
$$y_\theta(t)^\top y_\theta(t) + \frac{1}{\gamma}\, z_\theta(t)^\top z_\theta(t) \leq \left[ y_\theta(0)^\top y_\theta(0) + \frac{1}{\gamma}\, z_\theta(0)^\top z_\theta(0) \right] e^{Ct} + \frac{m t^2}{2\gamma^2}, \qquad t \in (0, T],$$
$$C = \max\left\{ \frac{\|W\|_1}{\gamma},\ 1 + \|\mathcal{W}\|_1 \right\}. \qquad (23)$$
The proof of this proposition follows exactly along the same lines as the proof of proposition D.1 and we skip the details, while noting the crucial role played by the energy bound (19).
We remark that the bound (23) indicates that as long as the initial gradients with respect to θ are bounded and the weights are controlled by the damping parameter, the hidden state gradients remain bounded in time.
E SUPPLEMENT TO THE RIGOROUS ANALYSIS OF CORNN
In this section, we supplement the rigorous analysis of the proposed RNN (4), starting with the proof of Proposition 3.1.
E.1 PROOF OF PROPOSITION 3.1
We multiply $(y_{n-1}^\top, z_n^\top)$ to (3) and use the elementary identities,
$$a^\top(a - b) = \frac{a^\top a}{2} - \frac{b^\top b}{2} + \frac{1}{2}(a - b)^\top(a - b), \qquad b^\top(a - b) = \frac{a^\top a}{2} - \frac{b^\top b}{2} - \frac{1}{2}(a - b)^\top(a - b),$$
to obtain the following,
$$\begin{aligned}
\frac{y_n^\top y_n + z_n^\top z_n}{2} &= \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \frac{(y_n - y_{n-1})^\top(y_n - y_{n-1})}{2} - \frac{(z_n - z_{n-1})^\top(z_n - z_{n-1})}{2} \\
&\quad + \Delta t\, z_n^\top \sigma(A_{n-1}) - \Delta t\, z_n^\top z_n \\
&\leq \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \Delta t\left(\frac{1}{2} + \frac{\Delta t}{2} - 1\right) z_n^\top z_n + \frac{\Delta t}{2}\, \sigma(A_{n-1})^\top \sigma(A_{n-1}) \\
&\leq \frac{y_{n-1}^\top y_{n-1} + z_{n-1}^\top z_{n-1}}{2} + \frac{m \Delta t}{2} \qquad (\text{as } \sigma^2 \leq 1 \text{ and } \Delta t \ll 1).
\end{aligned}$$
Iterating the above inequality n times leads to the energy bound,
$$y_n^\top y_n + z_n^\top z_n \leq y_0^\top y_0 + z_0^\top z_0 + n\, m\, \Delta t = m\, t_n, \qquad (24)$$
as $y_0 = z_0 = 0$.
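As a sanity check of the discrete energy bound (24), the sketch below iterates the coRNN recurrence in the form (26) (with γ = ε = 1) for a random network and random bounded inputs, and monitors the ratio of $y_n^\top y_n + z_n^\top z_n$ to $m t_n$. All specific choices (weights, inputs, dimensions, step size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, N, dt = 16, 3, 1000, 1e-3
W, Wc = rng.normal(size=(m, m)), rng.normal(size=(m, m))
V, b  = rng.normal(size=(m, d)), rng.normal(size=m)

y, z = np.zeros(m), np.zeros(m)
max_ratio = 0.0
for n in range(1, N + 1):
    u_n = rng.uniform(-1.0, 1.0, size=d)              # any bounded input sequence
    A = W @ y + Wc @ z + V @ u_n + b
    z = (z + dt * (np.tanh(A) - y)) / (1.0 + dt)      # implicit treatment of the damping term
    y = y + dt * z
    t_n = n * dt
    max_ratio = max(max_ratio, (y @ y + z @ z) / (m * t_n))
print("max of (y_n^T y_n + z_n^T z_n) / (m t_n):", max_ratio, "(bound (24) predicts <= 1)")
```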
E.2 SENSITIVITY TO INPUTS
Next, we examine how changes in the input signal u affect the dynamics. We have the following proposition:
Proposition E.1 Let $y_n, z_n$ be the hidden states of the trained RNN (4) with respect to the input $u = \{u_n\}_{n=1}^N$ and let $\bar{y}_n, \bar{z}_n$ be the hidden states of the same RNN (4), but with respect to the input $\bar{u} = \{\bar{u}_n\}_{n=1}^N$. Then the differences in the hidden states are bounded by,
$$(y_n - \bar{y}_n)^\top (y_n - \bar{y}_n) + (z_n - \bar{z}_n)^\top (z_n - \bar{z}_n) \leq 4\, m\, t_n. \qquad (25)$$
The proof of this proposition is completely analogous to the proof of Proposition 3.1: we subtract
$$\bar{y}_n = \bar{y}_{n-1} + \Delta t\, \bar{z}_n, \qquad \bar{z}_n = \frac{\bar{z}_{n-1}}{1 + \Delta t} + \frac{\Delta t}{1 + \Delta t}\, \sigma(\bar{A}_{n-1}) - \frac{\Delta t}{1 + \Delta t}\, \bar{y}_{n-1}, \qquad \bar{A}_{n-1} := W \bar{y}_{n-1} + \mathcal{W} \bar{z}_{n-1} + V \bar{u}_n + b, \qquad (26)$$
from (4) and multiply $\left((y_n - \bar{y}_n)^\top, (z_n - \bar{z}_n)^\top\right)$ to the difference. The estimate (25) then follows identically to the proof of (5) (presented above) by realizing that $|\sigma(A_{n-1}) - \sigma(\bar{A}_{n-1})| \leq 2$. Note that the bound (25) ensures that the hidden states can only separate linearly in time for changes in the input. Thus, chaotic behavior, such as for Duffing-type oscillators, characterized by at least exponential separation of trajectories, is ruled out for this proposed RNN, showing that it is stable with respect to changes in the input. This is largely on account of the fact that the activation function σ in (3) is globally bounded.
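The input-sensitivity bound (25) can be checked in the same way: run the recurrence twice with two different bounded input sequences and compare the squared difference of the hidden states against $4 m t_n$. As before, the weights, inputs and step size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, N, dt = 16, 3, 2000, 5e-4
W, Wc = rng.normal(size=(m, m)), rng.normal(size=(m, m))
V, b  = rng.normal(size=(m, d)), rng.normal(size=m)

def step(y, z, u):
    """One step of the coRNN recurrence (26) with gamma = eps = 1."""
    A = W @ y + Wc @ z + V @ u + b
    z = (z + dt * (np.tanh(A) - y)) / (1.0 + dt)
    y = y + dt * z
    return y, z

y, z = np.zeros(m), np.zeros(m)
yb, zb = np.zeros(m), np.zeros(m)
worst = 0.0
for n in range(1, N + 1):
    u  = rng.uniform(-1.0, 1.0, size=d)     # first input sequence
    ub = rng.uniform(-1.0, 1.0, size=d)     # a different input sequence
    y, z = step(y, z, u)
    yb, zb = step(yb, zb, ub)
    diff = (y - yb) @ (y - yb) + (z - zb) @ (z - zb)
    worst = max(worst, diff / (4 * m * n * dt))
print("max squared state difference over 4 m t_n:", worst, "(bound (25) predicts <= 1)")
```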
E.3 PROOF OF PROPOSITION 3.2
From (6), we readily calculate that,
$$\frac{\partial E_n}{\partial X_n} = \left[ y_n - \bar{y}_n,\ 0 \right]. \qquad (27)$$
Similarly from (3), we calculate,
$$\frac{\partial^+ X_k}{\partial \theta} = \begin{cases}
\left[ \left( \frac{\Delta t^2}{1+\Delta t}\, Z^{i,j}_{m,m}(A_{k-1})\, y_{k-1} \right)^\top, \left( \frac{\Delta t}{1+\Delta t}\, Z^{i,j}_{m,m}(A_{k-1})\, y_{k-1} \right)^\top \right]^\top & \text{if } \theta \text{ is the } (i,j)\text{-th entry of } W, \\[6pt]
\left[ \left( \frac{\Delta t^2}{1+\Delta t}\, Z^{i,j}_{m,m}(A_{k-1})\, z_{k-1} \right)^\top, \left( \frac{\Delta t}{1+\Delta t}\, Z^{i,j}_{m,m}(A_{k-1})\, z_{k-1} \right)^\top \right]^\top & \text{if } \theta \text{ is the } (i,j)\text{-th entry of } \mathcal{W}, \\[6pt]
\left[ \left( \frac{\Delta t^2}{1+\Delta t}\, Z^{i,j}_{m,d}(A_{k-1})\, u_{k} \right)^\top, \left( \frac{\Delta t}{1+\Delta t}\, Z^{i,j}_{m,d}(A_{k-1})\, u_{k} \right)^\top \right]^\top & \text{if } \theta \text{ is the } (i,j)\text{-th entry of } V, \\[6pt]
\left[ \left( \frac{\Delta t^2}{1+\Delta t}\, Z^{i,1}_{m,1}(A_{k-1}) \right)^\top, \left( \frac{\Delta t}{1+\Delta t}\, Z^{i,1}_{m,1}(A_{k-1}) \right)^\top \right]^\top & \text{if } \theta \text{ is the } i\text{-th entry of } b,
\end{cases} \qquad (28)$$
where $Z^{i,j}_{m,\bar{m}}(A_{k-1}) \in \mathbb{R}^{m \times \bar{m}}$ is a matrix whose elements are all zero except for the (i, j)-th entry, which is set to $\sigma'(A_{k-1})_i$, i.e. the i-th entry of $\sigma'(A_{k-1})$. We easily see that $\|Z^{i,j}_{m,\bar{m}}(A_{k-1})\|_\infty \leq 1$ for all $i, j, m, \bar{m}$ and all choices of $A_{k-1}$.
Now, using the definitions of matrix and vector norms and applying (14) in (10), together with (27) and (28), we obtain the following estimate on the norm:
$$\left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| \leq \begin{cases}
(\|y_n\|_\infty + \|\bar{y}_n\|_\infty)\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t\, \|y_{k-1}\|_\infty & \text{if } \theta \text{ is an entry of } W, \\
(\|y_n\|_\infty + \|\bar{y}_n\|_\infty)\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t\, \|z_{k-1}\|_\infty & \text{if } \theta \text{ is an entry of } \mathcal{W}, \\
(\|y_n\|_\infty + \|\bar{y}_n\|_\infty)\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t\, \|u_k\|_\infty & \text{if } \theta \text{ is an entry of } V, \\
(\|y_n\|_\infty + \|\bar{y}_n\|_\infty)\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t & \text{if } \theta \text{ is an entry of } b.
\end{cases} \qquad (29)$$
We estimate the above term only for the case where θ is an entry of W; the remaining cases can be estimated very similarly.
For simplicity of notation, we let $k - 1 \approx k$ and aim to estimate the term,
$$\begin{aligned}
\left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| &\leq \|y_n\|_\infty \|y_k\|_\infty (1 + 3(n-k)\Delta t^r)\,\delta \Delta t + \|\bar{y}_n\|_\infty \|y_k\|_\infty (1 + 3(n-k)\Delta t^r)\,\delta \Delta t \\
&\leq m\sqrt{nk}\,\Delta t\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t + \|\bar{y}_n\|_\infty \sqrt{mk}\,\sqrt{\Delta t}\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t \qquad (\text{by } (5)) \\
&\leq m\sqrt{nk}\,\delta \Delta t^2 + 3m\sqrt{nk}\,(n-k)\,\delta \Delta t^{r+2} + \|\bar{y}_n\|_\infty \sqrt{mk}\,\sqrt{\Delta t}\,(1 + 3(n-k)\Delta t^r)\,\delta \Delta t.
\end{aligned} \qquad (30)$$
To further analyze the above estimate, we recall that $n\Delta t = t_n \leq 1$ and consider two different regimes. Let us start by considering short-term dependencies by letting $k \approx n$, i.e. $n - k = c$ with a constant $c \sim \mathcal{O}(1)$, independent of $n, k$. In this case, a straightforward application of the above assumptions in the bound (30) yields,
$$\begin{aligned}
\left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| &\leq m\sqrt{nk}\,\delta \Delta t^2 + 3m\sqrt{nk}\,(n-k)\,\delta \Delta t^{r+2} + \|\bar{y}_n\|_\infty \sqrt{m}\sqrt{t_n}\,\delta \Delta t + \|\bar{y}_n\|_\infty \sqrt{m}\sqrt{t_n}\, c\,\delta \Delta t^{r+1} \\
&\leq m\, t_n\,\delta \Delta t + m\, c\, t_n\,\delta \Delta t^{r+1} + \|\bar{y}_n\|_\infty \sqrt{m}\sqrt{t_n}\,\delta \Delta t + \|\bar{y}_n\|_\infty \sqrt{m}\sqrt{t_n}\, c\,\delta \Delta t^{r+1} \\
&\leq t_n\, m\,\delta \Delta t + \|\bar{y}_n\|_\infty \sqrt{m}\sqrt{t_n}\,\delta \Delta t \qquad (\text{for } \Delta t \ll 1 \text{ as } r \geq 1/2) \\
&\leq m\,\delta \Delta t + \|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t.
\end{aligned} \qquad (31)$$
Next, we consider long-term dependencies by setting $k \ll n$ and estimating,
$$\begin{aligned}
\left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| &\leq m\sqrt{nk}\,\delta \Delta t^2 + 3m\sqrt{nk}\,(n-k)\,\delta \Delta t^{r+2} + \|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty \sqrt{m}\, n\,\delta \Delta t^{r+\frac{3}{2}} \\
&\leq m\sqrt{t_n}\,\delta \Delta t^{\frac{3}{2}} + 3m\, t_n^{\frac{3}{2}}\,\delta \Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty \sqrt{m}\, t_n\,\delta \Delta t^{r+\frac{1}{2}} \\
&\leq m\,\delta \Delta t^{\frac{3}{2}} + 3m\,\delta \Delta t^{r+\frac{1}{2}} + \|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t^{\frac{3}{2}} + 3\|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t^{r+\frac{1}{2}} \qquad (\text{as } t_n < 1) \\
&\leq 3m\,\delta \Delta t^{r+\frac{1}{2}} + 3\|\bar{y}_n\|_\infty \sqrt{m}\,\delta \Delta t^{r+\frac{1}{2}} \qquad (\text{as } r \leq 1 \text{ and } \Delta t \ll 1).
\end{aligned} \qquad (32)$$
Thus, in all cases, we have that,
$$\left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| \leq 3\,\delta \Delta t \left( m + \sqrt{m}\,\|\bar{y}_n\|_\infty \right) \qquad (\text{as } r \geq 1/2). \qquad (33)$$
Applying the above estimate in (10) allows us to bound the gradient by,
$$\left| \frac{\partial E_n}{\partial \theta} \right| \leq \sum_{1 \leq k \leq n} \left| \frac{\partial E_n^{(k)}}{\partial \theta} \right| \leq 3\,\delta\, t_n \left( m + \sqrt{m}\,\|\bar{y}_n\|_\infty \right). \qquad (34)$$
Therefore, the gradient of the loss function (6) can be bounded as,
$$\begin{aligned}
\left| \frac{\partial E}{\partial \theta} \right| &\leq \frac{1}{N} \sum_{n=1}^N \left| \frac{\partial E_n}{\partial \theta} \right| \\
&\leq 3\delta \left[ \frac{m \Delta t}{N} \sum_{n=1}^N n + \frac{\sqrt{m}\,\Delta t}{N} \sum_{n=1}^N \|\bar{y}_n\|_\infty\, n \right] \\
&\leq 3\delta \left[ \frac{m \Delta t}{N} \sum_{n=1}^N n + \frac{\sqrt{m}\,\bar{Y}\,\Delta t}{N} \sum_{n=1}^N n \right] \\
&\leq \frac{3}{2}\,\delta\,(N+1)\,\Delta t \left( m + \bar{Y}\sqrt{m} \right) \\
&\leq \frac{3}{2}\,\delta\,(t_N + \Delta t) \left( m + \bar{Y}\sqrt{m} \right) \\
&\leq \frac{3}{2}\,\delta\,(1 + \Delta t) \left( m + \bar{Y}\sqrt{m} \right) \qquad (\text{as } t_N = 1) \\
&\leq \frac{3}{2} \left( m + \bar{Y}\sqrt{m} \right),
\end{aligned} \qquad (35)$$
which is the desired estimate (9).
E.4 ON THE ASSUMPTION (8) AND TRAINING
Note that all of the above estimates were based on the fact that we were able to choose a time step Δt in (3) that enforces the condition (8). For any fixed weights $W, \mathcal{W}$, we can indeed choose such a value of Δt to satisfy (8). However, we train the RNN to find the weights that minimize the loss function (6). Can we find a hyperparameter Δt such that (8) is satisfied at every step of the stochastic gradient descent method used for training?
To investigate this issue, we consider a simple gradient descent method of the form:
$$\theta_{\ell+1} = \theta_\ell - \zeta\, \frac{\partial E}{\partial \theta}(\theta_\ell). \qquad (36)$$
Note that ζ is the constant (non-adapted) learning rate. We assume for simplicity that θ0 = 0 (other choices lead to the addition of a constant). Then, a straightforward estimate on the weight is given by,
$$\begin{aligned}
|\theta_{\ell+1}| &\leq |\theta_\ell| + \zeta \left| \frac{\partial E}{\partial \theta}(\theta_\ell) \right| \leq |\theta_\ell| + \zeta\, \frac{3}{2} \left( m + \bar{Y}\sqrt{m} \right) \qquad (\text{by } (35)) \\
&\leq |\theta_0| + \ell\, \zeta\, \frac{3}{2} \left( m + \bar{Y}\sqrt{m} \right) = \ell\, \zeta\, \frac{3}{2} \left( m + \bar{Y}\sqrt{m} \right).
\end{aligned} \qquad (37)$$
In order to calculate the minimum number of steps L in the gradient descent method (36) such that the condition (8) is satisfied, we set ℓ = L in (37); applying it to the condition (8) leads to the straightforward estimate,
$$L \geq \frac{1}{\zeta\, \frac{3}{2}\left( m + \bar{Y}\sqrt{m} \right) m\, \Delta t^{1-r}\, \delta}. \qquad (38)$$
Note that the parameter δ < 1, while in general, the learning rate ζ << 1. Thus, as long as r ≤ 1, we see that the assumption (8) holds for a large number of steps of the gradient descent method. We remark that the above estimate (38) is a large underestimate on L. In the experiments presented in this article, we are able to take a very large number of training steps, while the gradients remain within a range (see Fig. 3).
E.5 PROOF OF PROPOSITION 3.3
We start with the following decomposition of the recurrent matrices:
$$\frac{\partial X_i}{\partial X_{i-1}} = M_{i-1} + \Delta t\, \widetilde{M}_{i-1}, \qquad M_{i-1} := \begin{bmatrix} I & \Delta t\, C_{i-1} \\ B_{i-1} & C_{i-1} \end{bmatrix}, \qquad \widetilde{M}_{i-1} := \begin{bmatrix} B_{i-1} & 0 \\ 0 & 0 \end{bmatrix},$$
with B,C defined in (12). By the assumption (8), one can readily check that ‖M̃i−1‖∞ ≤ ∆t, for all k ≤ i ≤ n− 1. We will use an induction argument to show the following representation formula for the product of Jacobians,
$$\frac{\partial X_n}{\partial X_k} = \prod_{k < i \leq n} \frac{\partial X_i}{\partial X_{i-1}} = \begin{bmatrix} I & \Delta t \sum\limits_{j=k}^{n-1} \prod\limits_{i=j}^{k} C_i \\[8pt] B_{n-1} + \sum\limits_{j=n-2}^{k} \left( \prod\limits_{i=n-1}^{j+1} C_i \right) B_j & \prod\limits_{i=n-1}^{k} C_i \end{bmatrix} + \mathcal{O}(\Delta t). \qquad (39)$$
We start with the outermost product and calculate,
$$\frac{\partial X_n}{\partial X_{n-1}}\, \frac{\partial X_{n-1}}{\partial X_{n-2}} = \left( M_{n-1} + \Delta t\, \widetilde{M}_{n-1} \right)\left( M_{n-2} + \Delta t\, \widetilde{M}_{n-2} \right) = M_{n-1} M_{n-2} + \Delta t\left( \widetilde{M}_{n-1} M_{n-2} + M_{n-1} \widetilde{M}_{n-2} \right) + \mathcal{O}(\Delta t^2).$$
By direct multiplication, we obtain,
$$M_{n-1} M_{n-2} = \begin{bmatrix} I & \Delta t \left( C_{n-2} + C_{n-1} C_{n-2} \right) \\ B_{n-1} + C_{n-1} B_{n-2} & C_{n-1} C_{n-2} \end{bmatrix} + \Delta t \begin{bmatrix} C_{n-1} B_{n-2} & 0 \\ 0 & B_{n-1} C_{n-2} \end{bmatrix}.$$
Using the definitions in (12) and (8), we can easily see that
$$\begin{bmatrix} C_{n-1} B_{n-2} & 0 \\ 0 & B_{n-1} C_{n-2} \end{bmatrix} = \mathcal{O}(\Delta t).$$
Similarly, it is easy to show that
$$\widetilde{M}_{n-1} M_{n-2},\ M_{n-1} \widetilde{M}_{n-2} \sim \mathcal{O}(\Delta t).$$
Plugging in all the above estimates yields,
$$\frac{\partial X_n}{\partial X_{n-1}}\, \frac{\partial X_{n-1}}{\partial X_{n-2}} = \begin{bmatrix} I & \Delta t \left( C_{n-2} + C_{n-1} C_{n-2} \right) \\ B_{n-1} + C_{n-1} B_{n-2} & C_{n-1} C_{n-2} \end{bmatrix} + \mathcal{O}(\Delta t^2),$$
which is exactly the form of the leading term (39).
Iterating the above calculations (n− k) times and realizing that (n− k)∆t2 ≈ n∆t2 = tn∆t yields the formula (39).
Recall that we have set $\theta = W_{i,j}$, for some $1 \leq i, j \leq m$, in Proposition 3.3. Directly calculating with (27), (28) and the representation formula (39) yields the formula,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = y_n^\top\, \Delta t^2 \delta\, Z^{i,j}_{m,m}(A_{k-1})\, y_{k-1} + y_n^\top\, \Delta t^2 \delta\, C^*\, Z^{i,j}_{m,m}(A_{k-1})\, y_{k-1} + \mathcal{O}(\Delta t^3), \qquad (40)$$
with the matrix $C^*$ defined as,
$$C^* := \sum_{j=k}^{n-1} \prod_{i=j}^{k} C_i,$$
and $Z^{i,j}_{m,m}(A_{k-1}) \in \mathbb{R}^{m \times m}$ is a matrix whose elements are all zero except for the (i, j)-th entry, which is set to $\sigma'(a^i_{k-1})$, i.e. the i-th entry of $\sigma'(A_{k-1})$.
Note that the formula (40) can be explicitly written as,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = \delta \Delta t^2\, \sigma'(a^i_{k-1})\, y^i_n\, y^j_{k-1} + \delta \Delta t^2\, \sigma'(a^i_{k-1}) \sum_{\ell=1}^m C^*_{\ell i}\, y^\ell_n\, y^j_{k-1} + \mathcal{O}(\Delta t^3), \qquad (41)$$
with $y^j_n$ denoting the j-th element of the vector $y_n$, and
$$a^i_{k-1} := \sum_{\ell=1}^m W_{i\ell}\, y^\ell_{k-1} + \sum_{\ell=1}^m \mathcal{W}_{i\ell}\, z^\ell_{k-1}. \qquad (42)$$
By the assumption (8), we can readily see that $\|W\|_\infty, \|\mathcal{W}\|_\infty \leq 1 + \Delta t$. Therefore, by the fact that $\sigma' = \mathrm{sech}^2$, the assumption $y^i_k = \mathcal{O}(\sqrt{t_k})$ and (42), we obtain,
$$\hat{c} = \mathrm{sech}^2\!\left( \sqrt{k \Delta t}\,(1 + \Delta t) \right) \leq \sigma'(a^i_{k-1}) \leq 1. \qquad (43)$$
Using (43) in (41), we obtain,
$$\delta \Delta t^2\, \sigma'(a^i_{k-1})\, y^i_n\, y^j_{k-1} = \mathcal{O}\!\left( \hat{c}\, \delta\, \Delta t^{\frac{5}{2}} \right). \qquad (44)$$
Using the definition of $C_i$, we can expand the product in $C^*$ and neglect terms of order $\mathcal{O}(\Delta t^4)$ to obtain
$$\prod_{i=j}^{k} C_i = \left( \mathcal{O}(1) + \mathcal{O}\!\left( (j - k + 1)\, \delta \Delta t^2 \right) \right) I.$$
Summing over j and using the fact that $k \ll n$, we obtain that
$$C^* = \left( \mathcal{O}(n) + \mathcal{O}(\delta) \right) I. \qquad (45)$$
Plugging (45) and (43) into (41) leads to,
$$\delta \Delta t^2\, \sigma'(a^i_{k-1}) \sum_{\ell=1}^m C^*_{\ell i}\, y^\ell_n\, y^j_{k-1} = \mathcal{O}\!\left( \hat{c}\, \delta\, \Delta t^{\frac{3}{2}} \right) + \mathcal{O}\!\left( \hat{c}\, \delta^2\, \Delta t^{\frac{5}{2}} \right). \qquad (46)$$
Combining (44) and (46) yields the desired estimate (16).
Remark. A careful examination of the above proof reveals that the constants hidden in the prefactors of the leading term $\mathcal{O}(\hat{c}\,\delta\,\Delta t^{3/2})$ of (16) stem from the formula (46). Here, we have used the assumption that $y^i_k = \mathcal{O}(\sqrt{t_k})$. Note that this implicitly assumes that the energy bound (5) is equidistributed among all the elements of the vector $y_k$ and results in the obfuscation of the constants in the leading term of (16). Given that the energy bound (5) is too coarse to allow for precise upper and lower bounds on each individual element of the hidden state vector $y_k$, we do not see any other way of, in general, determining the distribution of energy among individual entries of the hidden state vector. Thus, assuming equidistribution seems reasonable. On the other hand, in practice, one has access to all the terms in formula (46) for each numerical experiment and, if one is interested, one can directly evaluate the precise bound on the leading term of the formula (16).
F RIGOROUS ESTIMATES FOR THE RNN (3) WITH n̄ = n − 1 AND GENERAL VALUES OF ε, γ
In this section, we will provide rigorous estimates, similar to those of Propositions 3.1, E.1 and 3.2, for the version of coRNN (3) that results from setting n̄ = n − 1 in (3), leading to,
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = z_{n-1} + \Delta t\, \sigma\!\left( W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b \right) - \Delta t\, \gamma\, y_{n-1} - \Delta t\, \epsilon\, z_{n-1}. \qquad (47)$$
Note that (47) can be equivalently written as,
$$y_n = y_{n-1} + \Delta t\, z_n, \qquad z_n = (1 - \epsilon \Delta t)\, z_{n-1} + \Delta t\, \sigma\!\left( W y_{n-1} + \mathcal{W} z_{n-1} + V u_n + b \right) - \Delta t\, \gamma\, y_{n-1}. \qquad (48)$$
We will also consider the case of non-unit values of the control parameters γ and ε below.
Bounds on the hidden states. We start with the following bound on the hidden states of (47).
Proposition F.1 Let the damping parameter $\epsilon > \frac{1}{2}$ and let the time step Δt in the RNN (47) satisfy the following condition,
$$\Delta t < \frac{2\epsilon - 1}{\gamma + \epsilon^2}. \qquad (49)$$
Let $y_n, z_n$ be the hidden states of the RNN (47) for $1 \leq n \leq N$; then the hidden states satisfy the following (energy) bounds:
$$y_n^\top y_n + \frac{1}{\gamma}\, z_n^\top z_n \leq \frac{m\, t_n}{\gamma}. \qquad (50)$$
We set $A_{n-1} = W y_{n-1} + \mathcal{W} z_{n-1} + V u_{n-1} + b$ and, as in the proof of Proposition 3.1, we multiply $(y_{n-1}^\top, \frac{1}{\gamma} z_n^\top)$ to (47), use the elementary identities and rearrange terms to obtain,
$$\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} &= \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top (y_n - y_{n-1})}{2} - \frac{(z_n - z_{n-1})^\top (z_n - z_{n-1})}{2\gamma} \\
&\quad + \frac{\Delta t}{\gamma}\, z_n^\top \sigma(A_{n-1}) - \frac{\epsilon \Delta t}{\gamma}\, z_n^\top z_n + \frac{\epsilon \Delta t}{\gamma}\, z_n^\top (z_n - z_{n-1}).
\end{aligned}$$
We use a rescaled version of the well-known Cauchy inequality,
$$ab \leq \frac{c\, a^2}{2} + \frac{b^2}{2c},$$
for a constant c > 0 to be determined, to rewrite the above identity as,
$$\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} &\leq \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{(y_n - y_{n-1})^\top (y_n - y_{n-1})}{2} \\
&\quad + \left( \frac{\epsilon \Delta t}{2 c \gamma} - \frac{1}{2\gamma} \right) (z_n - z_{n-1})^\top (z_n - z_{n-1}) + \frac{\Delta t}{2\gamma}\, \sigma(A_{n-1})^\top \sigma(A_{n-1}) \\
&\quad + \left( \frac{\Delta t}{2\gamma} + \frac{c\, \epsilon \Delta t}{2\gamma} - \frac{\epsilon \Delta t}{\gamma} \right) z_n^\top z_n.
\end{aligned}$$
Using the first equation in (47), the above inequality reduces to,
$$\begin{aligned}
\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} &\leq \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \left( \frac{\epsilon \Delta t}{2 c \gamma} - \frac{1}{2\gamma} \right) (z_n - z_{n-1})^\top (z_n - z_{n-1}) \\
&\quad + \frac{\Delta t}{2\gamma}\, \sigma(A_{n-1})^\top \sigma(A_{n-1}) + \left( \frac{\Delta t^2}{2} + \frac{\Delta t}{2\gamma} + \frac{c\, \epsilon \Delta t}{2\gamma} - \frac{\epsilon \Delta t}{\gamma} \right) z_n^\top z_n.
\end{aligned}$$
As long as,
$$\Delta t \leq \min\left( \frac{c}{\epsilon},\ \frac{(2 - c)\epsilon - 1}{\gamma} \right), \qquad (51)$$
we can easily check that,
$$\frac{y_n^\top y_n}{2} + \frac{z_n^\top z_n}{2\gamma} \leq \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{\Delta t}{2\gamma}\, \sigma(A_{n-1})^\top \sigma(A_{n-1}) \leq \frac{y_{n-1}^\top y_{n-1}}{2} + \frac{z_{n-1}^\top z_{n-1}}{2\gamma} + \frac{m \Delta t}{2\gamma} \qquad (\text{as } \sigma^2 \leq 1).$$
Iterating the above bound down to n = 0 and using the zero initial data yields the desired bound (50), as long as we can find a c such that the condition (51) is satisfied. To do so, we equalize the two terms on the right hand side of (51) to obtain,
$$c = \frac{\epsilon\,(2\epsilon - 1)}{\gamma + \epsilon^2}.$$
From the assumption (49) and the fact that $\epsilon > \frac{1}{2}$, we see that such a c > 0 always exists for any value of γ > 0 and that (51) is satisfied, which completes the proof.
We remark that the same bound on the hidden states is obtained for both versions of coRNN, i.e. (3) with n̄ = n and (47). However, the difference lies in the constraint on the time step Δt. In contrast to (49), a careful examination of the proof of Proposition 3.1 reveals that the condition on the time step for the stability of (3) with n̄ = n is given by,
$$\Delta t < \frac{2\epsilon - 1}{\gamma}, \qquad (52)$$
and is clearly less stringent than the condition (51) for the stability of (47). For instance, in the prototypical case of γ = ε = 1, the stability of (3) with n̄ = n is ensured for any Δt < 1. On the other hand, the stability of (47) is ensured as long as Δt < 1/2. However, it is essential to recall that these conditions are only sufficient to ensure stability and are by no means necessary. Thus, in practice, the coRNN version (47) is found to be stable in the same range of time steps as the version (3) with n̄ = n.
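For a quick numerical comparison of the two sufficient stability conditions, the helper below evaluates the time-step bounds (52) and (49); the parameter values in the loop are arbitrary examples, and for γ = ε = 1 it reproduces the thresholds 1 and 1/2 quoted above.

```python
def dt_bound_nbar_n(eps, gamma):
    """Sufficient time-step bound (52) for coRNN (3) with n̄ = n."""
    return (2 * eps - 1) / gamma

def dt_bound_nbar_nm1(eps, gamma):
    """Sufficient time-step bound (49) for the variant (47) with n̄ = n - 1."""
    return (2 * eps - 1) / (gamma + eps ** 2)

for eps, gamma in [(1.0, 1.0), (0.75, 1.0), (1.0, 2.0)]:
    print(f"eps={eps}, gamma={gamma}: "
          f"dt < {dt_bound_nbar_n(eps, gamma):.3f} for version (3), "
          f"dt < {dt_bound_nbar_nm1(eps, gamma):.3f} for version (47)")
```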
On the exploding and vanishing gradient problems for coRNN (47). Next, we have the following upper bound on the hidden state gradients for the version (47) of coRNN.
Proposition F.2 Let $y_n, z_n$ be the hidden states generated by the RNN (47). We assume that the damping parameter $\epsilon > \frac{1}{2}$ and that the time step Δt can be chosen such that, in addition to (51), it also satisfies,
$$\max\left\{ \Delta t\,(\gamma + \|W\|_\infty),\ \Delta t\,\|\mathcal{W}\|_\infty \right\} = \eta \leq \tilde{C}\, \Delta t^r, \qquad \frac{1}{2} \leq r \leq 1, \qquad (53)$$
with the constant $\tilde{C}$ independent of the other parameters of the RNN (47). Then the gradient of the loss function E (6) with respect to any parameter θ ∈ Θ is bounded as,
$$\left| \frac{\partial E}{\partial \theta} \right| \leq \frac{3\, \tilde{C} \left( m + \bar{Y}\sqrt{m} \right)}{2\gamma}, \qquad (54)$$
with the constant $\tilde{C}$ defined in (53) and $\bar{Y} = \max_{1 \leq n \leq N} \|\bar{y}_n\|_\infty$ a bound on the underlying training data.
The proof of this proposition is completely analogous to the proof of proposition 3.2 and we omit the details here.
Note that the bound (54) enforces that hidden state gradients cannot explode for version (47) of coRNN. A similar statement for the vanishing gradient problem is inferred from the proposition below.
Proposition F.3 Let $y_n$ be the hidden states generated by the RNN (47). Under the assumption that $y^i_n = \mathcal{O}\!\left(\sqrt{\tfrac{t_n}{\gamma}}\right)$, for all $1 \leq i \leq m$, and (53), the gradient for long-term dependencies satisfies,
$$\frac{\partial E_n^{(k)}}{\partial \theta} = \mathcal{O}\!\left( \frac{\hat{c}}{\gamma}\, \Delta t^{\frac{3}{2}} \right) + \mathcal{O}\!\left( \frac{\hat{c}}{\gamma}\, \delta\,(1 + \delta)\, \Delta t^{\frac{5}{2}} \right) + \mathcal{O}(\Delta t^3), \qquad \hat{c} = \mathrm{sech}^2\!\left( \sqrt{k \Delta t}\,(1 + \Delta t) \right), \qquad k \ll n. \qquad (55)$$
The proof is a repetition of the steps of the proof of Proposition 3.3, with suitable modifications for the structure of the RNN and non-unit ε, γ, and we omit the tedious calculations here. Note that (55) rules out the vanishing gradient problem for the coRNN version (47). | 1. What is the focus of the paper, and what are its contributions to the field?
2. What are the strengths of the paper, particularly regarding its theoretical analysis and experimental validation?
3. Are there any concerns or limitations regarding the proposed model, such as its ability to handle complex tasks or its comparability to other state-of-the-art models?
4. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content?
5. What are some potential applications of the proposed coRNN model in real-world scenarios, such as biomedical signal processing or natural language processing? | Review | Review
Firstly, this paper conducts a rigorous analysis of the coRNN via formula deduction to verify the bounds. Then the coRNN is shown to mitigate the exploding and vanishing gradient problem, and this is also validated in a series of experiments. In addition, the performance of the coRNN is comparable to or better than state-of-the-art models. This paper provides a new idea to address the exploding and vanishing gradient problem, which hinders the development of deeper neural networks tremendously. In my opinion, this coRNN model is meaningful for practical application, especially for the extension to more complicated neural networks.
Besides, for biomedical signals with high temporal resolution (e.g., electroencephalogram, electromyogram), the coRNN model can be a good alternative in future work. Furthermore, the efficiency of the proposed model is also demonstrated rigorously by the mathematical formulation and by experiments ranging from purely synthetic tasks designed to learn long-term dependencies to more realistic tasks. Considering the whole structure of this paper, I argue that the presentation is clear and logical. Different from the recently published literature, this paper makes explicit use of networks of oscillators with the underlying biological motivation, so it shows originality to some extent. To sum up, the quality of this paper is suitable for publication in ICLR 2021.
There are two main pros in this paper: 1. The theoretical verification is clear and rigorous; readers can easily gain a good understanding of the bounds this paper proves by following the formula derivations. Specifically, this paper demonstrates in theory how to avoid the exploding and vanishing gradient problem for the RNN.
2. The experiments are quite extensive; experimental results show that the coRNN can not only avoid the exploding and vanishing gradient problem, but also achieve better performance with fewer parameters compared to recent studies.
But some cons should also be noted. Firstly, an illustration of the proposed coRNN should be presented in the paper to make it more comprehensible. Secondly, in the related work section, when mentioning similar works, it would be better to describe their main differences from and connections with this paper more specifically. Lastly, in the discussion, the practical significance of the proposed coRNN should be emphasized in more detail.
ICLR | Title
Image Segmentation using Transfer Learning with DeepLabv3 to Facilitate Photogrammetric Limb Scanning
Abstract
In this paper, we explore the use of deep learning (DL) in conjunction with photogrammetry for scanning amputated limbs. Combining these two technologies can expand the scope of prosthetic telemedicine by facilitating low-cost limb scanning using cell phones. Previous research identified image segmentation as one of the main limitations of using photogrammetry for limb scanning. Based on those limitations, this work sought to answer two main research questions: (1) Can a neural network be trained to identify and segment an amputated limb automatically? (2) Will segmenting 2D limb images using neural networks impact the accuracy of 3D models generated via photogrammetry? To answer the first question, transfer learning was applied to a neural network with the DeepLabv3 architecture. After training, the model was able to successfully identify and segment limb images with an IoU of 79.9%. To answer the second question, the fine-tuned DL model was applied to a dataset of 22 scans comprising 6312 limb images, then 3D models were rendered utilizing Agisoft Metashape. The Mean Absolute Error (MAE) of models rendered from images segmented with DL was 0.57 mm ± 0.63 mm when compared to models rendered from ground truth images. These results are important because segmentation with DL makes photogrammetry for limb scanning feasible on a large clinical scale. Future work should focus on generalizing the segmentation model for different types of amputations and imaging conditions.
1 INTRODUCTION
Rehabilitative care for persons with limb loss is rapidly evolving due to advances in digital healthcare technologies. Novel digital workflows are empowering clinicians with tools for visualizing patient anatomy and physiology, designing custom fitting prostheses via computer aided design (CAD), building assistive devices with computer aided manufacturing (CAM), and tracking patient response in environments such as virtual reality (VR) Cabrera et al. (2021). Medical imaging technologies are fundamental to every digital workflow because they inform clinicians of limb geometry, surface and/or sub-surface features, plus pathology of amputated limbs Paxton et al. (2022).
Systematic reviews by Cabrera et al. (2021) and Paxton et al. (2022) identified photogrammetry as a promising technology for capturing patient surface anatomy. The main advantage of photogrammetric scanning is that models can be rendered using photographs captured via smartphones Cabrera et al. (2020); Barbero-Garcı́a et al. (2018); De Vivo Nicoloso et al. (2021); R. B. Taqriban et al. (2019); Ismail et al. (2020); Barbero-Garcı́a et al. (2020; 2021). Scanning with smartphones is significantly cheaper than other medical imaging modalities Cabrera et al. (2021); Paxton et al. (2022) and results in reliable and robust surface accuracy on par with existing clinical gold standard technologies Nightingale et al. (2020; 2021). Unfortunately, photogrammetry workflows often require extensive image segmentation, at the expense of human operator time and effort, in order to render 3D models Cabrera et al. (2021).
Segmentation is an important problem in medical imaging and involves separating regions of interest (ROIs) from the rest of an acquired image. Convolutional neural networks (CNNs) are regarded as the dominant state-of-the-art approach for medical image segmentation in applications requiring high accuracy Kumar et al. (2020); Wang et al. (2022). Deep convolutional neural networks (DCNNs), such as DeepLabv3, are able to achieve high IoU performance when classifying pixels and outperform other CNN architectures Chen et al. (2017b). Using transfer learning, it is possible to fine-tune pre-trained deep neural networks with instances from the target domain Zhuang et al. (2020). Transfer learning is crucial to medical imaging because in many cases it is not possible to collect sufficient training data Kumar et al. (2020); Wang et al. (2022); Zhuang et al. (2020).
We hypothesize that automating image segmentation via DeepLabv3 then rendering the segmented images using photogrammetry could create an efficient processing pipeline for scanning amputated limbs with smartphones. Automatic segmentation of limb photographs would allow for more photographs to be taken during the scanning procedure thus increasing the sampling density. With these additional photographs, it would be possible to improve the coverage and accuracy of 3D models. Finally, segmentation could help correct for motion of the limb during the scanning procedure. These potential benefits would allow photogrammetric limb scanning to be used on a larger scale to reach more patients in the clinic and remotely via telemedicine.
2 BACKGROUND
2.1 PHOTOGRAMMETRY FOR MEDICAL IMAGING
In its simplest form, photogrammetry is the science of measuring the size and shape of objects using images Fryer (1996). In this context, photogrammetry has been used extensively since the 1950's for medical imaging and measurement Newton & Mitchell (1996). The development of digital cameras and the accompanying transition to digital photogrammetry has led to technologies for the reconstruction of 3D models using photogrammetric algorithms Linder (2016). Digital photogrammetry has been used successfully to reconstruct patient anatomy in many medical contexts such as cranial deformation scanning Barbero-Garcı́a et al. (2017; 2020; 2021), facial scanning Ross et al. (2018); Nightingale et al. (2020; 2021), and amputated limb scanning R. B. Taqriban et al. (2019); Cabrera et al. (2020); Ismail et al. (2020); De Vivo Nicoloso et al. (2021).
Two values for accuracy are commonly reported for photogrammetric models: Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). RMSE values will always be greater than or equal to MAE values and are more susceptible to outliers, but are recommended when errors are unbiased and follow a normal distribution Chai & Draxler (2014). Close range photogrammetric approaches have been proven to have accuracy comparable to clinical gold standard technologies Ross et al. (2018).
Using an Artec Spider structured light scanner as a clinical ”gold standard” reference, Nightingale et al. (2020) found that photogrammetric reconstructions of facial scans using 80 images captured had RMSE accuracy values of 1.3 mm ± 0.3 mm. In a similar study, Nightingale et al. (2021) achieved RMSE accuracy values of 1.5 mm ± 0.4 mm with 30 photographs on reconstructions of the external ear. Using spherical objects with known geometry as a reference, Barbero-Garcı́a et al. (2018) was able to achieve an MAE accuracy of 0.3 mm ± 0.2 mm using 95 images with tie point aids. In later research, these authors were able to achieve similar accuracy while scanning infant skulls with MAE accuracy values of 0.5 mm ± 0.4 mm and 200 images Barbero-Garcı́a et al. (2020).
While the accuracy of photogrammetry for anatomical scanning is very good, workflows involving photogrammetry require a great deal of human input, often taking hours Barbero-Garcı́a et al. (2017); Cabrera et al. (2021). For this reason, recent research has focused on various methods for automating photogrammetric workflows for anatomical scanning Barbero-Garcı́a et al. (2020); Cabrera et al. (2020). Photogrammetric models are rendered following acquisition (not in real time), thus errors in the image acquisition stage may not become evident until after a patient is scanned and no longer present. Automated approaches have focused their attention on this acquisition stage to ensure completeness of the results Nocerino et al. (2017), with recent advances incorporating machine learning for landmark detection Barbero-Garcı́a et al. (2021).
Still, automation of photogrammetric image processing (specifically image segmentation) remains a large problem Cabrera et al. (2021). Automating this image segmentation step could dramatically increase the speed of photogrammetric workflows for medical imaging, improving the clinical viability.
2.2 IMAGE SEGMENTATION WITH DEEP LEARNING
Medical image segmentation plays an essential role in modern clinical practice by facilitating computer aided diagnoses and making patient anatomical structures clear Wang et al. (2022). Prior to recent developments in deep learning (DL), computer vision techniques such as k-nearest neighbors (kNN), decision trees (DT), support vector machines (SVM), and random forest (RF) models were utilized for segmentation and classification tasks Thanh Noi & Kappas (2017) Mahony et al. (2019). DL has superseded all of these approaches in medical imaging for several reasons, but most notably because the burden of feature engineering in DL shifts from humans to computers Shen et al. (2017).
CNNs are the most heavily researched DL architectures for medical image analysis J. Ker et al. (2018). CNN architectures are composed of several fundamental building blocks: Convolution Layers, Pooling Layers, Activation Functions, Fully Connected Layers, and Loss Functions Alzubaidi et al. (2021). Convolution Layers are the most significant portion of CNNs and comprise a collection of convolutional filters (referred to as kernels). Input images are convolved using these filters to create an output feature map. Pooling Layers sub-sample the feature maps effectively shrinking them to create smaller feature maps. They also enlarge the receptive fields of neurons in subsequent layers allowing for translation invariance of the architecture. Activation Functions introduce non-linearity into the neural network and map inputs to outputs by selectively choosing to fire neurons. Fully Connected Layers are typically located at the end of the CNN architecture, with inputs from the final convolution or pooling layer, and are used as the CNN classifier. Loss functions are typically utilized in the output layer to calculate the predicted error across training samples Alzubaidi et al. (2021); Goodfellow et al. (2016); Arif (2020).
The DeepLabv3 architecture expands on traditional CNNs by utilizing atrous convolution with spatial pyramid pooling Chen et al. (2017b). Atrous convolution utilizes a dilated convolution kernel that, rather than focusing on adjacent pixels, convolves pixels that are not immediate neighbors. Atrous convolution helps to resolve the issue of spatial resolution loss in feature maps from successive convolution and de-convolution steps by allowing for precise control of feature map density. Atrous Spatial Pyramid Pooling (ASPP) Chen et al. (2017a) utilizes four parallel atrous convolutions with different atrous rates to transform feature maps. This allows for accurate and efficient classification regions at arbitrary scales Chen et al. (2017b). These elements of the DeepLabv3 architecture result in better classification of pixels for semantic image segmentation, on par with other state-of-the-art models.
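To illustrate the atrous (dilated) convolution used by DeepLabv3, the short PyTorch sketch below applies a 3x3 convolution with several dilation rates to a dummy feature map; the tensor sizes and rates are illustrative and are not taken from the DeepLabv3 configuration used later in this paper.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)  # a dummy 3-channel feature map

for rate in (1, 2, 4):
    # padding = rate keeps the spatial size fixed for a 3x3 kernel,
    # while the effective receptive field grows to (2*rate + 1) x (2*rate + 1)
    conv = nn.Conv2d(3, 8, kernel_size=3, dilation=rate, padding=rate)
    y = conv(x)
    print(f"dilation={rate}: output shape {tuple(y.shape)}, "
          f"effective kernel span {2 * rate + 1}x{2 * rate + 1}")
```

ASPP simply runs several such branches with different rates in parallel and concatenates their outputs, which is what allows features to be pooled at multiple scales without losing spatial resolution.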
CNNs are prepared for medical imaging tasks via three major techniques: 1) Training CNNs from scratch using large datasets of labeled images. 2) Using ”off-the-shelf” features without retraining the CNN to complement hand crafted features. 3) Pre-training the CNN on natural or medical images, then fine-tuning on target images (i.e. transfer learning) Shin et al. (2016). Transfer learning is commonly used to reduced the computational time and cost of training a neural network from scratch Mahony et al. (2019); Wang et al. (2022); Shen et al. (2017). In medical imaging tasks, transfer learning has been shown to be as good as training CNNs from scratch and better in many cases Tajbakhsh et al. (2016); Shin et al. (2016). Recent research has shown great success in using transfer learning with the DeepLabv3 architecture for medical imaging segmentation Roy et al. (2020).
3 METHODS
3.1 TRANSFER LEARNING USING DEEPLABV3
We performed transfer learning on a pre-trained resnet-101 based DeepLabv3 model downloaded from the Pytorch Model Zoo Paszke et al. (2019). This model was pre-trained on the COCO train2017 data set comprising 200,000 images and 80 object categories Lin et al. (2015). Our finetuning dataset consisted of 806 manually labeled images of limbs with transtibial amputations. This data was not augmented. These photographs were taken in a variety of conditions (e.g. different angles, indoor, outdoor, bare skin, with liners etc.). Images were downsampled from 3072 x 4096 to 300 x 400. Following this, we randomly select 80% of the images as our training data set and reserve the remaining 20% for validation.
We used Google Colab for training the model with dynamically scaled computer resources. In terms of the hyperparameters in fine-tuning, we used the Adam algorithm Kingma & Ba (2017) for optimizing the parameters (e.g. weights) in the neural network with the default learning rate being set to 10−5. We employed the pixel-wise Mean Squared Error (MSE) as our loss function. We trained our fine-tuned neural network for 10 epochs using a batch size of 8 within each epoch.
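A minimal sketch of the fine-tuning setup described above is given below, using the COCO-pretrained DeepLabv3 (ResNet-101) model from torchvision, the Adam optimizer with a learning rate of 1e-5, a pixel-wise MSE loss, 10 epochs and a batch size of 8. The dataset wrapper (LimbDataset), the directory name and the sigmoid-then-MSE formulation of the pixel-wise loss are assumptions made for illustration; newer torchvision releases replace the pretrained flag with a weights argument.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision.models.segmentation import deeplabv3_resnet101

# LimbDataset is a hypothetical placeholder: it should yield (image, mask) pairs with
# images resized to 300 x 400 and masks as single-channel float tensors in {0, 1}.
from limb_data import LimbDataset  # hypothetical module

dataset = LimbDataset("labeled_limb_images/")            # 806 annotated photographs
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet101(pretrained=True)             # COCO-pretrained backbone + head
model.classifier[4] = nn.Conv2d(256, 1, kernel_size=1)   # single-channel limb mask
if model.aux_classifier is not None:
    model.aux_classifier[4] = nn.Conv2d(256, 1, kernel_size=1)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.MSELoss()                                  # pixel-wise MSE, as in the text

for epoch in range(10):
    model.train()
    running = 0.0
    for images, masks in train_loader:
        images, masks = images.to(device), masks.to(device)
        logits = model(images)["out"]                     # main segmentation output
        loss = criterion(torch.sigmoid(logits), masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running += loss.item()
    print(f"epoch {epoch + 1}: mean training loss {running / len(train_loader):.4f}")
```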
3.2 SCANNING WORKFLOW
We captured 24 limb scans to test the applicability of the neural network model for image segmentation. 12 scans were performed with a single transtibial amputee wearing a standard liner then 12 additional scans were performed with the transtibial amputee wearing a colored sock over their standard liner. Scans were performed utilizing the Lim(b)itless smartphone application Cabrera et al. (2020) on a Motorola Moto G7 (MSRP $299, Camera - 12 MP, f/1.8, 1/2.8”) see Fig. 1.
The scanning procedure involved making two revolutions (one clockwise, one counterclockwise) around the outside of the limb while keeping the limb in frame of the application. Scans were taken at a constant radius of 30 cm from the limb, similar to Barbero-Garcı́a et al. (2018), at a rotation rate of two revolutions per minute. This application automatically initiates a capture sequence at 10 degree intervals with each capture sequence including a burst of 3 photographs. For two full revolutions, this captures a hyper-redundant 216 images (minimum) per scan as recommended by Barbero-Garcı́a et al. (2020). Actual scanning time averaged 52.4 s ± 6.1 s while the actual number of images captured averaged 287 ± 29, Table 1.
3.3 RENDERING PROCEDURE
All segmentation and rendering was performed on a desktop server running an Intel(R) Xeon(R) CPU E5-1607 v3 3.10 GHz, Nvidia Quadro k2200 4 GB DDR5, and 256 GB DDR4 RAM 1866 MHz. Of the 24 limb scans acquired in the previous step, two scans were manually segmented via Adobe Photoshop and set aside as ground truth references. The remaining 22 scans, totaling 6312 images, were fed into the fine-tuned neural network for segmentation. Segmentation time using the neural network averaged 890 s ± 111 s, Table 1.
3D limb models were rendered from these segmented images using Agisoft Metashape, Fig. 2. Agisoft is the most commonly used photogrammetric software in medical imaging being used by
Cabrera et al. (2020); Barbero-Garcı́a et al. (2018; 2017); Nightingale et al. (2020; 2021). This step of the workflow was automated with no manual tie point alignment or point cloud segmentation. The resulting STL meshes were not smoothed prior to export, unlike Ross et al. (2018); Nightingale et al. (2020; 2021). This Laplacian smoothing step was foregone to evaluate the impact of segmentation method on model rendering.
Finally, STL meshes of the scans rendered from automatically segmented images were compared with scans rendered from ground truth images via the CloudCompare open source 3D point cloud and mesh processing software. All limbs were cropped to the same boundaries, and outliers (visualized as extraneous floating points) were filtered out consistent with the technique used by Ross et al. (2018). Meshes were compared point by point via the cloud-to-mesh (C2M) measurement tool, with ground truth models being set as a reference.
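Given per-point cloud-to-mesh distances exported from CloudCompare (for example as a scalar field saved to a text file; the file name below is illustrative), the MAE and RMSE values reported in this work can be computed as follows.

```python
import numpy as np

# One distance value per vertex of the compared scan, exported from
# CloudCompare's C2M scalar field (file name is illustrative).
d = np.loadtxt("scan12_c2m_distances.txt")

mae = np.mean(np.abs(d))           # Mean Absolute Error, less sensitive to outliers
rmse = np.sqrt(np.mean(d ** 2))    # Root Mean Squared Error, always >= MAE
print(f"MAE  = {mae:.2f} mm")
print(f"RMSE = {rmse:.2f} mm")
```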
4 RESULTS AND DISCUSSION
4.1 SEGMENTATION VIA TRANSFER LEARNING
Fig. 3 and Table 2 show the evolution of the fine tuned neural network over the transfer learning process. After 10 epochs of training, the fine tuned neural network showed excellent performance in segmenting images of transtibial amputations, see Fig. 4.
Looking at the training IoU curve, it can be observed that the training IoU performance increases with model epochs. This is to be expected as the training loss curve simultaneously decreases, reaching a local minimum at epoch 9. The descent rate of the loss function gradually slows as the number of epochs increases. Given this observation, training the model past 10 epochs would likely lead to over-fitting.
The testing loss function mirrors the training loss, reaching a local minimum at epoch 8. However, the testing IoU remains largely constant, with a value of 79.9% after 10 epochs of training. This IoU is comparable with values reported by Chen et al. (2017b) which reached peak mIoU values of 81.3% on the Cityscapes test set and 85.7% on the PASCAL VOC 2012 test set with the DeepLabv3 architecture.
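A minimal way to compute the IoU reported above from a predicted probability map and a ground-truth binary mask is sketched below; the 0.5 threshold on the network output and the mask shapes are assumptions for illustration.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-Union of two binary masks of equal shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as a perfect match
    return np.logical_and(pred, gt).sum() / union

# Example with a predicted probability map thresholded at 0.5 (assumed threshold)
probs = np.random.rand(300, 400)
pred = probs > 0.5
gt = np.zeros((300, 400), dtype=bool)
gt[100:250, 150:350] = True
print(f"IoU = {iou(pred, gt):.3f}")
```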
The testing IoU value reported here could be improved in two ways: 1) Increasing the number of images in the labeled dataset 2) Reducing image downsampling. Increasing the number of images in the training dataset is feasible, but would require scanning additional persons with limb loss and manual segmentation. As with other forms of medical imaging, limited data quantities and
expert annotations present hurdles for deep learning techniques Wang et al. (2018). On the other hand, reducing the image downsampling could improve the model performance without requiring additional annotations. In Fig. 4 it can be observed that the masks generated by the neural network are partially limited in accuracy due to this downsampling. Still any reduction in downsampling would likely be accompanied by an increased computational cost.
4.2 PHOTOGRAMMETRY OF SEGMENTED SCANS
Table 3 lists the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for the 22 limb scans rendered as part of this study. The average MAE values obtained are excellent (0.57 mm ± 0.63 mm) and comparable with the sub-millimeter MAE accuracy reported by Barbero-Garcı́a et al. (2020). The RMSE values are higher than those reported by Nightingale et al. (2021; 2020) with an average deviation of 2.43 mm ± 1.10 mm. Scans 1-11 using the bare liner had slightly better MAE and RMSE values (0.32 mm ± 0.51 mm; 2.30 mm ± 1.01 mm, respectively) than scans 12-22 with a colored sock over the bare liner (0.77 mm ± 0.70 mm; 2.54 mm ± 1.21 mm). However the use of the colored sock improved tie point detection and image alignment leading to a successful render rate of 90.9 % for the colored sock vs 72.7% for the bare liner. Scans 2, 7, 8 and 14 did not render properly with the automated workflow. Failure was primarily due to failed photo alignment which led to incomplete and/or distorted models.
Looking at Fig. 5, it can be observed that the high RMSE values are primarily due to slight flexure of the limb between scans and not due to noise introduced by imperfect segmentation. Fig. 5B shows the CloudCompare results comparing Scan 12 to the ground truth. Scan 12 had a low MAE value (0.46 mm) but the highest RMSE value (5.54 mm) of all the scans. It is clear that high RMSE value is due to a change in limb geometry from flexure since the largest variations happen at the knee joint and at the distal end of the limb. This result was obtained even after controlling axis alignment and rotation of the models via the Iterative Closest Point (ICP) algorithm. This pattern is reflected in many other scans, see Fig. 5D and Fig. 5E.
Poor surface finish, as evidenced by Fig. 5E, contributes far less to the MAE and RMSE values than flexure of the limb. This is important because poor surface finish can be directly tied to the quality of image segmentation, with better segmentation leading to smoother finishes. Although the accuracy is slightly diminished due to noise, it is not the main cause of error in these scans. Poor surface finish can be accounted for in practice by utilizing Laplacian smoothing functions. Fig. 5C is an example of a limb with low MAE (0.17 mm) and low RMSE (1.26 mm) values. Even without Laplacian smoothing, this model shows remarkable rendering accuracy in comparison with the ground truth.
Since the RMSE errors are systematically impacted by limb flexure, MAE is the more appropriate measure of accuracy in these scans. For the purposes of rendering residual limb models, submillimeter accuracy presents little meaningful advantage to clinicians and practitioners. Unlike the skulls studied by Barbero-Garcı́a et al. (2017; 2020), amputated limbs undergo constant shape and deformation changes due to effects of donning-doffing, muscle contractions, interstitial fluid movements etc. Suyi Yang et al. (2019); Solav et al. (2019). Traditional methods utilizing Plaster of Paris (PoP) casting as well as clinical gold standards utilizing MRI struggle with shape capture repeatability Safari et al. (2013). Clinicians account for at least a 5-9% volumetric variation in limb size and accommodate for it in socket design Suyi Yang et al. (2019). Given this context, variation in limb shape and size between scans is to be expected.
4.3 CLINICAL IMPACT AND TELEMEDICINE DIRECTIONS
The advances in photogrammetric limb scanning resulting from automation of the segmentation step could have potentially significant consequences. Notably, automatic segmentation decreases the amount of human time and effort required to render photogrammetric models. As mentioned previously in the methods, a limb scan captured via the Lim(b)itless smartphone application captures at least 216 images. Manual segmentation of those images would take (at minimum) 1 minute per image leading to a segmentation time of 12,960 seconds. The fine-tuned network trained in this paper would only take 675 seconds to perform this task based on the segmentation rate reported in Table 1. This step is 19.2x faster and requires no human effort, demonstrating a remarkable increase
in performance. Evaluating the end to end process using the rate values in Table 1 reveals that the model rendering workflow is 4.88x faster overall. Although the process bottleneck shifts from segmentation rate to rendering rate, the only step requiring significant human input is the scanning stage which takes less than 1 minute on average, Table 1.
Models from the updated photogrammetric workflow could be used in a variety of contexts, since there was no significant geometrical difference between ground truth models and models rendered from images segmented by the fine-tuned neural network. Most notably, photogrammetric models built from smartphone scans can facilitate a host of low-cost telemedicine solutions. As one example, the limb models could be utilized as the first stage of digital prosthetic socket rectification. This would enable patients to be able to be scanned outside of a clinical environment (eg. at home) and have a prosthesis fabricated digitally and shipped to them. Aside from this, one distinct advantage of the photogrammetric process is it retains surface features which can be reprojected onto the final 3D model. With UV texture mapping, doctors could get a 3D view of limbs and look closely at their outer surfaces (such as for lesions) in virtual reality (VR). Photogrammetric limb scanning could also facilitate long-term and large scale studies of limb shape and size fluctuations over time. Being able to capture data outside of a clinical environment could help clinicians to increase the number of data points and the frequency of measurements beyond what is currently possible.
Several limitations would need to be overcome before using this technology at scale. On a technical level, the fine-tuned segmentation model from this research would need to be evaluated for robustness. Obtaining more scans of different skin colors, different lighting environments, etc. while training the model further could improve the robustness and reduce the risk of bias. It remains to be seen whether this model could be applied to segment different types of amputation or if the model would need to be trained on additional examples. Finally, as noted in Paxton et al. (2022), privacy and regulatory requirements will be the largest concern in implementing a telemedicine system at scale utilizing smartphone photogrammetry. Any healthcare solution would need to comply with the appropriate regulatory guidelines for telemedicine.
5 CONCLUSION
This research sought to address limitations in photogrammetric workflows by automating image segmentation via a fine-tuned deep learning model. Some important conclusions are summarized here:
• After 10 epochs of transfer learning, the fine-tuned deep learning model was able to achieve an IoU of 79.9% on the testing set. The model’s loss function indicated that it had reached a local minimum, thus the IoU could not be improved without the risk of overfitting. IoU could be improved by increasing the number of labeled training examples, which could also increase model robustness.
• Applying this fine-tuned model to segment a dataset of 22 scans containing 6312 images showed that there was no significant geometric difference compared to scans rendered from manually segmented images. The MAE was found to be on average 0.57 mm ± 0.63 mm while the RMSE was found to be on average 2.43 mm ± 1.10 mm. The error reported was mostly influenced by slight changes in limb shape between scans and not render noise caused by image segmentation.
• Using the fine-tuned model increased the rate of image segmentation by 19.2x as well as sped up the entire photogrammetric workflow by a factor of 4.88x. This performance increase is remarkable since the step requiring the largest human effort was completely automated.
• This fine-tuned deep learning model can help facilitate large scale image processing for telemedicine applications. Using smartphones to acquire 3D scans of amputated limbs, clinicians could monitor patients, conduct large scale research studies, and even build prostheses remotely. This could potentially increase the standard of care for persons with limb loss who do not live near a clinic by matching patients with clinicians seeking to expand their geographic service area. | 1. What is the focus and contribution of the paper regarding improving 3D models from amputated limbs scanning?
2. What are the strengths of the proposed approach, particularly its ability to increase sample density and improve accuracy?
3. What are the weaknesses of the paper, especially regarding its novelty and experiment scope?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the selection of the DeepLab model or the failure cases in photo alignment? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The work proposes to improve the pipeline for amputated limb scanning by using deep-learning-based image segmentation before rendering with photogrammetry. In this way, the density of obtained samples can be increased to improve the coverage and accuracy of the 3D models. The proposed method is verified in practice on a limb scanning procedure, followed by DeepLab model fine-tuning and inference, and finally rendering of models from the segmented images. The discussion of the achieved results is also well done.
Strengths And Weaknesses
The problem statement is clearly explained and the method addresses an important problem of improving time and accuracy of healthcare procedures. A very detailed discussion of clinical impact and directions is presented at the end of the work, what helps with understanding potential applications, advantages and limitations of the method. A few ideas for improvements are also presented, showing that authors have a good understanding of the presented medical problem.
Unfortunately, the novelty of the work is limited. Very similar studies using deep learning segmentation topologies for improvements of photogrammetry have already been conducted. The work is an application of a known method to another computer vision dataset, but experiments focus only on a single model, so it may be difficult to draw some conclusions for future enhancements of such methods.
Also, the reason for selection of the specific DeepLab model hasn't been justified. It would be interesting to verify other models as well.
It's also mentioned that photo alignment led to some failure cases. There are various alignment techniques that could be used to improve this outcome. Have you tried any of them?
Clarity, Quality, Novelty And Reproducibility
The work is clear, but the novelty is limited in my opinion. It's an interesting confirmation of known techniques, applied to another dataset. The reproducibility of the work may be difficult due to the use of the dataset, which hasn't been made available publicly. |
ICLR | Title
Image Segmentation using Transfer Learning with DeepLabv3 to Facilitate Photogrammetric Limb Scanning
Abstract
In this paper, we explore the use of deep learning (DL) in conjunction with photogrammetry for scanning amputated limbs. Combining these two technologies can expand the scope of prosthetic telemedicine by facilitating low-cost limb scanning using cell phones. Previous research identified image segmentation as one of the main limitations of using photogrammetry for limb scanning. Based on those limitations, this work sought to answer two main research questions: (1) Can a neural network be trained to identify and segment an amputated limb automatically? (2) Will segmenting 2D limb images using neural networks impact the accuracy of 3D models generated via photogrammetry? To answer the first question, transfer learning was applied to a neural network with the DeepLabv3 architecture. After training, the model was able to successfully identify and segment limb images with an IoU of 79.9%. To answer the second question, the fine-tuned DL model was applied to a dataset of 22 scans comprising 6312 limb images, then 3D models were rendered utilizing Agisoft Metashape. The Mean Absolute Error (MAE) of models rendered from images segmented with DL was 0.57 mm ± 0.63 mm when compared to models rendered from ground truth images. These results are important because segmentation with DL makes photogrammetry for limb scanning feasible on a large clinical scale. Future work should focus on generalizing the segmentation model for different types of amputations and imaging conditions.
1 INTRODUCTION
Rehabilitative care for persons with limb loss is rapidly evolving due to advances in digital healthcare technologies. Novel digital workflows are empowering clinicians with tools for visualizing patient anatomy and physiology, designing custom fitting prostheses via computer aided design (CAD), building assistive devices with computer aided manufacturing (CAM), and tracking patient response in environments such as virtual reality (VR) Cabrera et al. (2021). Medical imaging technologies are fundamental to every digital workflow because they inform clinicians of limb geometry, surface and/or sub-surface features, plus pathology of amputated limbs Paxton et al. (2022).
Systematic reviews by Cabrera et al. (2021) and Paxton et al. (2022) identified photogrammetry as a promising technology for capturing patient surface anatomy. The main advantage of photogrammetric scanning is that models can be rendered using photographs captured via smartphones Cabrera et al. (2020); Barbero-Garcı́a et al. (2018); De Vivo Nicoloso et al. (2021); R. B. Taqriban et al. (2019); Ismail et al. (2020); Barbero-Garcı́a et al. (2020; 2021). Scanning with smartphones is significantly cheaper than other medical imaging modalities Cabrera et al. (2021); Paxton et al. (2022) and results in reliable and robust surface accuracy on par with existing clinical gold standard technologies Nightingale et al. (2020; 2021). Unfortunately, photogrammetry workflows often require extensive image segmentation, at the expense of human operator time and effort, in order to render 3D models Cabrera et al. (2021).
Segmentation is an important problem in medical imaging and involves separating regions of interest (ROIs) from the rest of an acquired image. Convolutional neural networks (CNNs) are regarded as the dominant state-of-the-art approach for medical image segmentation in applications requir-
ing high-accuracy Kumar et al. (2020); Wang et al. (2022). Deep convolutional neural networks (DCNNs), such as DeepLabv3, are able achieve high IoU performance when classifying pixels and outperform other CNN architectures Chen et al. (2017b). Using transfer learning, it is possible to fine-tune pre-trained deep neural networks with instances from the target domain Zhuang et al. (2020). Transfer learning is crucial to medical imaging because in many cases it is not possible to collect sufficient training data Kumar et al. (2020); Wang et al. (2022); Zhuang et al. (2020).
We hypothesize that automating image segmentation via DeepLabv3 then rendering the segmented images using photogrammetry could create an efficient processing pipeline for scanning amputated limbs with smartphones. Automatic segmentation of limb photographs would allow for more photographs to be taken during the scanning procedure thus increasing the sampling density. With these additional photographs, it would be possible to improve the coverage and accuracy of 3D models. Finally, segmentation could help correct for motion of the limb during the scanning procedure. These potential benefits would allow photogrammetric limb scanning to be used on a larger scale to reach more patients in the clinic and remotely via telemedicine.
2 BACKGROUND
2.1 PHOTOGRAMMETRY FOR MEDICAL IMAGING
In its simplest form, photogrammetry is the science of measuring size and shape of objects using images Fryer (1996). In this context, photogrammetry has been used extensively since the 1950’s for medical imaging and measurementNewton & Mitchell (1996). The development of digital cameras and the accompanying transition to digital photogrammetry has led to technologies for the reconstruction of 3D models using photogrammetric algorithms Linder (2016). Digital photogrammetry has been used successfully to reconstruct patient anatomy in many medical contexts such as cranial deformation scanning Barbero-Garcı́a et al. (2017; 2020; 2021), facial scanning Ross et al. (2018); Nightingale et al. (2020; 2021), and amputated limb scanning R. B. Taqriban et al. (2019); Cabrera et al. (2020); Ismail et al. (2020); De Vivo Nicoloso et al. (2021).
Two values for accuracy are commonly reported for photogrammetric models: Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). RMSE values will always be greater than or equal to MAE values and are more susceptible to outliers, but are recommended when errors are unbiased and follow a normal distribution Chai & Draxler (2014). Close range photogrammetric approaches have been proven to have accuracy comparable to clinical gold standard technologies Ross et al. (2018).
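For reference, with d_i denoting the deviation of the i-th compared point from the reference surface and n the number of points, the two metrics are defined as follows (standard definitions, not restated in the cited works):

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |d_i|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}}

The inequality RMSE ≥ MAE noted above follows from the quadratic mean-arithmetic mean (QM-AM) inequality.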
Using an Artec Spider structured light scanner as a clinical "gold standard" reference, Nightingale et al. (2020) found that photogrammetric reconstructions of facial scans from 80 captured images had RMSE accuracy values of 1.3 mm ± 0.3 mm. In a similar study, Nightingale et al. (2021) achieved RMSE accuracy values of 1.5 mm ± 0.4 mm with 30 photographs on reconstructions of the external ear. Using spherical objects with known geometry as a reference, Barbero-García et al. (2018) were able to achieve an MAE accuracy of 0.3 mm ± 0.2 mm using 95 images with tie point aids. In later research, these authors were able to achieve similar accuracy while scanning infant skulls, with MAE accuracy values of 0.5 mm ± 0.4 mm and 200 images Barbero-García et al. (2020).
While the accuracy of photogrammetry for anatomical scanning is very good, workflows involving photogrammetry require a great deal of human input, often taking hours Barbero-García et al. (2017); Cabrera et al. (2021). For this reason, recent research has focused on various methods for automating photogrammetric workflows for anatomical scanning Barbero-García et al. (2020); Cabrera et al. (2020). Photogrammetric models are rendered following acquisition (not in real time); thus, errors in the image acquisition stage may not become evident until after a patient is scanned and no longer present. Automated approaches have focused their attention on this acquisition stage to ensure completeness of the results Nocerino et al. (2017), with recent advances incorporating machine learning for landmark detection Barbero-García et al. (2021).
Still, automation of photogrammetric image processing (specifically image segmentation) remains a large problem Cabrera et al. (2021). Automating this image segmentation step could dramatically increase the speed of photogrammetric workflows for medical imaging, improving the clinical viability.
2.2 IMAGE SEGMENTATION WITH DEEP LEARNING
Medical image segmentation plays an essential role in modern clinical practice by facilitating computer aided diagnoses and making patient anatomical structures clear Wang et al. (2022). Prior to recent developments in deep learning (DL), computer vision techniques such as k-nearest neighbors (kNN), decision trees (DT), support vector machines (SVM), and random forest (RF) models were utilized for segmentation and classification tasks Thanh Noi & Kappas (2017); Mahony et al. (2019). DL has superseded all of these approaches in medical imaging for several reasons, but most notably because the burden of feature engineering in DL shifts from humans to computers Shen et al. (2017).
CNNs are the most heavily researched DL architectures for medical image analysis J. Ker et al. (2018). CNN architectures are composed of several fundamental building blocks: Convolution Layers, Pooling Layers, Activation Functions, Fully Connected Layers, and Loss Functions Alzubaidi et al. (2021). Convolution Layers are the most significant portion of CNNs and comprise a collection of convolutional filters (referred to as kernels). Input images are convolved using these filters to create an output feature map. Pooling Layers sub-sample the feature maps effectively shrinking them to create smaller feature maps. They also enlarge the receptive fields of neurons in subsequent layers allowing for translation invariance of the architecture. Activation Functions introduce non-linearity into the neural network and map inputs to outputs by selectively choosing to fire neurons. Fully Connected Layers are typically located at the end of the CNN architecture, with inputs from the final convolution or pooling layer, and are used as the CNN classifier. Loss functions are typically utilized in the output layer to calculate the predicted error across training samples Alzubaidi et al. (2021); Goodfellow et al. (2016); Arif (2020).
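To make these building blocks concrete, the following minimal PyTorch sketch (purely illustrative; it is not the architecture used in this work) chains one convolution layer, an activation function, a pooling layer, and a fully connected classifier:

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    # One instance of each building block discussed above, for 128 x 128 RGB inputs.
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # convolution layer: 16 learned kernels
        self.act = nn.ReLU()                                     # activation function (non-linearity)
        self.pool = nn.MaxPool2d(2)                              # pooling layer: halves each feature map
        self.fc = nn.Linear(16 * 64 * 64, num_classes)           # fully connected classifier

    def forward(self, x):                                        # x: (N, 3, 128, 128)
        x = self.pool(self.act(self.conv(x)))                    # feature maps: (N, 16, 64, 64)
        x = torch.flatten(x, 1)
        return self.fc(x)                                        # class scores; train with a loss such as nn.CrossEntropyLoss

print(TinyCNN()(torch.randn(1, 3, 128, 128)).shape)             # torch.Size([1, 2])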
The DeepLabv3 architecture expands on traditional CNNs by utilizing atrous convolution with spatial pyramid pooling Chen et al. (2017b). Atrous convolution utilizes a dilated convolution kernel that, rather than focusing on adjacent pixels, convolves pixels that are not immediate neighbors. Atrous convolution helps to resolve the issue of spatial resolution loss in feature maps from successive convolution and de-convolution steps by allowing for precise control of feature map density. Atrous Spatial Pyramid Pooling (ASPP) Chen et al. (2017a) utilizes four parallel atrous convolutions with different atrous rates to transform feature maps. This allows for accurate and efficient classification of regions at arbitrary scales Chen et al. (2017b). These elements of the DeepLabv3 architecture result in better classification of pixels for semantic image segmentation, on par with other state-of-the-art models.
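As a rough sketch of these ideas (simplified relative to the real DeepLabv3 module, which also uses a 1 x 1 branch, image-level pooling, and batch normalization), parallel atrous convolutions with different rates can be combined as follows:

import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    # Parallel atrous convolutions with different dilation rates, as in Atrous Spatial Pyramid Pooling.
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            # dilation > 1 convolves non-adjacent pixels; padding=rate keeps the feature map size fixed
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)  # fuse the parallel branches

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

x = torch.randn(1, 256, 38, 50)      # a feature map from the backbone
print(SimpleASPP()(x).shape)         # torch.Size([1, 256, 38, 50]); feature map density preserved at every rate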
CNNs are prepared for medical imaging tasks via three major techniques: 1) training CNNs from scratch using large datasets of labeled images; 2) using "off-the-shelf" features without retraining the CNN to complement hand-crafted features; 3) pre-training the CNN on natural or medical images, then fine-tuning on target images (i.e. transfer learning) Shin et al. (2016). Transfer learning is commonly used to reduce the computational time and cost of training a neural network from scratch Mahony et al. (2019); Wang et al. (2022); Shen et al. (2017). In medical imaging tasks, transfer learning has been shown to be as good as training CNNs from scratch and better in many cases Tajbakhsh et al. (2016); Shin et al. (2016). Recent research has shown great success in using transfer learning with the DeepLabv3 architecture for medical imaging segmentation Roy et al. (2020).
3 METHODS
3.1 TRANSFER LEARNING USING DEEPLABV3
We performed transfer learning on a pre-trained ResNet-101-based DeepLabv3 model downloaded from the PyTorch Model Zoo Paszke et al. (2019). This model was pre-trained on the COCO train2017 data set comprising 200,000 images and 80 object categories Lin et al. (2015). Our fine-tuning dataset consisted of 806 manually labeled images of limbs with transtibial amputations. This data was not augmented. These photographs were taken in a variety of conditions (e.g. different angles, indoor, outdoor, bare skin, with liners, etc.). Images were downsampled from 3072 x 4096 to 300 x 400. Following this, we randomly selected 80% of the images as our training data set and reserved the remaining 20% for validation.
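A minimal sketch of this model-preparation step is given below, using torchvision's pre-trained DeepLabv3 with a ResNet-101 backbone. The two-class head (limb vs. background) and the option of freezing the backbone are our assumptions for illustration; the paper does not state these details.

import torch.nn as nn
from torchvision import models

# Pre-trained DeepLabv3 with a ResNet-101 backbone (COCO weights), as distributed with PyTorch.
# (Newer torchvision releases replace `pretrained=True` with a `weights=` argument.)
model = models.segmentation.deeplabv3_resnet101(pretrained=True)

# Replace the final 1x1 convolutions so the network predicts two classes: limb and background.
num_classes = 2
model.classifier[4] = nn.Conv2d(256, num_classes, kernel_size=1)
model.aux_classifier[4] = nn.Conv2d(256, num_classes, kernel_size=1)

# Optionally freeze the backbone so that only the segmentation heads are fine-tuned.
# for p in model.backbone.parameters():
#     p.requires_grad = False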
We used Google Colab for training the model with dynamically scaled compute resources. In terms of the hyperparameters in fine-tuning, we used the Adam algorithm Kingma & Ba (2017) for optimizing the parameters (e.g. weights) in the neural network, with the learning rate set to 10^-5. We employed the pixel-wise Mean Squared Error (MSE) as our loss function. We trained our fine-tuned neural network for 10 epochs using a batch size of 8.
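Continuing the sketch above, the fine-tuning loop with the stated hyperparameters could look roughly as follows. Here `train_loader` is a placeholder for a DataLoader over the 806 labeled images, and applying a sigmoid to a single limb channel is our assumption about how the pixel-wise MSE was computed.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # Adam with learning rate 10^-5
criterion = torch.nn.MSELoss()                             # pixel-wise mean squared error

for epoch in range(10):                                    # 10 epochs
    model.train()
    for images, masks in train_loader:                     # batches of 8 images and (N, 1, H, W) binary masks
        images, masks = images.to(device), masks.to(device)
        logits = model(images)["out"]                      # (N, 2, H, W) class scores
        probs = torch.sigmoid(logits[:, 1:2])              # probability of the limb channel (assumption)
        loss = criterion(probs, masks.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()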
3.2 SCANNING WORKFLOW
We captured 24 limb scans to test the applicability of the neural network model for image segmentation. 12 scans were performed with a single transtibial amputee wearing a standard liner, then 12 additional scans were performed with the transtibial amputee wearing a colored sock over their standard liner. Scans were performed utilizing the Lim(b)itless smartphone application Cabrera et al. (2020) on a Motorola Moto G7 (MSRP $299; camera: 12 MP, f/1.8, 1/2.8"); see Fig. 1.
The scanning procedure involved making two revolutions (one clockwise, one counterclockwise) around the outside of the limb while keeping the limb in frame of the application. Scans were taken at a constant radius of 30 cm from the limb, similar to Barbero-García et al. (2018), at a rotation rate of two revolutions per minute. The application automatically initiates a capture sequence at 10 degree intervals, with each capture sequence including a burst of 3 photographs. For two full revolutions, this captures a hyper-redundant 216 images (minimum) per scan (2 revolutions x 36 capture positions x 3 photographs), as recommended by Barbero-García et al. (2020). Actual scanning time averaged 52.4 s ± 6.1 s while the actual number of images captured averaged 287 ± 29, Table 1.
3.3 RENDERING PROCEDURE
All segmentation and rendering was performed on a desktop server running an Intel(R) Xeon(R) CPU E5-1607 v3 3.10 GHz, Nvidia Quadro k2200 4 GB DDR5, and 256 GB DDR4 RAM 1866 MHz. Of the 24 limb scans acquired in the previous step, two scans were manually segmented via Adobe Photoshop and set aside as ground truth references. The remaining 22 scans, totaling 6312 images, were fed into the fine-tuned neural network for segmentation. Segmentation time using the neural network averaged 890 s ± 111 s, Table 1.
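The inference pass that produced masks for these 6312 images can be sketched as follows, continuing from the fine-tuning example above. The folder layout, the 0.5 threshold, and blacking out the background before rendering are illustrative assumptions rather than documented details.

import glob
import torch
from PIL import Image
from torchvision import transforms

model.eval()
to_tensor = transforms.Compose([
    transforms.Resize((300, 400)),   # resolution used during fine-tuning (per the paper's 300 x 400 downsampling)
    transforms.ToTensor(),
])

with torch.no_grad():
    for path in glob.glob("scans/scan_01/*.jpg"):             # hypothetical folder holding one limb scan
        x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        probs = torch.sigmoid(model(x)["out"][:, 1:2])        # limb probability map
        mask = (probs > 0.5).float()                          # binary segmentation mask
        segmented = x * mask                                  # black out the background for Agisoft Metashape
        # ... save `segmented` (and/or `mask`) to disk here ...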
3D limb models were rendered from these segmented images using Agisoft Metashape, Fig. 2. Agisoft is the most commonly used photogrammetric software in medical imaging, being used by Cabrera et al. (2020); Barbero-García et al. (2018; 2017); Nightingale et al. (2020; 2021). This step of the workflow was automated with no manual tie point alignment or point cloud segmentation. The resulting STL meshes were not smoothed prior to export, unlike Ross et al. (2018); Nightingale et al. (2020; 2021). This Laplacian smoothing step was forgone to evaluate the impact of segmentation method on model rendering.
Finally, STL meshes of the scans rendered from automatically segmented images were compared with scans rendered from ground truth images via the CloudCompare open source 3D point cloud and mesh processing software. All limbs were cropped to the same boundaries, and outliers (visualized as extraneous floating points) were filtered out consistent with the technique used by Ross et al. (2018). Meshes were compared point by point via the cloud-to-mesh (C2M) measurement tool, with ground truth models being set as a reference.
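The study performed this comparison interactively in CloudCompare; purely as an illustration of the same measurement, an ICP alignment followed by point-wise deviations (summarized as MAE and RMSE) can be scripted with the Open3D library roughly as below. The file names, sampling density, and ICP distance threshold are assumptions.

import numpy as np
import open3d as o3d

ref = o3d.io.read_triangle_mesh("ground_truth.stl")
test = o3d.io.read_triangle_mesh("scan_12.stl")

# Sample dense point clouds from both meshes (an approximation of CloudCompare's C2M tool).
ref_pts = ref.sample_points_uniformly(number_of_points=100000)
test_pts = test.sample_points_uniformly(number_of_points=100000)

# Align the scan to the reference with ICP before measuring deviations.
icp = o3d.pipelines.registration.registration_icp(
    test_pts, ref_pts, max_correspondence_distance=5.0, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
test_pts.transform(icp.transformation)

d = np.asarray(test_pts.compute_point_cloud_distance(ref_pts))  # per-point deviations (mm if meshes are in mm)
print("MAE  =", d.mean())
print("RMSE =", np.sqrt((d ** 2).mean()))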
4 RESULTS AND DISCUSSION
4.1 SEGMENTATION VIA TRANSFER LEARNING
Fig. 3 and Table 2 show the evolution of the fine-tuned neural network over the transfer learning process. After 10 epochs of training, the fine-tuned neural network showed excellent performance in segmenting images of transtibial amputations; see Fig. 4.
Looking at the training IoU curve, it can be observed that the training IoU performance increases with model epochs. This is to be expected as the training loss curve simultaneously decreases, reaching a local minimum at epoch 9. The descent rate of the loss function gradually slows as the number of epochs increases. Given this observation, training the model past 10 epochs would likely lead to over-fitting.
The testing loss function mirrors the training loss, reaching a local minimum at epoch 8. However, the testing IoU remains largely constant, with a value of 79.9% after 10 epochs of training. This IoU is comparable with values reported by Chen et al. (2017b) which reached peak mIoU values of 81.3% on the Cityscapes test set and 85.7% on the PASCAL VOC 2012 test set with the DeepLabv3 architecture.
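For clarity, the IoU values quoted here can be computed per image from a predicted and a ground-truth mask as in the generic sketch below; the paper does not publish its exact evaluation code.

import numpy as np

def binary_iou(pred, target):
    # pred, target: boolean (H, W) arrays in which True marks limb pixels.
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 1.0

# Toy example: a prediction that misses a thin strip of the limb.
target = np.zeros((300, 400), dtype=bool)
target[50:250, 100:300] = True
pred = np.zeros((300, 400), dtype=bool)
pred[60:250, 100:300] = True
print(binary_iou(pred, target))   # 0.95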
The testing IoU value reported here could be improved in two ways: 1) increasing the number of images in the labeled dataset, and 2) reducing image downsampling. Increasing the number of images in the training dataset is feasible, but would require scanning additional persons with limb loss and manual segmentation. As with other forms of medical imaging, limited data quantities and expert annotations present hurdles for deep learning techniques Wang et al. (2018). On the other hand, reducing the image downsampling could improve the model performance without requiring additional annotations. In Fig. 4 it can be observed that the masks generated by the neural network are partially limited in accuracy due to this downsampling. Still, any reduction in downsampling would likely be accompanied by an increased computational cost.
4.2 PHOTOGRAMMETRY OF SEGMENTED SCANS
Table 3 lists the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for the 22 limb scans rendered as part of this study. The average MAE values obtained are excellent (0.57 mm ± 0.63 mm) and comparable with the sub-millimeter MAE accuracy reported by Barbero-García et al. (2020). The RMSE values are higher than those reported by Nightingale et al. (2021; 2020), with an average deviation of 2.43 mm ± 1.10 mm. Scans 1-11 using the bare liner had slightly better MAE and RMSE values (0.32 mm ± 0.51 mm; 2.30 mm ± 1.01 mm, respectively) than scans 12-22 with a colored sock over the bare liner (0.77 mm ± 0.70 mm; 2.54 mm ± 1.21 mm). However, the use of the colored sock improved tie point detection and image alignment, leading to a successful render rate of 90.9% for the colored sock vs. 72.7% for the bare liner. Scans 2, 7, 8 and 14 did not render properly with the automated workflow. Failure was primarily due to failed photo alignment, which led to incomplete and/or distorted models.
Looking at Fig. 5, it can be observed that the high RMSE values are primarily due to slight flexure of the limb between scans and not due to noise introduced by imperfect segmentation. Fig. 5B shows the CloudCompare results comparing Scan 12 to the ground truth. Scan 12 had a low MAE value (0.46 mm) but the highest RMSE value (5.54 mm) of all the scans. It is clear that the high RMSE value is due to a change in limb geometry from flexure, since the largest variations happen at the knee joint and at the distal end of the limb. This result was obtained even after controlling axis alignment and rotation of the models via the Iterative Closest Point (ICP) algorithm. This pattern is reflected in many other scans, see Fig. 5D and Fig. 5E.
Poor surface finish, as evidenced by Fig. 5E, contributes far less to the MAE and RMSE values than flexure of the limb. This is important because poor surface finish can be directly tied to the quality of image segmentation, with better segmentation leading to smoother finishes. Although the accuracy is slightly diminished due to noise, it is not the main cause of error in these scans. Poor surface finish can be accounted for in practice by utilizing Laplacian smoothing functions. Fig. 5C is an example of a limb with low MAE (0.17 mm) and low RMSE (1.26 mm) values. Even without Laplacian smoothing, this model shows remarkable rendering accuracy in comparison with the ground truth.
Since the RMSE errors are systematically impacted by limb flexure, MAE is the more appropriate measure of accuracy in these scans. For the purposes of rendering residual limb models, submillimeter accuracy presents little meaningful advantage to clinicians and practitioners. Unlike the skulls studied by Barbero-Garcı́a et al. (2017; 2020), amputated limbs undergo constant shape and deformation changes due to effects of donning-doffing, muscle contractions, interstitial fluid movements etc. Suyi Yang et al. (2019); Solav et al. (2019). Traditional methods utilizing Plaster of Paris (PoP) casting as well as clinical gold standards utilizing MRI struggle with shape capture repeatability Safari et al. (2013). Clinicians account for at least a 5-9% volumetric variation in limb size and accommodate for it in socket design Suyi Yang et al. (2019). Given this context, variation in limb shape and size between scans is to be expected.
4.3 CLINICAL IMPACT AND TELEMEDICINE DIRECTIONS
The advances in photogrammetric limb scanning resulting from automation of the segmentation step could have potentially significant consequences. Notably, automatic segmentation decreases the amount of human time and effort required to render photogrammetric models. As mentioned previously in the methods, a limb scan captured via the Lim(b)itless smartphone application captures at least 216 images. Manual segmentation of those images would take (at minimum) 1 minute per image, leading to a segmentation time of 12,960 seconds. The fine-tuned network trained in this paper would only take 675 seconds to perform this task based on the segmentation rate reported in Table 1. This step is 19.2x faster and requires no human effort, demonstrating a remarkable increase in performance. Evaluating the end-to-end process using the rate values in Table 1 reveals that the model rendering workflow is 4.88x faster overall. Although the process bottleneck shifts from segmentation rate to rendering rate, the only step requiring significant human input is the scanning stage, which takes less than 1 minute on average, Table 1.
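The arithmetic behind these figures can be checked directly from the averages quoted above and in Table 1 (the values below simply restate numbers already reported in the text):

manual_rate = 60.0                        # seconds per image for manual segmentation (1 min/image)
auto_rate = 890.0 / 287.0                 # seconds per image for the fine-tuned network (Table 1 averages)
images_per_scan = 216                     # minimum number of images captured per scan

manual_time = images_per_scan * manual_rate   # 12,960 s
auto_time = images_per_scan * auto_rate       # about 670 s, reported as 675 s in the text
print(manual_time, auto_time, manual_time / auto_time)   # speed-up of roughly 19x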
Models from the updated photogrammetric workflow could be used in a variety of contexts, since there was no significant geometrical difference between ground truth models and models rendered from images segmented by the fine-tuned neural network. Most notably, photogrammetric models built from smartphone scans can facilitate a host of low-cost telemedicine solutions. As one example, the limb models could be utilized as the first stage of digital prosthetic socket rectification. This would enable patients to be scanned outside of a clinical environment (e.g. at home) and have a prosthesis fabricated digitally and shipped to them. Aside from this, one distinct advantage of the photogrammetric process is that it retains surface features, which can be reprojected onto the final 3D model. With UV texture mapping, doctors could get a 3D view of limbs and look closely at their outer surfaces (such as for lesions) in virtual reality (VR). Photogrammetric limb scanning could also facilitate long-term and large-scale studies of limb shape and size fluctuations over time. Being able to capture data outside of a clinical environment could help clinicians to increase the number of data points and the frequency of measurements beyond what is currently possible.
Several limitations would need to be overcome before using this technology at scale. On a technical level, the fine-tuned segmentation model from this research would need to be evaluated for robustness. Obtaining more scans of different skin colors, different lighting environments, etc. while training the model further could improve the robustness and reduce the risk of bias. It remains to be seen whether this model could be applied to segment different types of amputation or if the model would need to be trained on additional examples. Finally, as noted in Paxton et al. (2022), privacy and regulatory requirements will be the largest concern in implementing a telemedicine system at scale utilizing smartphone photogrammetry. Any healthcare solution would need to comply with the appropriate regulatory guidelines for telemedicine.
5 CONCLUSION
This research sought to address limitations in photogrammetric workflows by automating image segmentation via a fine-tuned deep learning model. Some important conclusions are summarized here:
• After 10 epochs of transfer learning, the fine-tuned deep learning model was able to achieve an IoU of 79.9% on the held-out test set. The model's loss function indicated that it had reached a local minimum; thus, the IoU could not be improved further without the risk of overfitting. IoU could be improved by increasing the number of labeled training examples, which could also increase model robustness.
• Applying this fine-tuned model to segment a dataset of 22 scans containing 6312 images showed that there was no significant geometric difference compared to scans rendered from manually segmented images. The MAE was found to be on average 0.57 mm ± 0.63 mm while the RMSE was found to be on average 2.43 mm ± 1.10 mm. The error reported was mostly influenced by slight changes in limb shape between scans and not render noise caused by image segmentation.
• Using the fine-tuned model increased the rate of image segmentation by 19.2x as well as sped up the entire photogrammetric workflow by a factor of 4.88x. This performance increase is remarkable since the step requiring the largest human effort was completely automated.
• This fine-tuned deep learning model can help facilitate large scale image processing for telemedicine applications. Using smartphones to acquire 3D scans of amputated limbs, clinicians could monitor patients, conduct large scale research studies, and even build prostheses remotely. This could potentially increase the standard of care for persons with limb loss who do not live near a clinic by matching patients with clinicians seeking to expand their geographic service area.

1. What is the focus and contribution of the paper regarding the use of deep learning and photogrammetry for scanning amputated limbs?
2. What are the strengths of the proposed approach, particularly in terms of its practical application and validation?
3. What are the weaknesses of the paper, especially regarding its technical novelty and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes a method using deep learning with photogrammetry for scanning amputated limbs. The proposed method uses deep convolutional neural networks, DeepLabv3, and transfer learning. The ResNet-101-based DeepLabv3 model was pre-trained and then fine-tuned. The scan images were acquired by using a smartphone and subsequently segmented on a desktop. Afterward, 3D limb models were rendered from the segmented images using Agisoft Metashape.
Strengths And Weaknesses
Strengths
An interesting work that can be very useful for telemedicine or point-of-care diagnosis
A workable system that uses a set of existing methods and technologies
Good validation
Weaknesses
Technical novelty is not significant
Lack of technical comparisons
Clarity, Quality, Novelty And Reproducibility
This paper is well-written and easy to follow. The idea is interesting. Experiments are presented with details. |
ICLR | Title
Image Segmentation using Transfer Learning with DeepLabv3 to Facilitate Photogrammetric Limb Scanning
Abstract
In this paper, we explore the use of deep learning (DL) in conjunction with photogrammetry for scanning amputated limbs. Combining these two technologies can expand the scope of prosthetic telemedicine by facilitating low-cost limb scanning using cell phones. Previous research identified image segmentation as one of the main limitations of using photogrammetry for limb scanning. Based on those limitations, this work sought to answer two main research questions: (1) Can a neural network be trained to identify and segment an amputated limb automatically? (2) Will segmenting 2D limb images using neural networks impact the accuracy of 3D models generated via photogrammetry? To answer the first question, transfer learning was applied to a neural network with the DeepLabv3 architecture. After training, the model was able to successfully identify and segment limb images with an IoU of 79.9%. To answer the second question, the fine-tuned DL model was applied to a dataset of 22 scans comprising 6312 limb images, then 3D models were rendered utilizing Agisoft Metashape. The Mean Absolute Error (MAE) of models rendered from images segmented with DL was 0.57 mm ± 0.63 mm when compared to models rendered from ground truth images. These results are important because segmentation with DL makes photogrammetry for limb scanning feasible on a large clinical scale. Future work should focus on generalizing the segmentation model for different types of amputations and imaging conditions.
1. What is the main contribution of the paper regarding image segmentation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application and novelty?
3. Do you have any concerns regarding the pipeline's train/validation split and subject usage?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional practical problems or limitations of the pipeline that the authors could address?
Summary Of The Paper
The paper proposes the straightforward application of transfer learning for image segmentation for limb images using DeepLabV3. In addition to the standard IoU metric the authors also rendered 3D models and compared those of the automatic pipeline with manual segmentations. As expected the fine-tuned DeepLabV3 can produce similar quality segmentation and renderings as manual annotations.
Strengths And Weaknesses
Strengths
The paper is easy to follow and contains nice visualisation
The pipeline is evaluated not only for the segmentation quality but also its downstream task: 3D rendering
There are some interesting qualitative descriptions of the results
Weaknesses
There is little to no technical novelty, all employed methods have been previously presented and are simply applied to a new task
The description of the dataset and train/validation split is slightly confusing: it is stated that the "fine- tuning dataset consisted of 806 manually labeled images" and further "we randomly select 80% of the images as our training data set and reserve the remaining 20% for validation" does this mean the same subject can be part of training and validation?
Later on the authors state: "Of the 24 limb scans acquired in the previous step, two scans were manually segmented via Adobe Photoshop and set aside as ground truth references. The remaining 22 scans, totaling 6312 images, were fed into the fine-tuned neural network for segmentation.". Are those different subjects to the ones used for fine-tuning? Furthermore, since 24 scans correspond to 12 subjects with or without additional sock, again the question of whether the 22/2 split is done on a subject-level?
Overall the low number (2) of held-out ground truth cases makes it somewhat hard to trust those results, since some cherry-picking might be unavoidable
All models converge very quickly and no data augmentation seems to be used; this could indicate some overfitting
an additional practical problem of the pipeline seems to be failure of image alignment that caused scans 2, 7, 8 and 14 to not render properly with the automated workflow. I wonder whether the authors have attempted to use the automatic segmentation information or some other features from the fine-tuned network to improve on this?
the inference times for automatic segmentation seem awfully slow (0.32 images/s), even when being restricted to Google Colab infrastructure measuring the feed-forward path of a DeepLabV3 with mixed precision (AMP) on a Tesla T4 should result in at least two orders of magnitude faster throughput
Clarity, Quality, Novelty And Reproducibility
As mentioned above, while the paper is overall fairly clear, there are a number of important details that should be clarified. The quality of the work judged from a technical viewpoint is too low for an ICLR submission. While there is some practical value, the authors only applied standard deep learning tools to their dataset and processed them with another 3D rendering software. There are also no clear empirical insights (e.g., does the model capacity vs. image dataset size influence the outcome? Are there any augmentation strategies that should be considered? etc.).
ICLR | Title
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
Abstract
We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.
1 INTRODUCTION
In many domains, a suitable generative process might consist of several stages. To generate a photograph of a product, we might wish to first sample from the space of products, and then from the space of photographs of that product. Given such disentangled representations in a multistage generative process, an online retailer might diversify its catalog, depicting products in a wider variety of settings. A retailer could also flip the process, imagining new products in a fixed setting. Datasets for such domains often contain many labeled identities with fewer observations of each (e.g. a collection of face portraits with thousands of people and ten photos of each). While we may know the identity of the subject in each photograph, we may not know the contingent aspects of the observation (such as lighting, pose and background). This kind of data is ubiquitous; given a set of commonalities, we might want to incorporate this structure into our latent representations.
Generative adversarial networks (GANs) learn mappings from latent codes z in some low-dimensional space Z to points in the space of natural data X (Goodfellow et al., 2014). They achieve this power through an adversarial training scheme pitting a generative model G : Z → X against a discriminative model D : X → [0, 1] in a minimax game. While GANs are popular, owing to their ability to generate high-fidelity images, they do not, in their original form, explicitly disentangle the latent factors according to known commonalities.
In this paper, we propose Semantically Decomposed GANs (SD-GANs), which encourage a specified portion of the latent space to correspond to a known source of variation.1,2 The technique decomposes the latent code Z into one portion ZI corresponding to identity, and the remaining portion ZO corresponding to the other contingent aspects of observations. SD-GANs learn through a pairwise training scheme in which each sample from the real dataset consists of two distinct images with a common identity. Each sample from the generator consists of a pair of images with common zI ∈ ZI but differing zO ∈ ZO. In order to fool the discriminator, the generator must not only produce diverse and photorealistic images, but also images that depict the same identity when zI is fixed. For SD-GANs, we modify the discriminator so that it can determine whether a pair of samples constitutes a match.

1 Web demo: https://chrisdonahue.github.io/sdgan
2 Source code: https://github.com/chrisdonahue/sdgan
As a case study, we experiment with a dataset of face photographs, demonstrating that SD-GANs can generate contrasting images of the same subject (Figure 1; interactive web demo in footnote on previous page). The generator learns that certain properties are free to vary across observations but not identity. For example, SD-GANs learn that pose, facial expression, hirsuteness, grayscale vs. color, and lighting can all vary across different photographs of the same individual. On the other hand, the aspects that are more salient for facial verification remain consistent as we vary the observation code zO. We also train SD-GANs on a dataset of product images, containing multiple photographs of each product from various perspectives (Figure 4).
We demonstrate that SD-GANs trained on faces generate stylistically-contrasting, identity-matched image pairs that human annotators and a state-of-the-art face verification algorithm recognize as depicting the same subject. On measures of identity coherence and image diversity, SD-GANs perform comparably to a recent conditional GAN method (Odena et al., 2017); SD-GANs can also imagine new identities, while conditional GANs are limited to generating existing identities from the training data.
2 SEMANTICALLY DECOMPOSED GENERATIVE ADVERSARIAL NETWORKS
Before introducing our algorithm, we briefly review the prerequisite concepts.
2.1 GAN PRELIMINARIES
GANs leverage the discriminative power of neural networks to learn generative models. The generative model G ingests latent codes z, sampled from some known prior PZ, and produces G(z), a sample of an implicit distribution PG. The learning process consists of a minimax game between G, parameterized by θG, and a discriminative model D, parameterized by θD. In the original formulation, the discriminative model tries to maximize log likelihood, yielding
$$\min_G \max_D\, V(G, D) = \mathbb{E}_{x \sim P_R}[\log D(x)] + \mathbb{E}_{z \sim P_Z}[\log(1 - D(G(z)))]. \quad (1)$$
Training proceeds as follows: For k iterations, sample one minibatch from the real distribution PR and one from the distribution of generated images PG, updating discriminator weights θD to increase V (G,D) by stochastic gradient ascent. Then sample a minibatch from PZ , updating θG to decrease V (G,D) by stochastic gradient descent.
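To make this update schedule concrete, the following is a minimal sketch of one round of the alternating procedure. PyTorch is assumed (the framework choice is ours, not the paper's); `G` and `D` are placeholder modules with `D` outputting a probability in (0, 1), and `real_batches` is a hypothetical list of image minibatches.

```python
import torch

def gan_round(G, D, opt_G, opt_D, real_batches, z_dim, k=1, eps=1e-8):
    # k discriminator steps: ascend V(G, D), i.e. descend -V(G, D).
    for x in real_batches[:k]:
        z = 2 * torch.rand(x.size(0), z_dim) - 1                  # z ~ Uniform([-1, 1]^z_dim)
        v = torch.log(D(x) + eps).mean() \
            + torch.log(1 - D(G(z).detach()) + eps).mean()        # minibatch estimate of V(G, D)
        opt_D.zero_grad(); (-v).backward(); opt_D.step()

    # One generator step: descend V(G, D) with respect to theta_G.
    z = 2 * torch.rand(real_batches[0].size(0), z_dim) - 1
    v_g = torch.log(1 - D(G(z)) + eps).mean()
    opt_G.zero_grad(); v_g.backward(); opt_G.step()
```

In practice many implementations instead maximize log D(G(z)) for the generator (the non-saturating loss); the sketch above follows Eq. (1) literally.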
Algorithm 1 Semantically Decomposed GAN Training
1: for n in 1:NumberOfIterations do
2:   for m in 1:MinibatchSize do
3:     Sample one identity vector $z_I \sim \mathrm{Uniform}([-1, 1]^{d_I})$.
4:     Sample two observation vectors $z_O^1, z_O^2 \sim \mathrm{Uniform}([-1, 1]^{d_O})$.
5:     $z^1 \leftarrow [z_I; z_O^1]$, $z^2 \leftarrow [z_I; z_O^2]$.
6:     Generate pair of images $G(z^1), G(z^2)$, adding them to the minibatch with label 0 (fake).
7:   for m in 1:MinibatchSize do
8:     Sample one identity $i \in \mathcal{I}$ uniformly at random from the real data set.
9:     Sample two images of $i$ without replacement $x^1, x^2 \sim P_R(x \mid I = i)$.
10:    Add the pair to the minibatch, assigning label 1 (real).
11:   Update discriminator weights by $\theta_D \leftarrow \theta_D + \nabla_{\theta_D} V(G, D)$ using its stochastic gradient.
12:   Sample another minibatch of identity-matched latent vectors $z^1, z^2$.
13:   Update generator weights by stochastic gradient descent $\theta_G \leftarrow \theta_G - \nabla_{\theta_G} V(G, D)$.
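A sketch of the minibatch construction in Algorithm 1 is given below. PyTorch is again assumed; `sample_two_images(i)` is a hypothetical helper returning two distinct real images of identity `i`. Fake pairs carry label 0 and real pairs carry label 1, exactly as in the pseudocode, and the weight updates then follow the standard alternating GAN step above.

```python
import random
import torch

def fake_pair_latents(batch_size, d_i, d_o):
    z_i = 2 * torch.rand(batch_size, d_i) - 1              # one identity code per pair (step 3)
    z_o1 = 2 * torch.rand(batch_size, d_o) - 1             # two observation codes (step 4)
    z_o2 = 2 * torch.rand(batch_size, d_o) - 1
    z1 = torch.cat([z_i, z_o1], dim=1)                     # z^1 = [z_I ; z_O^1]  (step 5)
    z2 = torch.cat([z_i, z_o2], dim=1)                     # z^2 = [z_I ; z_O^2]
    return z1, z2                                          # G(z1), G(z2) get label 0 (step 6)

def real_identity_pairs(identities, sample_two_images, batch_size):
    batch = []
    for _ in range(batch_size):
        i = random.choice(identities)                      # identity i (step 8)
        x1, x2 = sample_two_images(i)                      # two images of i, without replacement (step 9)
        batch.append((x1, x2))                             # label 1 (step 10)
    return batch
```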
Zhao et al. (2017b) propose energy-based GANs (EBGANs), in which the discriminator can be viewed as an energy function. Specifically, they devise a discriminator consisting of an autoencoder: D(x) = Dd(De(x)). In the minimax game, the discriminator’s weights are updated to minimize the reconstruction error L(x) = ||x − D(x)|| for real data, while maximizing the error L(G(z)) for the generator. More recently, Berthelot et al. (2017) extend this work, introducing Boundary Equilibrium GANs (BEGANs), which optimize the Wasserstein distance (reminiscent of Wasserstein GANs (Arjovsky et al., 2017)) between autoencoder loss distributions, yielding the formulation:
$$V_{\mathrm{BEGAN}}(G, D) = L(x) - L(G(z)). \quad (2)$$
Additionally, they introduce a method for stabilizing training. Positing that training becomes unstable when the discriminator cannot distinguish between real and generated images, they introduce a new hyperparameter γ, updating the value function on each iteration to maintain a desired ratio between the two reconstruction errors: E[L(G(z))] = γE[L(x)]. The BEGAN model produces what appear to us, subjectively, to be the sharpest images of faces yet generated by a GAN. In this work, we adapt both the DCGAN (Radford et al., 2016) and BEGAN algorithms to the SD-GAN training scheme.
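The equilibrium constraint above is maintained in the original BEGAN paper with a proportional-control term k_t that scales the fake-reconstruction term in the discriminator's loss; the sketch below follows that scheme, which is an assumption beyond what is restated here. `ae_loss` stands in for the per-batch autoencoder reconstruction error L(·).

```python
def began_step_losses(ae_loss, D, x_real, x_fake, k_t, gamma=0.5, lambda_k=1e-3):
    loss_real = ae_loss(D, x_real)                 # L(x)
    loss_fake = ae_loss(D, x_fake)                 # L(G(z)); the same fake batch is reused for brevity
    d_loss = loss_real - k_t * loss_fake           # discriminator minimizes this
    g_loss = loss_fake                             # generator minimizes its reconstruction error
    # Nudge k_t so that E[L(G(z))] is driven toward gamma * E[L(x)].
    k_t = k_t + lambda_k * (gamma * float(loss_real) - float(loss_fake))
    k_t = min(max(k_t, 0.0), 1.0)                  # keep the control term in [0, 1]
    return d_loss, g_loss, k_t
```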
2.2 SD-GAN FORMULATION
Consider the data’s identity as a random variable I in a discrete index set I . We seek to learn a latent representation that conveniently decomposes the variation in the real data into two parts: 1) due to I , and 2) due to the other factors of variation in the data, packaged as a random variable O. Ideally, the decomposition of the variation in the data into I and O should correspond exactly to a decomposition of the latent space Z = ZI ×ZO. This would permit convenient interpolation and other operations on the inferred subspaces ZI and ZO. A conventional GAN samples I,O from their joint distribution. Such a GAN’s generative model samples directly from an unstructured prior over the latent space. It does not disentangle the variation in O and I , for instance by modeling conditional distributions PG(O | I = i), but only models their average with respect to the prior on I .
Our SD-GAN method learns such a latent space decomposition, partitioning the coordinates of Z into two parts representing the subspaces, so that any z ∈ Z can be written as the concatenation [zI; zO] of its identity representation $z_I \in \mathbb{R}^{d_I} = Z_I$ and its contingent aspect representation $z_O \in \mathbb{R}^{d_O} = Z_O$. SD-GANs achieve this through a pairwise training scheme in which each sample from the real data consists of $x^1, x^2 \sim P_R(x \mid I = i)$, a pair of images with a common identity i ∈ I. Each sample from the generator consists of $G(z^1), G(z^2) \sim P_G(z \mid Z_I = z_I)$, a pair of images generated from a common identity vector zI ∈ ZI but i.i.d. observation vectors $z_O^1, z_O^2 \in Z_O$. We assign identity-matched pairs from PR the label 1 and zI-matched pairs from PG the label 0. The discriminator can thus learn to reject pairs for either of two primary reasons: 1) not photorealistic or 2) not plausibly depicting the same subject. See Algorithm 1 for SD-GAN training pseudocode.
2.3 SD-GAN DISCRIMINATOR ARCHITECTURE
With SD-GANs, there is no need to alter the architecture of the generator. However, the discriminator must now act upon two images, producing a single output. Moreover, the effects of the two input images x1,x2 on the output score are not independent. Two images might be otherwise photorealistic but deserve rejection because they clearly depict different identities. To this end, we devise two novel discriminator architectures to adapt DCGAN and BEGAN respectively. In both cases, we first separately encode each image using the same convolutional neural network De (Figure 2). We choose this Siamese setup (Bromley, 1994; Chopra et al., 2005) as our problem is symmetrical in the images, and thus it’s sensible to share weights between the encoders.
To adapt DCGAN, we stack the feature maps De(x1) and De(x2) along the channel axis, applying one additional strided convolution. This allows the network to further aggregate information from the two images before flattening and fully connecting to a sigmoid output. For BEGAN, because the discriminator is an autoencoder, our architecture is more complicated. After encoding each image, we concatenate the representations $[D_e(x^1); D_e(x^2)] \in \mathbb{R}^{2(d_I + d_O)}$ and apply one fully connected bottleneck layer $\mathbb{R}^{2(d_I + d_O)} \rightarrow \mathbb{R}^{d_I + 2 d_O}$ with linear activation. In alignment with BEGAN, the SD-BEGAN bottleneck has the same dimensionality as the tuple of latent codes $(z_I, z_O^1, z_O^2)$ that generated the pair of images. Following the bottleneck, we apply a second FC layer $\mathbb{R}^{d_I + 2 d_O} \rightarrow \mathbb{R}^{2(d_I + d_O)}$, taking the first dI + dO components of its output to be the input to the first decoder and the second dI + dO components to be the input to the second decoder. The shared intermediate layer gives SD-BEGAN a mechanism to push apart matched and unmatched pairs. We specify our exact architectures in full detail in Appendix E.
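The following is a sketch of the Siamese SD-DCGAN discriminator head described above; PyTorch is assumed and the encoder output size (512 channels at 4x4) is illustrative rather than the exact Appendix E specification.

```python
import torch
import torch.nn as nn

class SiameseDiscriminator(nn.Module):
    def __init__(self, encoder, enc_channels=512, enc_hw=4):
        super().__init__()
        self.encoder = encoder                            # shared D_e applied to each image
        self.conv = nn.Conv2d(2 * enc_channels, enc_channels,
                              kernel_size=3, stride=2, padding=1)
        self.fc = nn.Linear(enc_channels * (enc_hw // 2) ** 2, 1)

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)       # Siamese encoding (weights shared)
        h = torch.cat([h1, h2], dim=1)                    # stack feature maps along channels
        h = torch.relu(self.conv(h))                      # one extra strided convolution
        return torch.sigmoid(self.fc(h.flatten(1)))       # joint real/fake-and-matched score
```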
3 EXPERIMENTS
We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley et al., 2015). Both datasets contain a large number of identities (people and shoes, respectively) with multiple observations of each. The “in-the-wild” nature of the celebrity face images offers a richer test bed for our method as both identities and contingent factors are significant sources of variation. In contrast, Amazon’s shoe images tend to vary only with camera perspective for a given product, making this data useful for sanity-checking our approach.
Faces From the aligned face images in the MS-Celeb-1M dataset, we select 12,500 celebrities at random and 8 associated images of each, resizing them to 64x64 pixels. We split the celebrities into subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). The dataset has a small number of duplicate images and some label noise (images matched to the wrong celebrity). We detect and
remove duplicates by hashing the images, but we do not rid the data of label noise. We scale the pixel values to [−1, 1], performing no additional preprocessing or data augmentation.
Shoes Synthesizing novel product images is another promising domain for our method. In our shoes dataset, product photographs are captured against white backgrounds and primarily differ in orientation and distance. Accordingly, we expect that SD-GAN training will allocate the observation latent space to capture these aspects. We choose to study shoes as a prototypical example of a category of product images. The Amazon dataset contains around 3,000 unique products with the category “Shoe” and multiple product images. We use the same 80%, 10%, 10% split and again hash the images to ensure that the splits are disjoint. There are 6.2 photos of each product on average.
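Both splits rely on hashing images to detect exact duplicates; a minimal sketch of that idea is below (the specific hash function used is not stated, so MD5 over raw image bytes is an assumption).

```python
import hashlib

def deduplicate(image_bytes_list):
    seen, kept = set(), []
    for raw in image_bytes_list:                 # raw JPEG/PNG bytes for each image
        digest = hashlib.md5(raw).hexdigest()
        if digest not in seen:                   # keep only the first copy of each exact duplicate
            seen.add(digest)
            kept.append(raw)
    return kept
```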
3.1 TRAINING DETAILS
We train SD-DCGANs on both of our datasets for 500,000 iterations using batches of 16 identity-matched pairs. To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with hyperparameters α = 2e−4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). We also consider a non-Siamese discriminator that simply stacks the channels of the pair of real or fake images before encoding (SD-DCGAN-SC).
As in (Radford et al., 2016), we sample latent vectors $z \sim \mathrm{Uniform}([-1, 1]^{100})$. For SD-GANs, we partition the latent codes according to $z_I \in \mathbb{R}^{d_I}$, $z_O \in \mathbb{R}^{100 - d_I}$ using values of dI ∈ {25, 50, 75}. Our algorithm can be trivially applied with k-wise training (vs. pairwise). To explore the effects of using k > 2, we also experiment with an SD-DCGAN where we sample k = 4 instances each from PG(z | ZI = zI) for some zI ∈ ZI and from PR(x | I = i) for some i ∈ I. For all experiments, unless otherwise stated, we use dI = 50 and k = 2.
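The latent partitioning and its k-wise generalization can be sketched as follows (PyTorch assumed); with k = 2 this reduces to the pairwise sampling used in most experiments.

```python
import torch

def sample_identity_group(d_i=50, d_total=100, k=2):
    z_i = 2 * torch.rand(1, d_i) - 1                       # a single identity code z_I
    z_o = 2 * torch.rand(k, d_total - d_i) - 1             # k independent observation codes
    return torch.cat([z_i.expand(k, -1), z_o], dim=1)      # k latent vectors [z_I ; z_O^j]

# e.g. latents for four observations of one synthetic identity:
# z = sample_identity_group(k=4); images = G(z)
```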
We also train an SD-BEGAN on both of our datasets. The increased complexity of the SD-BEGAN model significantly increases training time, limiting our ability to perform more-exhaustive hyperparameter validation (as we do for SD-DCGAN). We use the Adam optimizer with the default hyperparameters from (Kingma & Ba, 2015) for our SD-BEGAN experiments. While results from our SD-DCGAN k = 4 model are compelling, an experiment with a k = 4 variant of SD-BEGAN resulted in early mode collapse (Appendix F); hence, we excluded SD-BEGAN k = 4 from our evaluation.
We also compare to a DCGAN architecture trained using the auxiliary classifier GAN (AC-GAN) method (Odena et al., 2017). AC-GAN differs from SD-GAN in two key ways: 1) random identity codes zI are replaced by a one-hot embedding over all the identities in the training set (matrix of size 10000x50); 2) the AC-GAN method encourages that generated photos depict the proper identity by tasking its discriminator with predicting the identity of the generated or real image. Unlike SD-GANs, the AC-DCGAN model cannot imagine new identities; when generating from AC-DCGAN (for our quantitative comparisons to SD-GANs), we must sample a random identity from those existing in the training data.
3.2 EVALUATION
The evaluation of generative models is a fraught topic. Quantitative measures of sample quality can be poorly correlated with each other (Theis et al., 2016). Accordingly, we design an evaluation to match conceivable uses of our algorithm. Because we hope to produce diverse samples that humans deem to depict the same person, we evaluate the identity coherence of SD-GANs and baselines using both a pretrained face verification model and crowd-sourced human judgments obtained through Amazon’s Mechanical Turk platform.
3.2.1 QUANTITATIVE
Recent advancements in face verification using deep convolutional neural networks (Schroff et al., 2015; Parkhi et al., 2015; Wen et al., 2016) have yielded accuracy rivaling humans. For our evaluation, we procure FaceNet, a publicly-available face verifier based on the Inception-ResNet architecture (Szegedy et al., 2017). The FaceNet model was pretrained on the CASIA-WebFace dataset (Yi et al., 2014) and achieves 98.6% accuracy on the LFW benchmark (Huang et al., 2012).3
FaceNet ingests normalized, 160x160 color images and produces an embedding $f(x) \in \mathbb{R}^{128}$. The training objective for FaceNet is to learn embeddings that minimize the L2 distance between matched pairs of faces and maximize the distance for mismatched pairs. Accordingly, the embedding space yields a function for measuring the similarity between two faces x1 and x2: $D(x_1, x_2) = \|f(x_1) - f(x_2)\|_2^2$. Given two images, x1 and x2, we label them as a match if $D(x_1, x_2) \le \tau_v$ where τv is the accuracy-maximizing threshold on a class-balanced set of pairs from MS-Celeb-1M validation data. We use the same threshold for evaluating both real and synthetic data with FaceNet.
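A sketch of this verification rule and of the threshold selection is below. `embed(x)` stands in for the FaceNet forward pass f(x), and `val_pairs` is a hypothetical list of (x1, x2, same_identity) tuples from the balanced validation set.

```python
import numpy as np

def pair_distance(embed, x1, x2):
    d = embed(x1) - embed(x2)
    return float(np.sum(d * d))                       # squared L2 distance in embedding space

def pick_threshold(embed, val_pairs):
    dists = np.array([pair_distance(embed, x1, x2) for x1, x2, _ in val_pairs])
    labels = np.array([same for _, _, same in val_pairs], dtype=bool)
    best_tau, best_acc = 0.0, 0.0
    for tau in np.unique(dists):                      # accuracy-maximizing threshold tau_v
        acc = float(np.mean((dists <= tau) == labels))
        if acc > best_acc:
            best_tau, best_acc = tau, acc
    return best_tau

def same_person(embed, x1, x2, tau_v):
    return pair_distance(embed, x1, x2) <= tau_v      # True -> predicted "same person"
```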
We compare the performance of FaceNet on pairs of images from the MS-Celeb-1M test set against generated samples from our trained SD-GAN models and AC-DCGAN baseline. To match FaceNet’s training data, we preprocess all images by resizing from 64x64 to 160x160, normalizing each image individually. We prepare 10,000 pairs from MS-Celeb-1M, half identity-matched and half unmatched. From each generative model, we generate 5,000 pairs each with $z_I^1 = z_I^2$ and 5,000 pairs with $z_I^1 \ne z_I^2$. For each sample, we draw observation vectors zO randomly. We also want to ensure that identity-matched images produced by the generative models are diverse. To this end, we propose an intra-identity sample diversity (ID-Div) metric. The multi-scale structural similarity (MS-SSIM) (Wang et al., 2004) metric reports the similarity of two images on a scale from 0 (no resemblance) to 1 (identical images). We report 1 minus the mean MS-SSIM for all pairs of identity-matched images as ID-Div. To measure the overall sample diversity (All-Div), we also compute 1 minus the mean similarity of 10k pairs with random identities.

3 “20170214-092102” pretrained model from https://github.com/davidsandberg/facenet
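A sketch of the diversity computation follows. The paper uses MS-SSIM (Wang et al., 2004); here single-scale SSIM from scikit-image stands in for it purely for illustration, which is an assumption. `pairs` is an iterable of (img1, img2) float arrays with values in [0, 1].

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def diversity(pairs):
    sims = [ssim(a, b, channel_axis=-1, data_range=1.0) for a, b in pairs]
    return 1.0 - float(np.mean(sims))   # ID-Div over identity-matched pairs, All-Div over random pairs
```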
In Table 1, we report the area under the receiver operating characteristic curve (AUC), accuracy, and false accept rate (FAR) of FaceNet (at threshold τv) on the real and generated data. We also report our proposed diversity statistics. FaceNet verifies pairs from the real data with 87% accuracy compared to 86% on pairs from our SD-BEGAN model. Though this is comparable to the accuracy achieved on pairs from the AC-DCGAN baseline, our model produces samples that are more diverse in pixel space (as measured by ID-Div and All-Div). FaceNet has a higher but comparable FAR for pairs from SD-GANs than those from AC-DCGAN; this indicates that SD-GANs may produce images that are less semantically diverse on average than AC-DCGAN.
We also report the combined memory footprint of G and D for all methods in Table 1. For conditional GAN approaches, the number of parameters grows linearly with the number of identities in the training data. Especially in the case of the AC-GAN, where the discriminator computes a softmax over all identities, linear scaling may be prohibitive. While our 10k-identity subset of MS-Celeb-1M requires a 131MB AC-DCGAN model, an AC-DCGAN for all 1M identities would be over 8GB, with more than 97% of the parameters devoted to the weights in the discriminator’s softmax layer. In contrast, the complexity of SD-GAN is constant in the number of identities.
3.2.2 QUALITATIVE
In addition to validating that identity-matched SD-GAN samples are verified by FaceNet, we also demonstrate that humans are similarly convinced through experiments using Mechanical Turk. For these experiments, we use balanced subsets of 1,000 pairs from MS-Celeb-1M and the most promising generative methods from our FaceNet evaluation. We ask human annotators to determine if each pair depicts the “same person” or “different people”. Annotators are presented with batches of ten pairs at a time. Each pair is presented to three distinct annotators and predictions are determined by majority vote. Additionally, to provide a benchmark for assessing the quality of the Mechanical Turk ensembles, we (the authors) manually judged 200 pairs from MS-Celeb-1M. Results are in Table 1.
For all datasets, human annotators on Mechanical Turk answered “same person” less frequently than FaceNet when the latter uses the accuracy-maximizing threshold τv. Even on real data, balanced so that 50% of pairs are identity-matched, annotators report “same person” only 28% of the time (compared to the 41% of FaceNet). While annotators achieve higher accuracy on pairs from AC-DCGAN than pairs from SD-BEGAN, they also answer “same person” 16% more often for AC-DCGAN pairs than real data. In contrast, annotators answer “same person” at the same rate for SD-BEGAN pairs as real data. This may be attributable to the lower sample diversity produced by AC-DCGAN. Samples from SD-DCGAN and SD-BEGAN are shown in Figures 3 and 1 respectively.
4 RELATED WORK
Style transfer and novel view synthesis are active research areas. Early attempts to disentangle style and content manifolds used factored tensor representations (Tenenbaum & Freeman, 1997; Vasilescu & Terzopoulos, 2002; Elgammal & Lee, 2004; Tang et al., 2013), applying their results to face image synthesis. More recent work focuses on learning hierarchical feature representations using deep convolutional neural networks to separate identity and pose manifolds for faces (Zhu et al., 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Kulkarni et al., 2015; Oord et al., 2016; Yan et al., 2016) and products (Dosovitskiy et al., 2015). Gatys et al. (2016) use features of a convolutional network, pretrained for image recognition, as a means for discovering content and style vectors.
Since their introduction (Goodfellow et al., 2014), GANs have been used to generate increasingly high-quality images (Radford et al., 2016; Zhao et al., 2017b; Berthelot et al., 2017). Conditional GANs (cGANs), introduced by Mirza & Osindero (2014), extend GANs to generate class-conditional data. Odena et al. (2017) propose auxiliary classifier GANs, combining cGANs with a semi-supervised discriminator (Springenberg, 2015). Recently, cGANs have been used to ingest text (Reed et al., 2016) and full-resolution images (Isola et al., 2017; Liu et al., 2017; Zhu et al., 2017) as conditioning information, addressing a variety of image-to-image translation and style transfer tasks. Chen et al. (2016) devise an information-theoretic extension to GANs in which they maximize the mutual information between a subset of latent variables and the generated data. Their unsupervised method
appears to disentangle some intuitive factors of variation, but these factors may not correspond to those explicitly disentangled by SD-GANs.
Several related papers use GANs for novel view synthesis of faces. Tran et al. (2017); Huang et al. (2017); Yin et al. (2017a;b); Zhao et al. (2017a) all address synthesis of different body/facial poses conditioned on an input image (representing identity) and a fixed number of pose labels. Antipov et al. (2017) propose conditional GANs for synthesizing artificially-aged faces conditioned on both a face image and an age vector. These approaches all require explicit conditioning on the relevant factor (such as rotation, lighting and age) in addition to an identity image. In contrast, SD-GANs can model these contingent factors implicitly (without supervision).
Mathieu et al. (2016) combine GANs with a traditional reconstruction loss to disentangle identity. While their approach trains with an encoder-decoder generator, they enforce a variational bound on the encoder embedding, enabling them to sample from the decoder without an input image. Experiments with their method only address small (28x28) grayscale face images, and their training procedure is complex to reproduce. In contrast, our work offers a simpler approach and can synthesize higher-resolution, color photographs.
One might think of our work as offering the generative view of the Siamese networks often favored for learning similarity metrics (Bromley, 1994; Chopra et al., 2005). Such approaches are used for discriminative tasks like face or signature verification that share the many-classes-with-few-examples structure that we study here. In our work, we adopt a Siamese architecture in order to enable the discriminator to differentiate between matched and unmatched pairs. Recent work by Liu & Tuzel (2016) proposes a GAN architecture with weight sharing across multiple generators and discriminators, but with a different problem formulation and objective from ours.
5 DISCUSSION
Our evaluation demonstrates that SD-GANs can disentangle those factors of variation corresponding to identity from the rest. Moreover, with SD-GANs we can sample never-before-seen identities, a benefit not shared by conditional GANs. In Figure 3, we demonstrate that by varying the observation vector zO, SD-GANs can change the color of clothing, add or remove sunglasses, or change facial pose. They can also perturb the lighting, color saturation, and contrast of an image, all while keeping the apparent identity fixed. We note, subjectively, that samples from SD-DCGAN tend to appear less photorealistic than those from SD-BEGAN. Given a generator trained with SD-GAN, we can independently interpolate along the identity and observation manifolds (Figure 5).
On the shoe dataset, we find that the SD-DCGAN model produces convincing results. As desired, manipulating zI while keeping zO fixed yields distinct shoes in consistent poses (Figure 4). The identity code zI appears to capture the broad categories of shoes (sneakers, flip-flops, boots, etc.). Surprisingly, neither original BEGAN nor SD-BEGAN can produce diverse shoe images (Appendix G).
In this paper, we presented SD-GANs, a new algorithm capable of disentangling factors of variation according to known commonalities. We see several promising directions for future work. One logical extension is to disentangle latent factors corresponding to more than one known commonality. We also plan to apply our approach in other domains such as identity-conditioned speech synthesis.
ACKNOWLEDGEMENTS
The authors would like to thank Anima Anandkumar, John Berkowitz and Miller Puckette for their helpful feedback on this work. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI1053575 (Towns et al., 2014). GPUs used in this research were donated by the NVIDIA Corporation.
A ESTIMATING LATENT CODES
We estimate latent vectors for unseen images and demonstrate that the disentangled representations of SD-GANs can be used to depict the estimated identity with different contingent factors. In order to find a latent vector ẑ such that G(ẑ) (pretrained G) is similar to an unseen image x, we can minimize the distance between x and G(ẑ): $\min_{\hat{z}} \|G(\hat{z}) - x\|_2^2$ (Lipton & Tripathi, 2017). In Figure 6, we depict estimation and linear interpolation across both subspaces for two pairs of images using SD-BEGAN. We also display the corresponding source images being estimated. For both pairs, ẑI (identity) is consistent in each row and ẑO (observation) is consistent in each column.
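A sketch of this latent estimation by gradient descent is below (PyTorch assumed; the step count, learning rate, and the clamp to the prior's support are illustrative choices of ours).

```python
import torch

def estimate_latent(G, x, z_dim=100, steps=1000, lr=1e-2):
    z_hat = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z_hat], lr=lr)
    for _ in range(steps):
        loss = ((G(z_hat) - x) ** 2).sum()       # ||G(z_hat) - x||_2^2
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            z_hat.clamp_(-1.0, 1.0)              # keep z_hat inside the prior's support
    return z_hat.detach()                        # first d_I dims ~ identity, rest ~ observation
```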
B PAIRWISE DISCRIMINATION OF EMBEDDINGS AND ENCODINGS
In Section 3.1, we describe an AC-GAN (Odena et al., 2017) baseline which uses an embedding matrix over real identities as latent identity codes ($G : i, z_O \mapsto \hat{x}$). In place of random identity vectors, we tried combining this identity representation with pairwise discrimination (in the style of SD-GAN). In this experiment, the discriminator receives either two real images with the same identity $(x_i^1, x_i^2)$, or a real image with label i and a synthetic image with label i $(x_i^1, G(i, z_O))$. All other hyperparameters are the same as in our SD-DCGAN experiment (Section 3.1). We show results in Figure 7.
In Appendix C, we detail a modification of the DR-GAN (Tran et al., 2017) method which uses an encoding network Ge to transform images to identity representations ($G_d : G_e(x), z_O \mapsto \hat{x}$). We also tried combining this encoder-decoder approach with pairwise discrimination. The discriminator receives either two real images with the same identity $(x_i^1, x_i^2)$, or $(x_i^1, G_d(G_e(x_i^1), z_O))$. We show results in Figure 8.
While these experiments are exploratory and not part of our principal investigation, we find the results to be qualitatively promising. We are not the first to propose pairwise discrimination with pairs of (real, real) or (real, fake) images in GANs (Pathak et al., 2016; Isola et al., 2017).
C EXPLORATORY EXPERIMENT WITH DR-GANS
Tran et al. (2017) propose Disentangled Representation learning-GAN (DR-GAN), an approach to face frontalization with a setup similar to our SD-GAN algorithm. The (single-image) DR-GAN generator G (a composition of Ge and Gd) accepts an input image x, a pose code c, and a noise vector z. The DR-GAN discriminator receives either x or x̂ = Gd(Ge(x), c, z). In the style of (Springenberg, 2015), the discriminator is tasked with determining not only if the image is real or fake, but also classifying the pose c, suggesting a disentangled representation to the generator. Through their experiments, they demonstrate that DR-GAN can explicitly disentangle pose and illumination (c) from the rest of the latent space (Ge(x); z).
In addition to our AC-DCGAN baseline (Odena et al., 2017), we tried modifying DR-GAN to only disentangle identity (rather than both identity and pose as in the original paper). We used the DCGAN (Radford et al., 2016) discriminator architecture (Table 4) as Ge, linearly projecting the final convolutional layer to $G_e(x) \in \mathbb{R}^{50}$ (in alignment with our SD-GAN experiments). We altered the discriminator to predict the identity of x or x̂, rather than pose information (which is unknown in our experimental setup). With these modifications, Ge(x) is analogous to zI in the SD-GAN generator, and z is analogous to zO. Furthermore, this setup is identical to the AC-DCGAN baseline
except that the embedding matrix is replaced by an encoding network Ge. Unfortunately, we found that the generator quickly learned to produce a single output image x̂ for each input x regardless of observation code z (Figure 9). Accordingly, we excluded this experiment from our evaluation (Section 3.2).
D IMAGINING IDENTITIES WITH AC-GAN
As stated in Section 3.1, AC-GANs (Odena et al., 2017) provide no obvious way to imagine new identities. For our evaluation (Section 3.2), the AC-GAN generator receives identity input $z_I \in [0, 1]^{10000}$: a one-hot vector over all identities. One possible approach to imagining new identities would be to query a trained AC-GAN generator with a random vector zI such that $\sum_{i=1}^{10000} z_I[i] = 1$. We found that this strategy produced little identity variety (Figure 10) compared to the normal one-hot strategy (Figure 11) and excluded it from our evaluation.
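One concrete way to draw such a random vector on the simplex is a Dirichlet sample, sketched below; the exact sampling scheme used for this appendix experiment is not specified, so this choice is an assumption.

```python
import numpy as np

def random_identity_vector(n_identities=10000, alpha=1.0, seed=None):
    rng = np.random.default_rng(seed)
    z_i = rng.dirichlet(alpha * np.ones(n_identities))   # nonnegative entries summing to 1
    return z_i.astype(np.float32)                        # fed to the AC-GAN generator in place of a one-hot
```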
E ARCHITECTURE DESCRIPTIONS
We list here the full architectural details for our SD-DCGAN and SD-BEGAN models. In these descriptions, k is the number of images that the generator produces and discriminator observes per identity (usually 2 for pairwise training), and dI is the number of dimensions in the latent space ZI (identity). In our experiments, the dimensionality of ZO is always 100 − dI. As a concrete example, the bottleneck layer of the SD-BEGAN discriminator autoencoder (“fc2” in Table 6) with k = 2, dI = 50 has output dimensionality 150.
We emphasize that generators are parameterized by k in the tables only for clarity and symmetry with the discriminators. Implementations need not modify the generator; instead, k can be collapsed into the batch size.
For the stacked-channels versions of these discriminators, we simply change the number of input image channels from 3 to 3k and set k = 1 wherever k appears in the table.
F FACE SAMPLES
We present samples from each model reported in Table 1 for qualitative comparison. In each matrix, zI is the same across all images in a row and zO is the same across all images in a column. We draw identity and observation vectors randomly for these samples.
G SHOE SAMPLES
We present samples from an SD-DCGAN and SD-BEGAN trained on our shoes dataset.

1. What is the novel approach introduced by the paper in the field of disentangled generative models?
2. How effective and simple is the proposed method in disentangling identity from other factors of variation?
3. Can the method be extended to disentangle more than two factors, such as lighting, pose, viewpoint, etc.?
4. How does the reviewer assess the significance and impact of the paper's contribution to the line of work on disentangled generative models?
5. What is the suggestion made by the reviewer for an additional experiment to demonstrate the model's capability?

Review
Quality
The paper is well written and the model is simple and clearly explained. The idea for disentangling identity from other factors of variation using identity-matched image pairs is quite simple, but the experimental results on faces and shoes are impressive.
Clarity
The model and its training objective are simple and clearly explained.
Originality
There are now many, many papers on generative models with disentangled feature representations, including with GANs. However, to my knowledge this is the first paper showing very compelling results using this particular setup of identity-aligned images.
Significance
Disentangled generative models are an important line of work in my opinion. This paper presents a very simple but apparently effective way of disentangling identity from other factors, and implements in two of the more recent GAN architectures.
Suggestion for an experiment - can you do few-shot image generation? A simple way to do it would be to train an encoder from image → identity encoding. Then, given one or a few images of a new person’s face or a new shoe, you could estimate the identity latent variable, and then generate many additional samples.
Pros
- Very simple and effective disentangling technique for GANs.
- Great execution, compelling samples on both faces and shoes.
Cons
- Only two factors of variation are disentangled in this model. Could it be generalized to specify more than just two, e.g. lighting, pose, viewpoint, etc.?
- Not much technically new or surprising compared to past work on disentangling generative models. |
ICLR | Title
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
Abstract
We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.
1 INTRODUCTION
In many domains, a suitable generative process might consist of several stages. To generate a photograph of a product, we might wish to first sample from the space of products, and then from the space of photographs of that product. Given such disentangled representations in a multistage generative process, an online retailer might diversify its catalog, depicting products in a wider variety of settings. A retailer could also flip the process, imagining new products in a fixed setting. Datasets for such domains often contain many labeled identities with fewer observations of each (e.g. a collection of face portraits with thousands of people and ten photos of each). While we may know the identity of the subject in each photograph, we may not know the contingent aspects of the observation (such as lighting, pose and background). This kind of data is ubiquitous; given a set of commonalities, we might want to incorporate this structure into our latent representations.
Generative adversarial networks (GANs) learn mappings from latent codes z in some low-dimensional space Z to points in the space of natural data X (Goodfellow et al., 2014). They achieve this power through an adversarial training scheme pitting a generative model G : Z 7→ X against a discriminative model D : X 7→ [0, 1] in a minimax game. While GANs are popular, owing to their ability to generate high-fidelity images, they do not, in their original form, explicitly disentangle the latent factors according to known commonalities.
In this paper, we propose Semantically Decomposed GANs (SD-GANs), which encourage a specified portion of the latent space to correspond to a known source of variation.1,2 The technique
1Web demo: https://chrisdonahue.github.io/sdgan 2Source code: https://github.com/chrisdonahue/sdgan
decomposes the latent code Z into one portion ZI corresponding to identity, and the remaining portion ZO corresponding to the other contingent aspects of observations. SD-GANs learn through a pairwise training scheme in which each sample from the real dataset consists of two distinct images with a common identity. Each sample from the generator consists of a pair of images with common zI ∈ ZI but differing zO ∈ ZO. In order to fool the discriminator, the generator must not only produce diverse and photorealistic images, but also images that depict the same identity when zI is fixed. For SD-GANs, we modify the discriminator so that it can determine whether a pair of samples constitutes a match.
As a case study, we experiment with a dataset of face photographs, demonstrating that SD-GANs can generate contrasting images of the same subject (Figure 1; interactive web demo in footnote on previous page). The generator learns that certain properties are free to vary across observations but not identity. For example, SD-GANs learn that pose, facial expression, hirsuteness, grayscale vs. color, and lighting can all vary across different photographs of the same individual. On the other hand, the aspects that are more salient for facial verification remain consistent as we vary the observation code zO. We also train SD-GANs on a dataset of product images, containing multiple photographs of each product from various perspectives (Figure 4).
We demonstrate that SD-GANs trained on faces generate stylistically-contrasting, identity-matched image pairs that human annotators and a state-of-the-art face verification algorithm recognize as depicting the same subject. On measures of identity coherence and image diversity, SD-GANs perform comparably to a recent conditional GAN method (Odena et al., 2017); SD-GANs can also imagine new identities, while conditional GANs are limited to generating existing identities from the training data.
2 SEMANTICALLY DECOMPOSED GENERATIVE ADVERSARIAL NETWORKS
Before introducing our algorithm, we briefly review the prerequisite concepts.
2.1 GAN PRELIMINARIES
GANs leverage the discriminative power of neural networks to learn generative models. The generative model G ingests latent codes z, sampled from some known prior PZ , and produces G(z), a sample of an implicit distribution PG. The learning process consists of a minimax game between G, parameterized by θG, and a discriminative modelD, parameterized by θD. In the original formulation, the discriminative model tries to maximize log likelihood, yielding
min G max D V (G,D) = Ex∼PR [logD(x)] +Ez∼PZ [log(1−D(G(z)))]. (1)
Training proceeds as follows: For k iterations, sample one minibatch from the real distribution PR and one from the distribution of generated images PG, updating discriminator weights θD to increase V (G,D) by stochastic gradient ascent. Then sample a minibatch from PZ , updating θG to decrease V (G,D) by stochastic gradient descent.
Algorithm 1 Semantically Decomposed GAN Training 1: for n in 1:NumberOfIterations do 2: for m in 1:MinibatchSize do 3: Sample one identity vector zI ∼ Uniform([−1, 1]dI ). 4: Sample two observation vectors z1O, z 2 O ∼ Uniform([−1, 1]dO ).
5: z1 ← [zI ; z1O], z2 ← [zI ; z2O]. 6: Generate pair of images G(z1), G(z2), adding them to the minibatch with label 0 (fake). 7: for m in 1:MinibatchSize do 8: Sample one identity i ∈ I uniformly at random from the real data set. 9: Sample two images of i without replacement x1,x2 ∼ PR(x|I = i).
10: Add the pair to the minibatch, assigning label 1 (real). 11: Update discriminator weights by θD ← θD +∇θDV (G,D) using its stochastic gradient. 12: Sample another minibatch of identity-matched latent vectors z1, z2. 13: Update generator weights by stochastic gradient descent θG ← θG −∇θGV (G,D).
Zhao et al. (2017b) propose energy-based GANs (EBGANs), in which the discriminator can be viewed as an energy function. Specifically, they devise a discriminator consisting of an autoencoder: D(x) = Dd(De(x)). In the minimax game, the discriminator’s weights are updated to minimize the reconstruction error L(x) = ||x − D(x)|| for real data, while maximizing the error L(G(z)) for the generator. More recently, Berthelot et al. (2017) extend this work, introducing Boundary Equilibrium GANs (BEGANs), which optimize the Wasserstein distance (reminiscent of Wasserstein GANs (Arjovsky et al., 2017)) between autoencoder loss distributions, yielding the formulation:
VBEGAN (G,D) = L(x)− L(G(z)). (2)
Additionally, they introduce a method for stabilizing training. Positing that training becomes unstable when the discriminator cannot distinguish between real and generated images, they introduce a new hyperparameter γ, updating the value function on each iteration to maintain a desired ratio between the two reconstruction errors: E[L(G(z))] = γE[L(x)]. The BEGAN model produces what appear to us, subjectively, to be the sharpest images of faces yet generated by a GAN. In this work, we adapt both the DCGAN (Radford et al., 2016) and BEGAN algorithms to the SD-GAN training scheme.
2.2 SD-GAN FORMULATION
Consider the data’s identity as a random variable I in a discrete index set I . We seek to learn a latent representation that conveniently decomposes the variation in the real data into two parts: 1) due to I , and 2) due to the other factors of variation in the data, packaged as a random variable O. Ideally, the decomposition of the variation in the data into I and O should correspond exactly to a decomposition of the latent space Z = ZI ×ZO. This would permit convenient interpolation and other operations on the inferred subspaces ZI and ZO. A conventional GAN samples I,O from their joint distribution. Such a GAN’s generative model samples directly from an unstructured prior over the latent space. It does not disentangle the variation in O and I , for instance by modeling conditional distributions PG(O | I = i), but only models their average with respect to the prior on I .
Our SD-GAN method learns such a latent space decomposition, partitioning the coordinates of Z into two parts representing the subspaces, so that any z ∈ Z can be written as the concatenation [zI ; zO] of its identity representation zI ∈ RdI = ZI and its contingent aspect representation zO ∈ RdO = ZO. SD-GANs achieve this through a pairwise training scheme in which each sample from the real data consists of x1,x2 ∼ PR(x | I = i), a pair of images with a common identity i ∈ I . Each sample from the generator consists of G(z1), G(z2) ∼ PG(z | ZI = zI), a pair of images generated from a common identity vector zI ∈ ZI but i.i.d. observation vectors z1O, z2O ∈ ZO. We assign identity-matched pairs from PR the label 1 and zI -matched pairs from PG the label 0. The discriminator can thus learn to reject pairs for either of two primary reasons: 1) not photorealistic or 2) not plausibly depicting the same subject. See Algorithm 1 for SD-GAN training pseudocode.
2.3 SD-GAN DISCRIMINATOR ARCHITECTURE
With SD-GANs, there is no need to alter the architecture of the generator. However, the discriminator must now act upon two images, producing a single output. Moreover, the effects of the two input images x1,x2 on the output score are not independent. Two images might be otherwise photorealistic but deserve rejection because they clearly depict different identities. To this end, we devise two novel discriminator architectures to adapt DCGAN and BEGAN respectively. In both cases, we first separately encode each image using the same convolutional neural network De (Figure 2). We choose this Siamese setup (Bromley, 1994; Chopra et al., 2005) as our problem is symmetrical in the images, and thus it’s sensible to share weights between the encoders.
To adapt DCGAN, we stack the feature maps De(x1) and De(x2) along the channel axis, applying one additional strided convolution. This allows the network to further aggregate information from the two images before flattening and fully connecting to a sigmoid output. For BEGAN, because the discriminator is an autoencoder, our architecture is more complicated. After encoding each image, we concatenate the representations [De(x1);De(x2)] ∈ R2(dI+dO) and apply one fully connected bottleneck layer R2(dI+dO) ⇒ RdI+2dO with linear activation. In alignment with BEGAN, the SD-BEGAN bottleneck has the same dimensionality as the tuple of latent codes (zI , z1O, z 2 O) that generated the pair of images. Following the bottleneck, we apply a second FC layer RdI+2dO ⇒ R2(dI+dO), taking the first dI + dO components of its output to be the input to the first decoder and the second dI + dO components to be the input to the second decoder. The shared intermediate layer gives SD-BEGAN a mechanism to push apart matched and unmatched pairs. We specify our exact architectures in full detail in Appendix E.
3 EXPERIMENTS
We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley et al., 2015). Both datasets contain a large number of identities (people and shoes, respectively) with multiple observations of each. The “in-the-wild” nature of the celebrity face images offers a richer test bed for our method as both identities and contingent factors are significant sources of variation. In contrast, Amazon’s shoe images tend to vary only with camera perspective for a given product, making this data useful for sanity-checking our approach.
Faces From the aligned face images in the MS-Celeb-1M dataset, we select 12,500 celebrities at random and 8 associated images of each, resizing them to 64x64 pixels. We split the celebrities into subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). The dataset has a small number of duplicate images and some label noise (images matched to the wrong celebrity). We detect and
remove duplicates by hashing the images, but we do not rid the data of label noise. We scale the pixel values to [−1, 1], performing no additional preprocessing or data augmentation.
Shoes Synthesizing novel product images is another promising domain for our method. In our shoes dataset, product photographs are captured against white backgrounds and primarily differ in orientation and distance. Accordingly, we expect that SD-GAN training will allocate the observation latent space to capture these aspects. We choose to study shoes as a prototypical example of a category of product images. The Amazon dataset contains around 3,000 unique products with the category “Shoe” and multiple product images. We use the same 80%, 10%, 10% split and again hash the images to ensure that the splits are disjoint. There are 6.2 photos of each product on average.
3.1 TRAINING DETAILS
We train SD-DCGANs on both of our datasets for 500,000 iterations using batches of 16 identitymatched pairs. To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with hyperparameters α = 2e−4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). We also consider a non-Siamese discriminator that simply stacks the channels of the pair of real or fake images before encoding (SD-DCGAN-SC).
As in (Radford et al., 2016), we sample latent vectors z ∼ Uniform([−1, 1]100). For SD-GANs, we partition the latent codes according to zI ∈ RdI , zO ∈ R100−dI using values of dI = [25, 50, 75]. Our algorithm can be trivially applied with k-wise training (vs. pairwise). To explore the effects of using k > 2, we also experiment with an SD-DCGAN where we sample k = 4 instances each from PG(z | ZI = zI) for some zI ∈ ZI and from PR(x | I = i) for some i ∈ I. For all experiments, unless otherwise stated, we use dI = 50 and k = 2.
We also train an SD-BEGAN on both of our datasets. The increased complexity of the SD-BEGAN model significantly increases training time, limiting our ability to perform more-exhaustive hyperparameter validation (as we do for SD-DCGAN). We use the Adam optimizer with the default hyperparameters from (Kingma & Ba, 2015) for our SD-BEGAN experiments. While results from our SD-DCGAN k = 4 model are compelling, an experiment with a k = 4 variant of SD-BEGAN resulted in early mode collapse (Appendix F); hence, we excluded SD-BEGAN k = 4 from our evaluation.
We also compare to a DCGAN architecture trained using the auxiliary classifier GAN (AC-GAN) method (Odena et al., 2017). AC-GAN differs from SD-GAN in two key ways: 1) random identity codes zI are replaced by a one-hot embedding over all the identities in the training set (matrix of size 10000x50); 2) the AC-GAN method encourages that generated photos depict the proper identity by tasking its discriminator with predicting the identity of the generated or real image. Unlike SD-GANs, the AC-DCGAN model cannot imagine new identities; when generating from AC-DCGAN (for our quantitative comparisons to SD-GANs), we must sample a random identity from those existing in the training data.
3.2 EVALUATION
The evaluation of generative models is a fraught topic. Quantitative measures of sample quality can be poorly correlated with each other (Theis et al., 2016). Accordingly, we design an evaluation to match conceivable uses of our algorithm. Because we hope to produce diverse samples that humans deem to depict the same person, we evaluate the identity coherence of SD-GANs and baselines using both a pretrained face verification model and crowd-sourced human judgments obtained through Amazon’s Mechanical Turk platform.
3.2.1 QUANTITATIVE
Recent advancements in face verification using deep convolutional neural networks (Schroff et al., 2015; Parkhi et al., 2015; Wen et al., 2016) have yielded accuracy rivaling humans. For our evaluation, we procure FaceNet, a publicly-available face verifier based on the Inception-ResNet architecture (Szegedy et al., 2017). The FaceNet model was pretrained on the CASIA-WebFace dataset (Yi et al., 2014) and achieves 98.6% accuracy on the LFW benchmark (Huang et al., 2012).3
FaceNet ingests normalized, 160x160 color images and produces an embedding f(x) ∈ R128. The training objective for FaceNet is to learn embeddings that minimize the L2 distance between matched pairs of faces and maximize the distance for mismatched pairs. Accordingly, the embedding space yields a function for measuring the similarity between two faces x1 and x2: D(x1,x2) = ||f(x1) − f(x2)||22. Given two images, x1 and x2, we label them as a match if D(x1,x2) ≤ τv where τv is the accuracy-maximizing threshold on a class-balanced set of pairs from MS-Celeb-1M validation data. We use the same threshold for evaluating both real and synthetic data with FaceNet.
We compare the performance of FaceNet on pairs of images from the MS-Celeb-1M test set against generated samples from our trained SD-GAN models and AC-DCGAN baseline. To match FaceNet’s training data, we preprocess all images by resizing from 64x64 to 160x160, normalizing each image individually. We prepare 10,000 pairs from MS-Celeb-1M, half identity-matched and half unmatched. From each generative model, we generate 5,000 pairs each with z1I = z 2 I and 5,000 pairs with z1I 6= z2I . For each sample, we draw observation vectors zO randomly. We also want to ensure that identity-matched images produced by the generative models are diverse. To this end, we propose an intra-identity sample diversity (ID-Div) metric. The multi-scale structural similarity (MS-SSIM) (Wang et al., 2004) metric reports the similarity of two images on a scale from 0 (no resemblance) to 1 (identical images). We report 1 minus the mean MS-SSIM for all pairs
3“20170214-092102” pretrained model from https://github.com/davidsandberg/facenet
of identity-matched images as ID-Div. To measure the overall sample diversity (All-Div), we also compute 1 minus the mean similarity of 10k pairs with random identities.
In Table 1, we report the area under the receiver operating characteristic curve (AUC), accuracy, and false accept rate (FAR) of FaceNet (at threshold τv) on the real and generated data. We also report our proposed diversity statistics. FaceNet verifies pairs from the real data with 87% accuracy compared to 86% on pairs from our SD-BEGAN model. Though this is comparable to the accuracy achieved on pairs from the AC-DCGAN baseline, our model produces samples that are more diverse in pixel space (as measured by ID-Div and All-Div). FaceNet has a higher but comparable FAR for pairs from SD-GANs than those from AC-DCGAN; this indicates that SD-GANs may produce images that are less semantically diverse on average than AC-DCGAN.
We also report the combined memory footprint ofG and D for all methods in Table 1. For conditional GAN approaches, the number of parameters grows linearly with the number of identities in the training data. Especially in the case of the AC-GAN, where the discriminator computes a softmax over all identities, linear scaling may be prohibitive. While our 10k-identity subset of MS-Celeb-1M requires a 131MB AC-DCGAN model, an AC-DCGAN for all 1M identities would be over 8GB, with more than 97% of the parameters devoted to the weights in the discriminator’s softmax layer. In contrast, the complexity of SD-GAN is constant in the number of identities.
3.2.2 QUALITATIVE
In addition to validating that identity-matched SD-GAN samples are verified by FaceNet, we also demonstrate that humans are similarly convinced through experiments using Mechanical Turk. For these experiments, we use balanced subsets of 1,000 pairs from MS-Celeb-1M and the most promising generative methods from our FaceNet evaluation. We ask human annotators to determine if each pair depicts the “same person” or “different people”. Annotators are presented with batches of ten pairs at a time. Each pair is presented to three distinct annotators and predictions are determined by majority vote. Additionally, to provide a benchmark for assessing the quality of the Mechanical Turk ensembles, we (the authors) manually judged 200 pairs from MS-Celeb-1M. Results are in Table 1.
For all datasets, human annotators on Mechanical Turk answered “same person” less frequently than FaceNet when the latter uses the accuracy-maximizing threshold τv. Even on real data, balanced so that 50% of pairs are identity-matched, annotators report “same person” only 28% of the time (compared to the 41% of FaceNet). While annotators achieve higher accuracy on pairs from ACDCGAN than pairs from SD-BEGAN, they also answer “same person” 16% more often for ACDCGAN pairs than real data. In contrast, annotators answer “same person” at the same rate for SD-BEGAN pairs as real data. This may be attributable to the lower sample diversity produced by AC-DCGAN. Samples from SD-DCGAN and SD-BEGAN are shown in Figures 3 and 1 respectively.
4 RELATED WORK
Style transfer and novel view synthesis are active research areas. Early attempts to disentangle style and content manifolds used factored tensor representations (Tenenbaum & Freeman, 1997; Vasilescu & Terzopoulos, 2002; Elgammal & Lee, 2004; Tang et al., 2013), applying their results to face image synthesis. More recent work focuses on learning hierarchical feature representations using deep convolutional neural networks to separate identity and pose manifolds for faces (Zhu et al., 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Kulkarni et al., 2015; Oord et al., 2016; Yan et al., 2016) and products (Dosovitskiy et al., 2015). Gatys et al. (2016) use features of a convolutional network, pretrained for image recognition, as a means for discovering content and style vectors.
Since their introduction (Goodfellow et al., 2014), GANs have been used to generate increasingly highquality images (Radford et al., 2016; Zhao et al., 2017b; Berthelot et al., 2017). Conditional GANs (cGANs), introduced by Mirza & Osindero (2014), extend GANs to generate class-conditional data. Odena et al. (2017) propose auxiliary classifier GANs, combining cGANs with a semi-supervised discriminator (Springenberg, 2015). Recently, cGANs have been used to ingest text (Reed et al., 2016) and full-resolution images (Isola et al., 2017; Liu et al., 2017; Zhu et al., 2017) as conditioning information, addressing a variety of image-to-image translation and style transfer tasks. Chen et al. (2016) devise an information-theoretic extension to GANs in which they maximize the mutual information between a subset of latent variables and the generated data. Their unsupervised method
appears to disentangle some intuitive factors of variation, but these factors may not correspond to those explicitly disentangled by SD-GANs.
Several related papers use GANs for novel view synthesis of faces. Tran et al. (2017); Huang et al. (2017); Yin et al. (2017a;b); Zhao et al. (2017a) all address synthesis of different body/facial poses conditioned on an input image (representing identity) and a fixed number of pose labels. Antipov et al. (2017) propose conditional GANs for synthesizing artificially-aged faces conditioned on both a face image and an age vector. These approaches all require explicit conditioning on the relevant factor (such as rotation, lighting and age) in addition to an identity image. In contrast, SD-GANs can model these contingent factors implicitly (without supervision).
Mathieu et al. (2016) combine GANs with a traditional reconstruction loss to disentangle identity. While their approach trains with an encoder-decoder generator, they enforce a variational bound on the encoder embedding, enabling them to sample from the decoder without an input image. Experiments with their method only address small (28x28) grayscale face images, and their training procedure is complex to reproduce. In contrast, our work offers a simpler approach and can synthesize higher-resolution, color photographs.
One might think of our work as offering the generative view of the Siamese networks often favored for learning similarity metrics (Bromley, 1994; Chopra et al., 2005). Such approaches are used for discriminative tasks like face or signature verification that share the many classes with few examples structure that we study here. In our work, we adopt a Siamese architecture in order to enable the discriminator to differentiate between matched and unmatched pairs. Recent work by Liu & Tuzel (2016) propose a GAN architecture with weight sharing across multiple generators and discriminators, but with a different problem formulation and objective from ours.
5 DISCUSSION
Our evaluation demonstrates that SD-GANs can disentangle those factors of variation corresponding to identity from the rest. Moreover, with SD-GANs we can sample never-before-seen identities, a benefit not shared by conditional GANs. In Figure 3, we demonstrate that by varying the observation vector zO, SD-GANs can change the color of clothing, add or remove sunnies, or change facial pose. They can also perturb the lighting, color saturation, and contrast of an image, all while keeping the apparent identity fixed. We note, subjectively, that samples from SD-DCGAN tend to appear less photorealistic than those from SD-BEGAN. Given a generator trained with SD-GAN, we can independently interpolate along the identity and observation manifolds (Figure 5).
On the shoe dataset, we find that the SD-DCGAN model produces convincing results. As desired, manipulating zI while keeping zO fixed yields distinct shoes in consistent poses (Figure 4). The identity code zI appears to capture the broad categories of shoes (sneakers, flip-flops, boots, etc.). Surprisingly, neither original BEGAN nor SD-BEGAN can produce diverse shoe images (Appendix G).
In this paper, we presented SD-GANs, a new algorithm capable of disentangling factors of variation according to known commonalities. We see several promising directions for future work. One logical extension is to disentangle latent factors corresponding to more than one known commonality. We also plan to apply our approach in other domains such as identity-conditioned speech synthesis.
ACKNOWLEDGEMENTS
The authors would like to thank Anima Anandkumar, John Berkowitz and Miller Puckette for their helpful feedback on this work. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI1053575 (Towns et al., 2014). GPUs used in this research were donated by the NVIDIA Corporation.
A ESTIMATING LATENT CODES
We estimate latent vectors for unseen images and demonstrate that the disentangled representations of SD-GANs can be used to depict the estimated identity with different contingent factors. In order to find a latent vector ẑ such that G(ẑ) (pretrained G) is similar to an unseen image x, we can minimize the distance between x and G(ẑ): $\min_{\hat{z}} \lVert G(\hat{z}) - x \rVert_2^2$ (Lipton & Tripathi, 2017). In Figure 6, we depict estimation and linear interpolation across both subspaces for two pairs of images using SD-BEGAN. We also display the corresponding source images being estimated. For both pairs, ẑI (identity) is consistent in each row and ẑO (observation) is consistent in each column.
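To make this procedure concrete, here is a minimal sketch of latent estimation by gradient descent on the reconstruction error; the generator interface, latent dimensionality, optimizer, and iteration budget are illustrative assumptions rather than the exact settings used for Figure 6.

```python
import torch

def estimate_latent(G, x, latent_dim=100, steps=500, lr=0.05):
    """Recover a latent code z_hat such that G(z_hat) approximates the unseen image x.

    G is a pretrained, frozen generator mapping (1, latent_dim) codes to images;
    x is a target image tensor with the same shape as G's output.
    """
    z_hat = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z_hat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z_hat) - x) ** 2).sum()   # squared L2 distance ||G(z_hat) - x||_2^2
        loss.backward()
        opt.step()
        with torch.no_grad():
            z_hat.clamp_(-1.0, 1.0)          # keep the estimate inside the Uniform([-1, 1]) prior
    return z_hat.detach()
```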
B PAIRWISE DISCRIMINATION OF EMBEDDINGS AND ENCODINGS
In Section 3.1, we describe an AC-GAN (Odena et al., 2017) baseline which uses an embedding matrix over real identities as latent identity codes ($G : i, z_O \mapsto \hat{x}$). In place of random identity vectors, we tried combining this identity representation with pairwise discrimination (in the style of SD-GAN). In this experiment, the discriminator receives either two real images with the same identity $(x_i^1, x_i^2)$, or a real image with label $i$ and a synthetic image with label $i$: $(x_i^1, G(i, z_O))$. All other hyperparameters are the same as in our SD-DCGAN experiment (Section 3.1). We show results in Figure 7.
In Appendix C, we detail a modification of the DR-GAN (Tran et al., 2017) method which uses an encoding network $G_e$ to transform images to identity representations ($G_d : G_e(x), z_O \mapsto \hat{x}$). We also tried combining this encoder-decoder approach with pairwise discrimination. The discriminator
receives either two real images with the same identity $(x_i^1, x_i^2)$, or $(x_i^1, G_d(G_e(x_i^1), z_O))$. We show results in Figure 8.
While these experiments are exploratory and not part of our principal investigation, we find the results to be qualitatively promising. We are not the first to propose pairwise discrimination with pairs of (real, real) or (real, fake) images in GANs (Pathak et al., 2016; Isola et al., 2017).
C EXPLORATORY EXPERIMENT WITH DR-GANS
Tran et al. (2017) propose Disentangled Representation learning-GAN (DR-GAN), an approach to face frontalization with a setup similar to our SD-GAN algorithm. The (single-image) DR-GAN generator $G$ (composition of $G_e$ and $G_d$) accepts an input image $x$, a pose code $c$, and a noise vector $z$. The DR-GAN discriminator receives either $x$ or $\hat{x} = G_d(G_e(x), c, z)$. In the style of (Springenberg, 2015), the discriminator is tasked with determining not only if the image is real or fake, but also classifying the pose $c$, suggesting a disentangled representation to the generator. Through their experiments, they demonstrate that DR-GAN can explicitly disentangle pose and illumination ($c$) from the rest of the latent space ($G_e(x)$; $z$).
In addition to our AC-DCGAN baseline (Odena et al., 2017), we tried modifying DR-GAN to only disentangle identity (rather than both identity and pose in the original paper). We used the DCGAN (Radford et al., 2016) discriminator architecture (Table 4) as $G_e$, linearly projecting the final convolutional layer to $G_e(x) \in \mathbb{R}^{50}$ (in alignment with our SD-GAN experiments). We altered the discriminator to predict the identity of $x$ or $\hat{x}$, rather than pose information (which is unknown in our experimental setup). With these modifications, $G_e(x)$ is analogous to $z_I$ in the SD-GAN generator, and $z$ is analogous to $z_O$. Furthermore, this setup is identical to the AC-DCGAN baseline
except that the embedding matrix is replaced by an encoding network Ge. Unfortunately, we found that the generator quickly learned to produce a single output image x̂ for each input x regardless of observation code z (Figure 9). Accordingly, we excluded this experiment from our evaluation (Section 3.2).
D IMAGINING IDENTITIES WITH AC-GAN
As stated in Section 3.1, AC-GANs (Odena et al., 2017) provide no obvious way to imagine new identities. For our evaluation (Section 3.2), the AC-GAN generator receives identity input $z_I \in [0, 1]^{10000}$: a one-hot over all identities. One possible approach to imagining new identities would be to query a trained AC-GAN generator with a random vector $z_I$ such that $\sum_{i=1}^{10000} z_I[i] = 1$. We found that this strategy produced little identity variety (Figure 10) compared to the normal one-hot strategy (Figure 11) and excluded it from our evaluation.
E ARCHITECTURE DESCRIPTIONS
We list here the full architectural details for our SD-DCGAN and SD-BEGAN models. In these descriptions, k is the number of images that the generator produces and discriminator observes per identity (usually 2 for pairwise training), and dI is the number of dimensions in the latent space ZI (identity). In our experiments, dimensionality of ZO is always 100− dI . As a concrete example, the bottleneck layer of the SD-BEGAN discriminator autoencoder (“fc2” in Table 6) with k = 2, dI = 50 has output dimensionality 150.
We emphasize that generators are parameterized by k in the tables only for clarity and symmetry with the discriminators. Implementations need not modify the generator; instead, k can be collapsed into the batch size.
For the stacked-channels versions of these discriminators, we simply change the number of input image channels from 3 to 3k and set k = 1 wherever k appears in the table.
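As a small illustration of collapsing k into the batch size, the snippet below reshapes identity-grouped latent codes before calling an unmodified generator; the tensor shapes and the (commented) generator call are assumptions for illustration only.

```python
import torch

batch, k, d_i, d_o = 16, 2, 50, 50                      # 16 identities, k images each
z_i = torch.rand(batch, 1, d_i).expand(batch, k, d_i)   # identity code shared within a group
z_o = torch.rand(batch, k, d_o)                          # observation codes differ within a group
z = torch.cat([z_i, z_o], dim=2) * 2 - 1                 # (16, 2, 100), scaled to [-1, 1]

# The generator needs no modification: collapse k into the batch dimension...
z_flat = z.reshape(batch * k, d_i + d_o)                 # shape (32, 100)
# images = G(z_flat)                                     # -> (32, C, H, W) (hypothetical call)
# ...and regroup afterwards so the discriminator sees k images per identity group.
# image_groups = images.reshape(batch, k, *images.shape[1:])
```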
F FACE SAMPLES
We present samples from each model reported in Table 1 for qualitative comparison. In each matrix, zI is the same across all images in a row and zO is the same across all images in a column. We draw identity and observation vectors randomly for these samples.
G SHOE SAMPLES
We present samples from an SD-DCGAN and SD-BEGAN trained on our shoes dataset. | 1. What is the focus of the paper regarding controlled image generation?
2. How does the proposed algorithm differ from other coupled generation pipelines like CoGAN?
3. What are the strengths and weaknesses of the proposed approach in terms of technical quality and novelty?
4. Do you have any concerns regarding the comparison between SD-GAN and AC-DCGAN?
5. Are there any missing references or discussions regarding similar works in conditional image generation and coupled image generation? | Review | Review
Summary:
This paper investigated the problem of controlled image generation. Assuming images can be disentangled into identity-related factors and style factors, this paper proposed an algorithm that produces a pair of images with the same identity. Compared to the standard GAN framework, this algorithm first generated two latent variables for the image pair. The two latent variables are partially shared, reflecting the shared identity information. The generator then transformed the latent variables into high-resolution images with a deconvolutional decoder network. The discriminator was used to distinguish between paired images from the database and paired images sampled by the algorithm. Experiments were conducted using DCGAN and BEGAN on portrait images and shoe product images. Qualitative results demonstrated that the learned style representations capture viewpoint, illumination and background color, while the identity was well preserved by the identity-related representations.
== Novelty & Significance ==
Paired image generation is an interesting topic, but it has been explored to some extent. Compared to existing coupled generation pipelines such as CoGAN, I can see that the proposed formulation is more application-driven.
== Technical Quality ==
In Figure 3, the portrait images in the second row and fourth row look quite similar. I wonder if the trained model works with only limited variability (in terms of identity).
In Figure 4, the viewpoint is quite limited (only 4 viewpoints are provided).
I am not very convinced whether SD-GAN is a generic algorithm for controlled image generation. Based on the current results, I suspect it only works in fairly constrained settings.
It would be good to know if it actually works in more challenging datasets such as SUN bedroom, CUB and Oxford Flowers.
“the AC-DCGAN model cannot imagine new identities”
I feel the authors of this paper made an unfair argument when comparing AC-DCGAN with the proposed method. First, during training, the proposed SD-GAN needs access to the identity information, and there are only a limited number of identities in the dataset. Based on the presentation, it is not very clear how the model generates novel identities (in contrast to simply interpolating existing identities). For example, is it possible to generate novel viewpoints in Figure 4?
Missing references on conditional image generation and coupled image generation:
-- Generative Adversarial Text-to-Image Synthesis. Reed et al., In ICML 2016.
-- Attribute2Image: Conditional Image Generation from Visual Attributes. Yan et al., In ECCV 2016.
-- Domain Separation Networks. Bousmalis et al., In NIPS 2016.
-- Unsupervised Image-to-Image Translation Networks. Liu et al., In NIPS 2017.
Overall, I rate this paper slightly above borderline. It showed some good visualization results on controlled image generation. But the comparison to AC-GAN is not very fair, since the identity pairs are fully supervised for the proposed method. As far as I can see, there are no clear-cut improvements quantitatively. Also, there is no comparison with CoGAN, which I believe is the most relevant work for coupled image generation. |
ICLR | Title
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
Abstract
We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.
1 INTRODUCTION
In many domains, a suitable generative process might consist of several stages. To generate a photograph of a product, we might wish to first sample from the space of products, and then from the space of photographs of that product. Given such disentangled representations in a multistage generative process, an online retailer might diversify its catalog, depicting products in a wider variety of settings. A retailer could also flip the process, imagining new products in a fixed setting. Datasets for such domains often contain many labeled identities with fewer observations of each (e.g. a collection of face portraits with thousands of people and ten photos of each). While we may know the identity of the subject in each photograph, we may not know the contingent aspects of the observation (such as lighting, pose and background). This kind of data is ubiquitous; given a set of commonalities, we might want to incorporate this structure into our latent representations.
Generative adversarial networks (GANs) learn mappings from latent codes z in some low-dimensional space Z to points in the space of natural data X (Goodfellow et al., 2014). They achieve this power through an adversarial training scheme pitting a generative model G : Z ↦ X against a discriminative model D : X ↦ [0, 1] in a minimax game. While GANs are popular, owing to their ability to generate high-fidelity images, they do not, in their original form, explicitly disentangle the latent factors according to known commonalities.
In this paper, we propose Semantically Decomposed GANs (SD-GANs), which encourage a specified portion of the latent space to correspond to a known source of variation.1,2 The technique
1Web demo: https://chrisdonahue.github.io/sdgan 2Source code: https://github.com/chrisdonahue/sdgan
decomposes the latent code Z into one portion ZI corresponding to identity, and the remaining portion ZO corresponding to the other contingent aspects of observations. SD-GANs learn through a pairwise training scheme in which each sample from the real dataset consists of two distinct images with a common identity. Each sample from the generator consists of a pair of images with common zI ∈ ZI but differing zO ∈ ZO. In order to fool the discriminator, the generator must not only produce diverse and photorealistic images, but also images that depict the same identity when zI is fixed. For SD-GANs, we modify the discriminator so that it can determine whether a pair of samples constitutes a match.
As a case study, we experiment with a dataset of face photographs, demonstrating that SD-GANs can generate contrasting images of the same subject (Figure 1; interactive web demo in footnote on previous page). The generator learns that certain properties are free to vary across observations but not identity. For example, SD-GANs learn that pose, facial expression, hirsuteness, grayscale vs. color, and lighting can all vary across different photographs of the same individual. On the other hand, the aspects that are more salient for facial verification remain consistent as we vary the observation code zO. We also train SD-GANs on a dataset of product images, containing multiple photographs of each product from various perspectives (Figure 4).
We demonstrate that SD-GANs trained on faces generate stylistically-contrasting, identity-matched image pairs that human annotators and a state-of-the-art face verification algorithm recognize as depicting the same subject. On measures of identity coherence and image diversity, SD-GANs perform comparably to a recent conditional GAN method (Odena et al., 2017); SD-GANs can also imagine new identities, while conditional GANs are limited to generating existing identities from the training data.
2 SEMANTICALLY DECOMPOSED GENERATIVE ADVERSARIAL NETWORKS
Before introducing our algorithm, we briefly review the prerequisite concepts.
2.1 GAN PRELIMINARIES
GANs leverage the discriminative power of neural networks to learn generative models. The generative model G ingests latent codes z, sampled from some known prior PZ, and produces G(z), a sample of an implicit distribution PG. The learning process consists of a minimax game between G, parameterized by θG, and a discriminative model D, parameterized by θD. In the original formulation, the discriminative model tries to maximize log likelihood, yielding
$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim P_R}[\log D(x)] + \mathbb{E}_{z \sim P_Z}[\log(1 - D(G(z)))]$. (1)
Training proceeds as follows: For k iterations, sample one minibatch from the real distribution PR and one from the distribution of generated images PG, updating discriminator weights θD to increase V (G,D) by stochastic gradient ascent. Then sample a minibatch from PZ , updating θG to decrease V (G,D) by stochastic gradient descent.
Algorithm 1 Semantically Decomposed GAN Training
1: for n in 1:NumberOfIterations do
2:   for m in 1:MinibatchSize do
3:     Sample one identity vector zI ∼ Uniform([−1, 1]^dI).
4:     Sample two observation vectors z1O, z2O ∼ Uniform([−1, 1]^dO).
5:     z1 ← [zI ; z1O], z2 ← [zI ; z2O].
6:     Generate pair of images G(z1), G(z2), adding them to the minibatch with label 0 (fake).
7:   for m in 1:MinibatchSize do
8:     Sample one identity i ∈ I uniformly at random from the real data set.
9:     Sample two images of i without replacement x1, x2 ∼ PR(x | I = i).
10:    Add the pair to the minibatch, assigning label 1 (real).
11:    Update discriminator weights by θD ← θD + ∇θD V(G, D) using its stochastic gradient.
12:    Sample another minibatch of identity-matched latent vectors z1, z2.
13:    Update generator weights by stochastic gradient descent θG ← θG − ∇θG V(G, D).
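The sketch below mirrors the pairing logic of Algorithm 1 in PyTorch for a single discriminator update; the generator/discriminator interfaces, the 50/50 latent split, and the binary cross-entropy objective are assumptions made for illustration rather than a verbatim reimplementation.

```python
import torch
import torch.nn.functional as F

def sample_fake_pairs(G, batch_size, d_i=50, d_o=50):
    """Lines 3-6 of Algorithm 1: zI-matched image pairs with label 0 (fake)."""
    z_i = torch.rand(batch_size, d_i) * 2 - 1          # one identity code per pair
    z_o = torch.rand(batch_size, 2, d_o) * 2 - 1       # two observation codes per pair
    z1 = torch.cat([z_i, z_o[:, 0]], dim=1)
    z2 = torch.cat([z_i, z_o[:, 1]], dim=1)
    return G(z1), G(z2)

def discriminator_step(D, G, x1_real, x2_real, opt_d):
    """Lines 7-11: real identity-matched pairs get label 1, generated pairs label 0."""
    x1_fake, x2_fake = sample_fake_pairs(G, x1_real.size(0))
    logits_real = D(x1_real, x2_real)
    logits_fake = D(x1_fake.detach(), x2_fake.detach())
    loss = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) \
         + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()
    return loss.item()
```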
Zhao et al. (2017b) propose energy-based GANs (EBGANs), in which the discriminator can be viewed as an energy function. Specifically, they devise a discriminator consisting of an autoencoder: D(x) = Dd(De(x)). In the minimax game, the discriminator’s weights are updated to minimize the reconstruction error L(x) = ||x − D(x)|| for real data, while maximizing the error L(G(z)) for the generator. More recently, Berthelot et al. (2017) extend this work, introducing Boundary Equilibrium GANs (BEGANs), which optimize the Wasserstein distance (reminiscent of Wasserstein GANs (Arjovsky et al., 2017)) between autoencoder loss distributions, yielding the formulation:
$V_{\mathrm{BEGAN}}(G, D) = L(x) - L(G(z))$. (2)
Additionally, they introduce a method for stabilizing training. Positing that training becomes unstable when the discriminator cannot distinguish between real and generated images, they introduce a new hyperparameter γ, updating the value function on each iteration to maintain a desired ratio between the two reconstruction errors: E[L(G(z))] = γE[L(x)]. The BEGAN model produces what appear to us, subjectively, to be the sharpest images of faces yet generated by a GAN. In this work, we adapt both the DCGAN (Radford et al., 2016) and BEGAN algorithms to the SD-GAN training scheme.
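As a rough sketch of this stabilization scheme, the snippet below follows the proportional-control update from the original BEGAN paper, where a scalar k weights the fake-reconstruction term and is nudged so that E[L(G(z))] approaches γE[L(x)]; the autoencoder interface, the L1 reconstruction error, and the learning rate lambda_k are assumptions for illustration.

```python
def began_losses(D, x_real, x_fake, k, gamma=0.5, lambda_k=0.001):
    """One BEGAN step with the equilibrium control variable k (PyTorch tensors assumed).

    D is an autoencoder, so the per-sample energy is the reconstruction error L(v) = |v - D(v)|.
    """
    loss_real = (x_real - D(x_real)).abs().mean()
    loss_fake = (x_fake - D(x_fake)).abs().mean()

    d_loss = loss_real - k * loss_fake        # discriminator objective
    g_loss = loss_fake                        # generator minimizes its reconstruction error

    # proportional control: push E[L(G(z))] toward gamma * E[L(x)]
    k = min(max(k + lambda_k * (gamma * loss_real.item() - loss_fake.item()), 0.0), 1.0)
    return d_loss, g_loss, k
```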
2.2 SD-GAN FORMULATION
Consider the data’s identity as a random variable I in a discrete index set I . We seek to learn a latent representation that conveniently decomposes the variation in the real data into two parts: 1) due to I , and 2) due to the other factors of variation in the data, packaged as a random variable O. Ideally, the decomposition of the variation in the data into I and O should correspond exactly to a decomposition of the latent space Z = ZI ×ZO. This would permit convenient interpolation and other operations on the inferred subspaces ZI and ZO. A conventional GAN samples I,O from their joint distribution. Such a GAN’s generative model samples directly from an unstructured prior over the latent space. It does not disentangle the variation in O and I , for instance by modeling conditional distributions PG(O | I = i), but only models their average with respect to the prior on I .
Our SD-GAN method learns such a latent space decomposition, partitioning the coordinates of Z into two parts representing the subspaces, so that any z ∈ Z can be written as the concatenation [zI ; zO] of its identity representation zI ∈ RdI = ZI and its contingent aspect representation zO ∈ RdO = ZO. SD-GANs achieve this through a pairwise training scheme in which each sample from the real data consists of x1,x2 ∼ PR(x | I = i), a pair of images with a common identity i ∈ I . Each sample from the generator consists of G(z1), G(z2) ∼ PG(z | ZI = zI), a pair of images generated from a common identity vector zI ∈ ZI but i.i.d. observation vectors z1O, z2O ∈ ZO. We assign identity-matched pairs from PR the label 1 and zI -matched pairs from PG the label 0. The discriminator can thus learn to reject pairs for either of two primary reasons: 1) not photorealistic or 2) not plausibly depicting the same subject. See Algorithm 1 for SD-GAN training pseudocode.
2.3 SD-GAN DISCRIMINATOR ARCHITECTURE
With SD-GANs, there is no need to alter the architecture of the generator. However, the discriminator must now act upon two images, producing a single output. Moreover, the effects of the two input images x1,x2 on the output score are not independent. Two images might be otherwise photorealistic but deserve rejection because they clearly depict different identities. To this end, we devise two novel discriminator architectures to adapt DCGAN and BEGAN respectively. In both cases, we first separately encode each image using the same convolutional neural network De (Figure 2). We choose this Siamese setup (Bromley, 1994; Chopra et al., 2005) as our problem is symmetrical in the images, and thus it’s sensible to share weights between the encoders.
To adapt DCGAN, we stack the feature maps De(x1) and De(x2) along the channel axis, applying one additional strided convolution. This allows the network to further aggregate information from the two images before flattening and fully connecting to a sigmoid output. For BEGAN, because the discriminator is an autoencoder, our architecture is more complicated. After encoding each image, we concatenate the representations $[D_e(x_1); D_e(x_2)] \in \mathbb{R}^{2(d_I + d_O)}$ and apply one fully connected bottleneck layer $\mathbb{R}^{2(d_I + d_O)} \Rightarrow \mathbb{R}^{d_I + 2d_O}$ with linear activation. In alignment with BEGAN, the SD-BEGAN bottleneck has the same dimensionality as the tuple of latent codes $(z_I, z_O^1, z_O^2)$ that generated the pair of images. Following the bottleneck, we apply a second FC layer $\mathbb{R}^{d_I + 2d_O} \Rightarrow \mathbb{R}^{2(d_I + d_O)}$, taking the first $d_I + d_O$ components of its output to be the input to the first decoder and the second $d_I + d_O$ components to be the input to the second decoder. The shared intermediate layer gives SD-BEGAN a mechanism to push apart matched and unmatched pairs. We specify our exact architectures in full detail in Appendix E.
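A minimal sketch of the SD-DCGAN variant of this discriminator head is shown below; the shared encoder, feature-map sizes, and layer widths are placeholders rather than the exact architecture, which is specified in Appendix E.

```python
import torch
import torch.nn as nn

class SiameseDiscriminator(nn.Module):
    """Encode both images with shared weights, stack along channels, output one logit."""

    def __init__(self, encoder, feat_channels=512, feat_size=4):
        super().__init__()
        self.encoder = encoder                          # shared weights for x1 and x2
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels,
                              kernel_size=3, stride=2, padding=1)  # extra strided conv after stacking
        self.out = nn.Linear(feat_channels * (feat_size // 2) ** 2, 1)

    def forward(self, x1, x2):
        f = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)  # stack feature maps on channel axis
        f = torch.relu(self.fuse(f))
        return self.out(f.flatten(1))                   # logit: matched real pair vs. generated pair
```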
3 EXPERIMENTS
We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley et al., 2015). Both datasets contain a large number of identities (people and shoes, respectively) with multiple observations of each. The “in-the-wild” nature of the celebrity face images offers a richer test bed for our method as both identities and contingent factors are significant sources of variation. In contrast, Amazon’s shoe images tend to vary only with camera perspective for a given product, making this data useful for sanity-checking our approach.
Faces From the aligned face images in the MS-Celeb-1M dataset, we select 12,500 celebrities at random and 8 associated images of each, resizing them to 64x64 pixels. We split the celebrities into subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). The dataset has a small number of duplicate images and some label noise (images matched to the wrong celebrity). We detect and
remove duplicates by hashing the images, but we do not rid the data of label noise. We scale the pixel values to [−1, 1], performing no additional preprocessing or data augmentation.
Shoes Synthesizing novel product images is another promising domain for our method. In our shoes dataset, product photographs are captured against white backgrounds and primarily differ in orientation and distance. Accordingly, we expect that SD-GAN training will allocate the observation latent space to capture these aspects. We choose to study shoes as a prototypical example of a category of product images. The Amazon dataset contains around 3,000 unique products with the category “Shoe” and multiple product images. We use the same 80%, 10%, 10% split and again hash the images to ensure that the splits are disjoint. There are 6.2 photos of each product on average.
3.1 TRAINING DETAILS
We train SD-DCGANs on both of our datasets for 500,000 iterations using batches of 16 identitymatched pairs. To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with hyperparameters α = 2e−4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). We also consider a non-Siamese discriminator that simply stacks the channels of the pair of real or fake images before encoding (SD-DCGAN-SC).
As in (Radford et al., 2016), we sample latent vectors $z \sim \mathrm{Uniform}([-1, 1]^{100})$. For SD-GANs, we partition the latent codes according to $z_I \in \mathbb{R}^{d_I}$, $z_O \in \mathbb{R}^{100 - d_I}$ using values of $d_I \in \{25, 50, 75\}$. Our algorithm can be trivially applied with k-wise training (vs. pairwise). To explore the effects of using k > 2, we also experiment with an SD-DCGAN where we sample k = 4 instances each from PG(z | ZI = zI) for some zI ∈ ZI and from PR(x | I = i) for some i ∈ I. For all experiments, unless otherwise stated, we use dI = 50 and k = 2.
We also train an SD-BEGAN on both of our datasets. The increased complexity of the SD-BEGAN model significantly increases training time, limiting our ability to perform more-exhaustive hyperparameter validation (as we do for SD-DCGAN). We use the Adam optimizer with the default hyperparameters from (Kingma & Ba, 2015) for our SD-BEGAN experiments. While results from our SD-DCGAN k = 4 model are compelling, an experiment with a k = 4 variant of SD-BEGAN resulted in early mode collapse (Appendix F); hence, we excluded SD-BEGAN k = 4 from our evaluation.
We also compare to a DCGAN architecture trained using the auxiliary classifier GAN (AC-GAN) method (Odena et al., 2017). AC-GAN differs from SD-GAN in two key ways: 1) random identity codes zI are replaced by a one-hot embedding over all the identities in the training set (matrix of size 10000x50); 2) the AC-GAN method encourages that generated photos depict the proper identity by tasking its discriminator with predicting the identity of the generated or real image. Unlike SD-GANs, the AC-DCGAN model cannot imagine new identities; when generating from AC-DCGAN (for our quantitative comparisons to SD-GANs), we must sample a random identity from those existing in the training data.
3.2 EVALUATION
The evaluation of generative models is a fraught topic. Quantitative measures of sample quality can be poorly correlated with each other (Theis et al., 2016). Accordingly, we design an evaluation to match conceivable uses of our algorithm. Because we hope to produce diverse samples that humans deem to depict the same person, we evaluate the identity coherence of SD-GANs and baselines using both a pretrained face verification model and crowd-sourced human judgments obtained through Amazon’s Mechanical Turk platform.
3.2.1 QUANTITATIVE
Recent advancements in face verification using deep convolutional neural networks (Schroff et al., 2015; Parkhi et al., 2015; Wen et al., 2016) have yielded accuracy rivaling humans. For our evaluation, we procure FaceNet, a publicly-available face verifier based on the Inception-ResNet architecture (Szegedy et al., 2017). The FaceNet model was pretrained on the CASIA-WebFace dataset (Yi et al., 2014) and achieves 98.6% accuracy on the LFW benchmark (Huang et al., 2012).3
FaceNet ingests normalized, 160x160 color images and produces an embedding $f(x) \in \mathbb{R}^{128}$. The training objective for FaceNet is to learn embeddings that minimize the L2 distance between matched pairs of faces and maximize the distance for mismatched pairs. Accordingly, the embedding space yields a function for measuring the similarity between two faces $x_1$ and $x_2$: $D(x_1, x_2) = \lVert f(x_1) - f(x_2) \rVert_2^2$. Given two images, $x_1$ and $x_2$, we label them as a match if $D(x_1, x_2) \leq \tau_v$ where $\tau_v$ is the accuracy-maximizing threshold on a class-balanced set of pairs from MS-Celeb-1M validation data. We use the same threshold for evaluating both real and synthetic data with FaceNet.
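In code, this verification rule is a simple squared-distance threshold on embeddings; the embedding network f and the threshold tau_v below are placeholders (tau_v is chosen on held-out validation pairs as described above).

```python
def same_identity(f, x1, x2, tau_v):
    """Label a pair as a match when D(x1, x2) = ||f(x1) - f(x2)||_2^2 <= tau_v.

    f: embedding network (e.g., FaceNet) mapping image batches to 128-d vectors;
    x1, x2: batches of normalized 160x160 face crops (PyTorch tensors assumed).
    """
    d = ((f(x1) - f(x2)) ** 2).sum(dim=1)   # squared L2 distance per pair
    return d <= tau_v                        # boolean match decision per pair
```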
We compare the performance of FaceNet on pairs of images from the MS-Celeb-1M test set against generated samples from our trained SD-GAN models and AC-DCGAN baseline. To match FaceNet's training data, we preprocess all images by resizing from 64x64 to 160x160, normalizing each image individually. We prepare 10,000 pairs from MS-Celeb-1M, half identity-matched and half unmatched. From each generative model, we generate 5,000 pairs each with $z_I^1 = z_I^2$ and 5,000 pairs with $z_I^1 \neq z_I^2$. For each sample, we draw observation vectors zO randomly. We also want to ensure that identity-matched images produced by the generative models are diverse. To this end, we propose an intra-identity sample diversity (ID-Div) metric. The multi-scale structural similarity (MS-SSIM) (Wang et al., 2004) metric reports the similarity of two images on a scale from 0 (no resemblance) to 1 (identical images). We report 1 minus the mean MS-SSIM for all pairs of identity-matched images as ID-Div. To measure the overall sample diversity (All-Div), we also compute 1 minus the mean similarity of 10k pairs with random identities.
3 "20170214-092102" pretrained model from https://github.com/davidsandberg/facenet
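The diversity metrics then reduce to a simple average; in the sketch below, ms_ssim stands in for any MS-SSIM implementation returning a similarity in [0, 1] (the function itself is assumed, not a specific library call).

```python
def id_div(image_pairs, ms_ssim):
    """ID-Div: 1 minus the mean MS-SSIM over identity-matched generated pairs.

    image_pairs: iterable of (img_a, img_b) sharing the same identity code;
    ms_ssim: callable returning a similarity score in [0, 1] for two images (assumed).
    """
    sims = [ms_ssim(a, b) for a, b in image_pairs]
    return 1.0 - sum(sims) / len(sims)

# All-Div is computed the same way, but over 10k pairs drawn with random identities.
```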
In Table 1, we report the area under the receiver operating characteristic curve (AUC), accuracy, and false accept rate (FAR) of FaceNet (at threshold τv) on the real and generated data. We also report our proposed diversity statistics. FaceNet verifies pairs from the real data with 87% accuracy compared to 86% on pairs from our SD-BEGAN model. Though this is comparable to the accuracy achieved on pairs from the AC-DCGAN baseline, our model produces samples that are more diverse in pixel space (as measured by ID-Div and All-Div). FaceNet has a higher but comparable FAR for pairs from SD-GANs than those from AC-DCGAN; this indicates that SD-GANs may produce images that are less semantically diverse on average than AC-DCGAN.
We also report the combined memory footprint ofG and D for all methods in Table 1. For conditional GAN approaches, the number of parameters grows linearly with the number of identities in the training data. Especially in the case of the AC-GAN, where the discriminator computes a softmax over all identities, linear scaling may be prohibitive. While our 10k-identity subset of MS-Celeb-1M requires a 131MB AC-DCGAN model, an AC-DCGAN for all 1M identities would be over 8GB, with more than 97% of the parameters devoted to the weights in the discriminator’s softmax layer. In contrast, the complexity of SD-GAN is constant in the number of identities.
3.2.2 QUALITATIVE
In addition to validating that identity-matched SD-GAN samples are verified by FaceNet, we also demonstrate that humans are similarly convinced through experiments using Mechanical Turk. For these experiments, we use balanced subsets of 1,000 pairs from MS-Celeb-1M and the most promising generative methods from our FaceNet evaluation. We ask human annotators to determine if each pair depicts the “same person” or “different people”. Annotators are presented with batches of ten pairs at a time. Each pair is presented to three distinct annotators and predictions are determined by majority vote. Additionally, to provide a benchmark for assessing the quality of the Mechanical Turk ensembles, we (the authors) manually judged 200 pairs from MS-Celeb-1M. Results are in Table 1.
For all datasets, human annotators on Mechanical Turk answered "same person" less frequently than FaceNet when the latter uses the accuracy-maximizing threshold τv. Even on real data, balanced so that 50% of pairs are identity-matched, annotators report "same person" only 28% of the time (compared to the 41% of FaceNet). While annotators achieve higher accuracy on pairs from AC-DCGAN than pairs from SD-BEGAN, they also answer "same person" 16% more often for AC-DCGAN pairs than real data. In contrast, annotators answer "same person" at the same rate for SD-BEGAN pairs as real data. This may be attributable to the lower sample diversity produced by AC-DCGAN. Samples from SD-DCGAN and SD-BEGAN are shown in Figures 3 and 1 respectively.
4 RELATED WORK
Style transfer and novel view synthesis are active research areas. Early attempts to disentangle style and content manifolds used factored tensor representations (Tenenbaum & Freeman, 1997; Vasilescu & Terzopoulos, 2002; Elgammal & Lee, 2004; Tang et al., 2013), applying their results to face image synthesis. More recent work focuses on learning hierarchical feature representations using deep convolutional neural networks to separate identity and pose manifolds for faces (Zhu et al., 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Kulkarni et al., 2015; Oord et al., 2016; Yan et al., 2016) and products (Dosovitskiy et al., 2015). Gatys et al. (2016) use features of a convolutional network, pretrained for image recognition, as a means for discovering content and style vectors.
We present samples from an SD-DCGAN and SD-BEGAN trained on our shoes dataset. | 1. What is the main contribution of the paper on generative adversarial networks?
2. What are the strengths of the proposed approach, particularly in its ability to decompose semantical components?
3. How does the reviewer assess the novelty and effectiveness of the proposed SD-GAN model?
4. What are some concerns or questions raised by the reviewer regarding the model's ability to generate diverse identities and its comparison to other works?
5. How does the reviewer summarize the paper's content and contributions? | Review | Review
[Overview]
In this paper, the authors proposed a model called SD-GAN to decompose semantic components of the input in a GAN. Specifically, the authors proposed a novel architecture to decompose the identity latent code and the non-identity latent code. In this new architecture, the generator is unchanged, while the discriminator takes paired data as input and outputs a decision on whether two images are from the same identity or not. By training the whole model with a conventional GAN-training regime, SD-GAN learns to take a part of the input Z as the identity information, and the other part of the input Z as the non-identity (or attribute) information. In the experiments, the authors demonstrate that the proposed SD-GAN could generate images preserving the same identity with diverse attributes, such as pose, age, expression, etc. Compared with AC-GAN, the proposed SD-GAN achieved better performance in both an automatic evaluation metric (FaceNet) and a human study. In the appendix, the authors further presented ablated qualitative results in various settings.
[Strengths]
1. This paper proposed a simple but effective generative adversarial network, called SD-GAN, to decompose the input latent code of a GAN into separate semantic parts. Specifically, it is mainly instantiated on face images, to decompose the identity part and the non-identity part of the latent code. Unlike previous works such as AC-GAN, SD-GAN exploited a Siamese network to replace the conventional discriminator used in GANs. In this way, SD-GAN could generate images of novel identities, rather than being constrained to those identities used during training. I think this is a very good property. Due to this, SD-GAN consumes much less memory than AC-GAN when training on a large number of identities.
2. In the experiment section, the authors quantitatively evaluate the generated images with two methods: one uses a pre-trained FaceNet model to measure verification accuracy, and the other is a human study. When evaluated with FaceNet, the proposed SD-GAN achieved higher accuracy and obtained more diverse face images compared with AC-GAN. In the human study, SD-GAN achieved comparable verification accuracy but higher diversity than AC-GAN. The authors further presented ablated experiments in the Appendix.
[Comments]
This paper presents a novel model to decompose the latent code in a semantic manner. However, I have several questions about the model:
1. Why would SD-GAN not generate images covering only a smaller number of identities, or just a few identities? In Algorithm 1, the authors trained the model by sampling one identity vector, which is then concatenated to two observation vectors. In this case, the generator always takes the same identity vectors, and the discriminator is used to distinguish these fake same-identity pairs from the real same-identity pairs from the training data. As such, even if the generator generates the same identity, say a mean identity, given different identity vectors, the generated images can still obtain a low discrimination loss. Without any explicit constraint to enforce the generator to generate different identities with different identity vectors, I am wondering what makes SD-GAN able to generate diverse identities?
2. Still about the identity diversity. Though the authors showed the identity-matched diversity in the experiments, the diversity across identities in the generated images is not evaluated. The authors should also evaluate this kind of diversity. Generally, AC-GAN could generate as many identities as the number of identities in the training data. I am curious whether SD-GAN could generate identities comparably diverse to those of AC-GAN. One simple way is to evaluate the whole generated image set using the Inception Score based on a pre-trained face identification network; another way is to directly use the generated images to train a verification or identification model and evaluate it on real images. Though, compared with AC-GAN, SD-GAN achieved better identity verification performance and sample diversity, I suspect the identity diversity is discounted, even though SD-GAN has the property of generating novel identities. Furthermore, the authors should also compare the general quality of generated samples with DC-GAN and BEGAN (at least qualitatively), apart from the comparison to AC-GAN on identity-matched generation.
3. When making the comparison with related work, the authors mentioned that Info-GAN was not able to determine which factors are assigned to each dimension. I think this is not precise. The lack of this property is because there are no data annotations. Given data annotations, Info-GAN can easily be augmented with such a property by sending the real images into the discriminator for classification. Also, there is a typo in the caption of Fig. 10. It looks like each column shares the same identity vector instead of each row.
[Summary]
This paper proposed a new model called SD-GAN to decompose the input latent code of a GAN into two separate semantic parts, one for identity and one for observations. Unlike AC-GAN, SD-GAN exploited a Siamese architecture in the discriminator. In this way, SD-GAN could not only generate more identity-matched face image pairs but also more diverse samples with the same identity, compared with AC-GAN. I think this is a good idea for decomposing the semantic parts of the latent code, in the sense that it can imagine new face identities and consumes less memory during training. Overall, I think this is a good paper. However, as I mentioned above, I am still not clear why SD-GAN could generate diverse identities without any constraint that makes the model do that. Also, the authors should further evaluate the diversity of identities and compare it with AC-GAN.
ICLR | Title
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition
Abstract
Temporal modelling is the key for efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation leading to efficient action recognition. In this paper, we introduce an adaptive temporal fusion network, called AdaFuse, that dynamically fuses channels from current and past feature maps for strong temporal modelling. Specifically, the necessary information from the historical convolution feature maps is fused with current pruned feature maps with the goal of improving both recognition accuracy and efficiency. In addition, we use a skipping operation to further reduce the computation cost of action recognition. Extensive experiments on Something V1&V2, Jester and Mini-Kinetics show that our approach can achieve about 40% computation savings with comparable accuracy to state-of-the-art methods. The project page can be found at https://mengyuest.github.io/AdaFuse/
1 INTRODUCTION
Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource constrained applications due to the heavy computational burden of deep Convolutional Neural Network (CNN) models.
Motivated by the need for efficiency, extensive studies have been recently conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that CNN features contain redundancy whose removal can significantly save computation, leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: a set of still frames for a simple action (e.g. "Sleeping") will have higher redundancy compared to a fast-changing action with rich interaction and deformation (e.g. "Pulling two ends of something so that it gets stretched"). Thus, based on the input, we could compute just a subset of features, while the rest of the channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective for efficient action recognition by adaptively deciding what channels to compute or reuse, on a per-instance basis, for recognizing complex actions.
In this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., dynamically decides which channels to keep, reuse or skip per layer and per instance) with the goal of improving both recognition
∗Email: [email protected]. This work was done while Yue was an AI Resident at IBM Research.
accuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both competitive performance and resource efficiency required for action recognition. Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces the computation without accuracy loss.
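To illustrate how discrete keep/reuse/skip decisions can still be trained with back-propagation, the sketch below draws per-channel policies with PyTorch's built-in Gumbel-Softmax; the three-way action space, tensor shapes, and fusion rule are assumptions based on the description above, not the paper's exact implementation.

```python
import torch.nn.functional as F

def sample_channel_policy(policy_logits, tau=1.0, hard=True):
    """Differentiable keep/reuse/skip decision for every channel.

    policy_logits: tensor of shape (batch, channels, 3), one logit per action
                   (0 = compute the channel, 1 = reuse the history feature, 2 = skip).
    Returns a one-hot tensor of the same shape (straight-through when hard=True).
    """
    return F.gumbel_softmax(policy_logits, tau=tau, hard=hard, dim=-1)

def fuse_features(current, history, policy):
    """Fuse current and history feature maps of shape (batch, channels, H, W)."""
    keep = policy[..., 0][..., None, None]    # broadcast per-channel gates spatially
    reuse = policy[..., 1][..., None, None]
    return keep * current + reuse * history   # skipped channels (action 2) contribute zeros
```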
The main contributions of our work are as follows:
• We propose a novel approach that automatically determines which channels to keep, reuse or skip per layer and per target instance for efficient action recognition.
• Our approach is model-agnostic, which allows this to be served as a plugin operation for a wide range of 2D CNN-based action recognition architectures.
• The overall policy distribution can be seen as an indicator for the dataset characteristic, and the block-level distribution can bring potential guidance for future architecture designs.
• We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods.
2 RELATED WORK
Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by either applying 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical-flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach for CNN-based action recognition is the use of 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules such as temporal averaging in TSN (Wang et al., 2016), a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017) and ResNet3D (Hara et al., 2018), that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for spatial-temporal decomposition of 3D kernels in Inception-based architectures.
While extensive studies have been conducted in the last few years, limited efforts have been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce the complexity by decomposing the 3D convolution into 2D spatial convolution and 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). Our approach is most related to the latter which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce the computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make it more efficient for action recognition.
Conditional Computation. Many conditional computation methods have been recently proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works have been
proposed that add decision branches to different layers of CNNs to learn whether to exit the network for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) or dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation for a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, Channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus on only dropping unimportant channels, our proposed approach also reuses history features when necessary to make the network capable for strong temporal modelling.
3 METHODOLOGY
In this section, we first show the general approach using 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computation cost. Finally, we describe the end-to-end optimization and network specifications.
Using 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes uniformly sampled T frames {X1...XT } and predicts the un-normalized class score:
$$P(X_1, ..., X_T; \Theta) = \mathcal{G}\big(\mathcal{F}(X_1; \Theta), \mathcal{F}(X_2; \Theta), ..., \mathcal{F}(X_T; \Theta)\big) \qquad (1)$$
where $\mathcal{F}(\cdot; \Theta)$ is the 2D-CNN with learnable parameters $\Theta$. The consensus function $\mathcal{G}$ reduces the frame-level predictions to a final prediction. One common practice for $\mathcal{G}$ is the averaging operation. The major drawback is that this cannot capture the order of the frames, so the network performs poorly on datasets with order-sensitive labels (e.g. “turning left”, “moving forward”). An LSTM (Hochreiter & Schmidhuber, 1997) can also be used as $\mathcal{G}$ to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019). A few works have recently been proposed to model temporal causality using a bag-of-features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), and depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work we hypothesize that an input-dependent fusion of frame-wise features will be beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition.
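To make the consensus formulation concrete, here is a minimal sketch (not the authors' code) of a TSN-style recognizer that applies a shared 2D CNN to each sampled frame and averages the frame-level logits as in Eq. (1); the ResNet18 backbone and the class count of 174 are placeholder choices.

```python
import torch.nn as nn
from torchvision import models

class AverageConsensusRecognizer(nn.Module):
    """Frame-wise 2D CNN F(.; Theta) followed by an averaging consensus G."""
    def __init__(self, num_classes=174):
        super().__init__()
        self.backbone = models.resnet18()  # shared across all frames
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, frames):                           # frames: [B, T, 3, H, W]
        b, t = frames.shape[:2]
        logits = self.backbone(frames.flatten(0, 1))     # [B*T, num_classes]
        return logits.view(b, t, -1).mean(dim=1)         # consensus G = temporal averaging
```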
Adaptive Temporal Fusion. Consider a single 2D convolutional layer: $y_t = \phi(W_x * x_t + b_x)$, where $x_t \in \mathbb{R}^{c \times h \times w}$ denotes the input feature map at time step $t$ with $c$ channels and spatial dimensions $h \times w$, and $y_t \in \mathbb{R}^{c' \times h' \times w'}$ is the output feature map. $W_x \in \mathbb{R}^{c' \times k \times k \times c}$ denotes the convolution filters (with kernel size $k \times k$) and $b_x \in \mathbb{R}^{c'}$ is the bias. We use “$*$” for the convolution operation. $\phi(\cdot)$ is the combination of batchnorm and non-linear functions (e.g. ReLU (Nair & Hinton, 2010)).
We introduce a policy network, consisting of two fully-connected layers and a ReLU function, designed to adaptively select channels for keeping, reusing or skipping. As shown in Figure 1, at time $t$, we first generate feature vectors $v_{t-1}, v_t \in \mathbb{R}^{c}$ from the history feature map $x_{t-1}$ and the current feature map $x_t$ via global average pooling. Then the policy network predicts:

$$p_t = g(v_{t-1}, v_t; \Theta_g) \qquad (2)$$

where $p_t \in \{0, 1, 2\}^{c'}$ is a channel-wise policy (choosing “keep”, “reuse” or “skip”) used to generate the output feature map: if $p_t^i = 0$, the $i$-th channel of the output feature map is computed via the normal convolution; if $p_t^i = 1$, it reuses the $i$-th channel of the feature map $y_{t-1}$ which has already been computed at time $t-1$; otherwise, the $i$-th channel is simply padded with zeros. Formally, the output feature map can be written as $\tilde{y}_t = f(y_{t-1}, y_t, p_t)$, where the $i$-th channel is:
$$\tilde{y}_t^i = \mathbb{1}\left[p_t^i = 0\right] \cdot y_t^i + \mathbb{1}\left[p_t^i = 1\right] \cdot y_{t-1}^i \qquad (3)$$

Here $\mathbb{1}[\cdot]$ is the indicator function. In Figure 1, the policy network instructs the convolution layer to compute only the first and fourth channels, reuses the second channel of the history feature and skips the third channel. Features from different time steps are thus adaptively fused along the channel dimension.
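The following sketch illustrates Eqs. (2)-(3): a two-layer policy network maps the pooled features of frames t-1 and t to three logits per output channel, and the fused map keeps, reuses or zero-pads each channel accordingly. This is an illustrative reimplementation, not the released code; the hidden size is an arbitrary choice, and hard integer decisions are used here for readability (training would use the relaxed Gumbel-Softmax samples described later).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelPolicyNet(nn.Module):
    """g(v_{t-1}, v_t; Theta_g): two FC layers + ReLU -> 3 logits per output channel."""
    def __init__(self, in_channels, out_channels, hidden=1024):
        super().__init__()
        self.fc1 = nn.Linear(2 * in_channels, hidden)
        self.fc2 = nn.Linear(hidden, 3 * out_channels)

    def forward(self, x_prev, x_curr):
        v_prev = F.adaptive_avg_pool2d(x_prev, 1).flatten(1)   # v_{t-1}: [B, C_in]
        v_curr = F.adaptive_avg_pool2d(x_curr, 1).flatten(1)   # v_t:     [B, C_in]
        logits = self.fc2(F.relu(self.fc1(torch.cat([v_prev, v_curr], dim=1))))
        return logits.view(x_curr.size(0), -1, 3)              # [B, C_out, 3]

def fuse_channels(y_curr, y_prev, policy):
    """Eq. (3): policy is [B, C_out] with 0 = keep, 1 = reuse, 2 = skip (zero-pad)."""
    keep = (policy == 0).to(y_curr.dtype)[:, :, None, None]
    reuse = (policy == 1).to(y_curr.dtype)[:, :, None, None]
    return keep * y_curr + reuse * y_prev   # skipped channels remain zero
```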
Adaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal receptive field grows linearly with the depth of the network, as more features from different time steps are fused when going deeper. Our design can be seen as a general methodology covering many state-of-the-art 2D-CNN approaches: if we discard "skip" and use a predefined fixed policy, it becomes the online temporal fusion in Lin et al. (2019); if the policy only chooses between "skip" and "keep", it becomes a dynamic pruning method (Gao et al., 2018; Hua et al., 2019). Our design is thus a generalized approach that takes both temporal modelling and efficiency into consideration.
Complexity Analysis. To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), which is a hardware-independent metric widely used in the field of efficient action recognition¹ (Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019). To account for the savings in layers before and after the policy network, we add another convolution after $\tilde{y}_t$ with kernel $W_y \in \mathbb{R}^{c'' \times k' \times k' \times c'}$ and bias $b_y \in \mathbb{R}^{c''}$. The total FLOPS for each convolution is:

$$m_x = c' \cdot h' \cdot w' \cdot (k \cdot k \cdot c + 1), \qquad m_y = c'' \cdot h'' \cdot w'' \cdot (k' \cdot k' \cdot c' + 1) \qquad (4)$$
When the policy is applied, only the output channels used at time $t$ or reused at time $t+1$ need to be computed in the first convolution layer, and only the channels not skipped at time $t$ count as input feature maps for the second convolution layer. Hence the overall FLOPS is:
$$M = \sum_{\tau=0}^{T-1} \Bigg[ \underbrace{\frac{1}{c'} \sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\big[p_\tau^i \cdot (p_{\tau+1}^i - 1) = 0\big]}^{\text{keep at } \tau \text{ or reuse at } \tau+1} \cdot\, m_x}_{\text{FLOPS from the first conv at time } \tau} \;+\; \underbrace{\Big(1 - \frac{1}{c'} \sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\big(p_\tau^i = 2\big)}^{\text{skip at } \tau}\Big) \cdot m_y}_{\text{FLOPS from the second conv at time } \tau} \Bigg] \qquad (5)$$
Thus when the policy network skips more channels or reuses channels that are already computed in the previous time step, the FLOPS for those two convolution layers can be reduced proportionally.
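As a worked illustration of Eqs. (4)-(5), the sketch below counts the FLOPS of the two convolutions given the integer policies collected over T frames; it assumes the policies are stored as a [T, C'] array and, since Eq. (5) leaves the decision after the final frame unspecified, it treats nothing as reused after the last step.

```python
import numpy as np

def conv_flops(c_in, c_out, h_out, w_out, k):
    """m_x or m_y from Eq. (4)."""
    return c_out * h_out * w_out * (k * k * c_in + 1)

def fused_flops(policy, m_x, m_y):
    """policy: [T, C'] integer array with 0 = keep, 1 = reuse, 2 = skip."""
    T, c = policy.shape
    total = 0.0
    for t in range(T):
        nxt = policy[t + 1] if t + 1 < T else np.full(c, 2)   # nothing reused after the last step
        computed = ((policy[t] == 0) | (nxt == 1)).mean()     # kept at t, or reused at t+1
        not_skipped = 1.0 - (policy[t] == 2).mean()           # inputs left for the second conv
        total += computed * m_x + not_skipped * m_y
    return total
```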
Loss functions. We take the average of framewise predictions as the video prediction and minimize:
$$L = \sum_{(x, y) \sim \mathcal{D}_{train}} \Big[ -y \log\big(P(x)\big) + \lambda \cdot \sum_{i=0}^{B-1} M_i \Big] \qquad (6)$$
¹Latency is another important measure of efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research.
The first term is the cross entropy between the one-hot encoded ground-truth labels $y$ and the predictions $P(x)$. The second term is the FLOPS measure over all $B$ temporal fusion blocks in the network. In this way, the network learns to trade off accuracy against efficiency, with the balance controlled by $\lambda$.
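A minimal sketch of the objective in Eq. (6) is given below; it assumes `flops_per_block` holds one (differentiable, policy-dependent) FLOPS estimate per temporal fusion block, and uses λ = 0.1 as in the experiments.

```python
import torch
import torch.nn.functional as F

def adafuse_loss(video_logits, labels, flops_per_block, lam=0.1):
    """Cross entropy on the averaged video prediction plus a FLOPS penalty."""
    ce = F.cross_entropy(video_logits, labels)            # video_logits: [B, num_classes]
    efficiency = torch.stack(list(flops_per_block)).sum() # sum over the B fusion blocks
    return ce + lam * efficiency
```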
Discrete policies for “keep”, “reuse” or “skip”, as shown in Eq. 3 and Eq. 5, make $L$ non-differentiable and hence hard to optimize. One common practice is to use a score-function estimator (e.g. REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samples, but the high variance of the estimator makes training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use the Gumbel-Softmax estimator to enable efficient end-to-end optimization.
Training using Gumbel Softmax Estimator. Specifically, the policy network first generates a logit vector $q \in \mathbb{R}^3$ for each channel of the output feature map, and we then use Softmax to derive a normalized categorical distribution: $\pi = \{r_i \mid r_i = \frac{\exp(q_i)}{\exp(q_0) + \exp(q_1) + \exp(q_2)}\}$. With the Gumbel-Max trick, discrete samples from the distribution $\pi$ can be drawn as (Jang et al., 2016): $\hat{r} = \arg\max_i (\log r_i + G_i)$, where $G_i = -\log(-\log U_i)$ is a standard Gumbel distribution with i.i.d. $U_i$ sampled from a uniform distribution Unif(0, 1). Since the argmax operator is not differentiable, the Gumbel-Softmax distribution is used as a continuous approximation. In the forward pass we represent the discrete sample $\hat{r}$ as a one-hot encoded vector, and in back-propagation we relax it to a real-valued vector $R = \{R_0, R_1, R_2\}$ via Softmax as follows:

$$R_i = \frac{\exp\big((\log r_i + G_i)/\tau\big)}{\sum_{j=0}^{2} \exp\big((\log r_j + G_j)/\tau\big)} \qquad (7)$$

where $\tau$ is a temperature factor controlling the “smoothness” of the distribution: as $\tau \to \infty$, $R$ converges to a uniform distribution, and as $\tau \to 0$, $R$ becomes a one-hot vector. We set $\tau = 0.67$ during training.
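In practice, straight-through Gumbel-Softmax sampling of the kind in Eq. (7) is available as a built-in in PyTorch; the sketch below uses it with the stated temperature τ = 0.67. The one-hot sample is what the fusion should consume so that gradients reach the policy network, while the integer view is convenient for FLOPS bookkeeping.

```python
import torch.nn.functional as F

def sample_policy(logits, tau=0.67):
    """logits: [B, C_out, 3] -> (one-hot sample, integer policy) over {keep, reuse, skip}."""
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)  # straight-through estimator
    return one_hot, one_hot.argmax(dim=-1)
```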
Network Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN model. Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019). For BN-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and the second convolution layers in each “residual block”/“inverted residual block”. We denote our model as AdaFuseMethodBackbone (e.g., AdaFuseTSNR18 is AdaFuse applied to TSN with a ResNet18 backbone), where the “Backbone” is chosen from {“R18” (ResNet18), “R50” (ResNet50), “Inc” (BN-Inception), “Eff” (EfficientNet)} and the “Method” is one of {“TSN”, “TSM”, “TSM+Last”}. More details can be found in the following section.
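For a ResNet backbone, the wiring could look like the sketch below: the policy module sits between conv1 and conv2 of a basic residual block, so conv2 consumes the fused feature map built from the current and the previous frame. This is an assumed arrangement for illustration (reusing the hypothetical ChannelPolicyNet, sample_policy and fuse_channels helpers from the earlier sketches), not the released implementation.

```python
import torch.nn as nn

class AdaFuseBasicBlock(nn.Module):
    """Wraps a torchvision-style BasicBlock and fuses conv1 outputs across time."""
    def __init__(self, block, policy_net):
        super().__init__()
        self.block = block            # provides conv1/bn1/relu/conv2/bn2/downsample
        self.policy_net = policy_net  # ChannelPolicyNet from the earlier sketch

    def forward(self, x_curr, x_prev, y1_prev):
        y1 = self.block.relu(self.block.bn1(self.block.conv1(x_curr)))
        one_hot, policy = sample_policy(self.policy_net(x_prev, x_curr))
        fused = fuse_channels(y1, y1_prev, policy)   # training would use the soft one-hot sample
        out = self.block.bn2(self.block.conv2(fused))
        identity = x_curr if self.block.downsample is None else self.block.downsample(x_curr)
        return self.block.relu(out + identity), y1   # y1 is cached for reuse at the next frame
```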
4 EXPERIMENTS
We first show that AdaFuse can significantly improve the accuracy and efficiency of ResNet18, BN-Inception and EfficientNet, outperforming other baselines by a large margin on Something-V1. Then, on all datasets, AdaFuse with ResNet18 / ResNet50 consistently outperforms the corresponding base models. We further propose two instantiations of AdaFuse on TSM (Lin et al., 2019) to compare with state-of-the-art approaches on Something V1 & V2: AdaFuseTSMR50 saves over 40% of FLOPS at a comparable classification score, and AdaFuseTSM+LastR50 outperforms state-of-the-art methods in accuracy. Finally, we perform comprehensive ablation studies and quantitative analysis to verify the effectiveness of our adaptive temporal fusion.
Datasets. We evaluate AdaFuse on Something-Something V1 (Goyal et al., 2017) & V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and a subset of Kinetics (Kay et al., 2017). Something V1 (98k videos) & V2 (194k videos) are two large-scale datasets sharing 174 human action labels (e.g. pretend to pick something up). Jester (Materzynska et al., 2019) has 27 annotated classes for hand gestures, with 119k / 15k videos in training / validation set. Mini-Kinetics (assembled by Meng et al. (2020)) is a subset of full Kinetics dataset (Kay et al., 2017) containing 121k videos for training and 10k videos for testing across 200 action classes.
Implementation details. To make a fair comparison, we carefully follow the training procedure in Lin et al. (2019). We uniformly sample T = 8 frames from each video. The input dimension for the network is 224× 224. Random scaling and cropping are used as data augmentation during training (and we further adopt random flipping for Mini-Kinetics). Center cropping is used during inference. All our networks are using ImageNet pretrained weights. We follow a step-wise learning rate scheduler with the initial learning rate as 0.002 and decay by 0.1 at epochs 20 & 40. To train
our adaptive temporal fusion approach, we set the efficiency term λ = 0.1. We train all the models for 50 epochs with a batch-size of 64, where each experiment takes 12∼ 24 hours on 4 Tesla V100 GPUs. We report the number of parameters used in each method, and measure the averaged FLOPS and Top1/Top5 accuracy for all the samples from each testing dataset.
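For reference, a minimal sketch of the stated training schedule is shown below. The optimizer type is not specified in the text, so SGD with momentum is an assumption; the learning rate, step milestones, frame count, input size, batch size, epochs and λ follow the settings described above.

```python
import torch

T_FRAMES, INPUT_SIZE, BATCH_SIZE, EPOCHS, LAMBDA = 8, 224, 64, 50, 0.1

def make_optimizer_and_scheduler(model):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.002, momentum=0.9)  # optimizer choice assumed
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 40], gamma=0.1)
    return optimizer, scheduler
```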
Adaptive Temporal Fusion improves 2D CNN Performance. On the Something V1 dataset, we show AdaFuse’s improvement upon 2D CNNs by comparing with several baselines:
• TSN (Wang et al., 2016): simply averages frame-level predictions as the video-level prediction.
• CGNet (Hua et al., 2019): a dynamic pruning method to reduce the computation cost of CNNs.
• Threshold: we keep a fixed portion of channels based on their activation L1 norms and skip the channels with smaller norms. It serves as a baseline for efficient recognition.
• RANDOM: we use temporal fusion with a randomly sampled policy (instead of the learned policy distribution). The distribution is chosen to match the FLOPS of the adaptive methods.
• LSTM: updates per-frame predictions with the hidden states of an LSTM and averages all predictions as the video-level prediction.
We implement all the methods using publicly available code and apply adaptive temporal fusion to TSN with ResNet18, BN-Inception and EfficientNet backbones, denoting them as AdaFuseTSNR18, AdaFuseTSNInc and AdaFuseTSNEff-x respectively (“x” stands for different scales of the EfficientNet backbones). As shown in Table 1, AdaFuseTSNR18 uses similar FLOPS to the efficient methods (“CGNet” and “Threshold”) but achieves a much higher classification accuracy. Specifically, AdaFuseTSNR18 and AdaFuseTSNInc outperform the corresponding TSN models by more than 20% in Top-1 accuracy, while using only 74% of the FLOPS. Interestingly, compared to TSN, even temporal fusion with a random policy achieves an absolute gain of 12.7% in accuracy, which shows that temporal fusion can greatly improve the action recognition performance of 2D CNNs. Additionally equipped with the adaptive policy, AdaFuseTSNR18 gains a further 9.4% in classification accuracy. LSTM is the most competitive baseline in terms of accuracy, yet AdaFuseTSNR18 has an absolute gain of 8.5% in accuracy while using only 70% of the FLOPS. When using a more efficient architecture, as shown in Table 2, our approach can still reduce the FLOPS by 10% while improving the accuracy by a large margin. To further validate that AdaFuse is model-agnostic and robust, we conduct extensive experiments using ResNet18 and
ResNet50 backbones on Something V1 & V2, Jester and Mini-Kinetics. As shown in Table 3, AdaFuseTSNR18 and AdaFuseTSNR50 consistently outperform their baseline TSN and LSTM models with a 35% saving in FLOPS on average. Our approach achieves large gains in accuracy and efficiency on temporal-rich datasets like Something V1 & V2 and Jester. When it comes to Mini-Kinetics, AdaFuse still achieves better accuracy with a 20%∼33% computation reduction. Comparison with Adaptive Inference Method. We compare our approach with AR-Net (Meng et al., 2020), which adaptively chooses frame resolutions for efficient inference. As shown in Table 4, on Something V1, Jester and Mini-Kinetics, we achieve a better accuracy-efficiency trade-off than AR-Net while using 40% fewer parameters. On a temporal-rich dataset like Something-V1, our approach attains the largest improvement, which shows AdaFuseTSNR50’s capability for strong temporal modelling.
Comparison with State-of-the-Art Methods. We apply adaptive temporal fusion with different backbones (ResNet50 (He et al., 2016), BN-Inception (Ioffe & Szegedy, 2015)) and designs (TSN (Wang et al., 2016), TSM (Lin et al., 2019)) and compare with state-of-the-art methods on Something V1 & V2. As shown in Table 5, using BN-Inception as the backbone, AdaFuseTSNInc is 4% more accurate than “TRNMultiscale” (Zhou et al., 2018) while using only 75% of the FLOPS. AdaFuseTSNR50 with ResNet50 can even outperform the 3D CNN method “I3D” (Carreira & Zisserman, 2017) and the hybrid 2D/3D CNN method “ECO” (Zolfaghari et al., 2018) with far fewer FLOPS.
As for adaptive temporal fusion on “TSM” (Lin et al., 2019), AdaFuseTSMR50 achieves more than 40% savings in computation but at a 1% loss in accuracy (Table 5). We believe this is because TSM uses the temporal shift operation, which can be seen as a variant of temporal fusion; too much temporal fusion can degrade performance due to weaker spatial modelling capability. As a remedy, we adopt adaptive temporal fusion only in the last block of TSM to capture high-level semantics (more intuition can be found later in our visualization experiments) and denote this variant AdaFuseTSM+LastR50. On the Something V1 & V2 datasets, AdaFuseTSM+LastR50 outperforms TSM and all other state-of-the-art methods in accuracy with a 5% saving in FLOPS compared to TSM. From our experiments, we observe that the performance of adaptive temporal fusion depends on the position of the shift modules in TSM, and optimizing the position of such modules through additional regularization could help us not only achieve better accuracy but also lower the number of parameters. We leave this as interesting future work.
We depict the accuracy, computation cost and model sizes in Figure 2. All results are computed on the Something V1 validation set. The graph shows GFLOPS / accuracy on the x / y-axis, and the diameter of each data point is proportional to the number of model parameters. AdaFuse (blue points) offers the best accuracy-efficiency trade-off at a model size comparable to other 2D CNN approaches. Once again, this shows that AdaFuse is an effective and efficient design for action recognition.
Policy Visualizations. Figure 3 shows overall policy (“Skip”, “Reuse” and “Keep”) differences across all datasets. We focus on the “Reuse / Keep” quotient as it indicates the mixture ratio for feature fusion. The quotients on the Something V1&V2 and Jester datasets are very high (0.694, 0.741 and 0.574 respectively) compared to Mini-Kinetics (0.232). This is probably because the first three datasets contain more temporal relationships than Kinetics. Moreover, Jester has the highest percentage of skipping, which indicates that many actions in this dataset can be correctly recognized with few channels: training on Jester is biased more towards optimizing for efficiency, as the accuracy loss is very low. These distinctive policy patterns reflect different characteristics of the datasets, suggesting that our proposed approach could also serve as a “dataset inspector”.
Figure 4 shows a more fine-grained policy distribution on Something V2. We plot the policy usage in each residual block of the ResNet50 architecture (shown in light red/orange/blue) and use 3rd-order polynomials to estimate the trend of each policy (shown as black dashed curves). To further study the time-sensitiveness of the policies, we calculate the number of channels where the policies stay unchanged across the frames of one video (shown in dark red/orange/blue). We find earlier layers tend to skip more and reuse/keep less, and vice versa. The first several convolution blocks normally capture low-level feature maps with large spatial sizes, so the “information density” along the channel dimension is lower, which results in more redundancy across channels. Later blocks often capture high-level semantics and the feature maps are smaller in spatial dimensions, so the “semantic density” is higher and fewer channels are skipped. In addition, low-level features change faster across frames (shades, lighting intensity) whereas high-level semantics change slowly across frames (e.g. “kicking soccer”), so more features can be reused in later layers to avoid computing the same semantics again. As for time-sensitiveness, earlier layers tend to be less sensitive and vice versa. We find that “reuse” is the most time-sensitive policy, as the “Reuse (Instance)” ratio is very low, which again shows that the adaptive temporal fusion is functioning. We believe these findings will provide insights for future designs of effective temporal fusion.
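The statistics behind these plots can be reproduced with a short helper like the one below (an illustrative sketch): given the recorded integer policies of one block for one video, it reports the keep/reuse/skip fractions and the fraction of channels whose decision stays unchanged across all frames.

```python
import numpy as np

def policy_stats(policy):
    """policy: [T, C] integer array with 0 = keep, 1 = reuse, 2 = skip."""
    stats = {name: float((policy == v).mean())
             for v, name in enumerate(["keep", "reuse", "skip"])}
    stats["unchanged_across_frames"] = float((policy == policy[0]).all(axis=0).mean())
    return stats
```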
How does the adaptive policy affect the performance? We consider AdaFuseTSNR18 on the Something V1 dataset and break down the contributions of the “skip” and “reuse” operations and of adaptive (Ada.) policy learning. As shown
Table 6: Effect of different policies (using AdaFuseTSNR18 ) on Something V1 dataset.
Method            Skip   Reuse   Ada.   FLOPS   Top1
TSN               ✗      ✗       ✗      14.6G   14.8
Ada. Skip         ✓      ✗       ✓      6.6G    9.5
Ada. Reuse        ✗      ✓       ✓      13.8G   36.3
Random            ✓      ✓       ✗      10.4G   27.5
AdaFuseTSNR18     ✓      ✓       ✓      10.3G   36.9
Table 7: Effect of hidden sizes and efficient weights on the performance of AdaFuseTSM+LastR50 on SthV2.
#Hidden Units   λ       #Params   FLOPS    Top1    Skip   Reuse
1024            0.050   39.1M     31.53G   59.71   13%    14%
1024            0.075   39.1M     31.29G   59.75   15%    13%
1024            0.100   39.1M     31.04G   59.40   18%    12%
2048            0.100   54.3M     30.97G   59.96   21%    10%
4096            0.100   84.7M     31.04G   60.00   25%    8%
in Table 6, “Ada. Skip” saves 55% of the FLOPS compared to TSN, but with a large degradation in accuracy. This shows that naively skipping channels does not give better classification performance. The “Ada. Reuse” approach brings a 21.5% absolute gain in accuracy, which shows the importance of temporal fusion; however, it fails to save much FLOPS due to the absence of the skipping operation. Combining “Keep” with both “Skip” and “Reuse” via just a random policy already achieves a better trade-off than TSN, and with the adaptive learning approach, AdaFuseTSNR18 reaches the highest accuracy with the second-best efficiency. In summary, the “Skip” operation contributes the most to computation efficiency, the “Reuse” operation boosts classification accuracy, and the adaptive policy ties the whole system together and achieves the best performance.
How to achieve a better performance? Here we investigate different settings to improve the performance of AdaFuseTSM+LastR50 on the Something V2 dataset. As shown in Table 7, increasing λ yields better efficiency but may degrade accuracy. Enlarging the number of hidden units in the policy network gives a better overall performance: as we increase the size from 1024 to 4096, the accuracy keeps increasing. When the policy network grows larger, it learns to skip more to reduce computation and to reuse history features wisely for recognition. Note, however, that the model size grows almost linearly with the hidden layer size, which adds considerable overhead to the FLOPS. As a compromise, we choose λ = 0.075 and a hidden size of 1024 for AdaFuseTSM+LastR50. We leave the design of a more advanced policy module for future work.
Runtime/Hardware. Sparse convolutional kernels are often less efficient on current hardware, e.g., GPUs. However, we strongly believe that it is important to explore models for efficient video action recognition, which might guide the direction of new hardware development in the years to come. Furthermore, we also expect wall-clock speed-ups at inference time via efficient CUDA implementations, which we anticipate will be developed.
5 CONCLUSIONS
We have shown the effectiveness of adaptive temporal fusion for efficient video recognition. Comprehensive experiments on four challenging and diverse datasets present a broad spectrum of accuracy-efficiency trade-offs. Our approach is model-agnostic, which allows it to serve as a plugin operation for a wide range of architectures for video recognition tasks.
Acknowledgements. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This work is also partly supported by the MIT-IBM Watson AI Lab.
Disclaimer. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.
1. What is the focus of the paper regarding action recognition?
2. What are the strengths of the proposed approach, particularly its flexibility and efficiency?
3. Do you have any concerns about the paper's experiments or comparisons with other works?
4. How does the reviewer assess the clarity and relevance of the paper's content?
Review
The authors assess how their adaptive temporal fusion network performs on public datasets such as Something V1&2, Kinetics, etc. The contribution of this paper is an approach that automatically determines which channels to keep, reuse, or skip per layer and per target instance, resulting in efficient action recognition.
STRENGTHS: The proposed method is model-agnostic, making it easy to use as a plugin operation for other network architectures. It reuses history features when necessary, making the network capable of strong temporal modeling.
CONCERNS: The paper examines the temporal fusion module on BN-Inception and ResNet models, while evaluation on more recent models is missing. While the policy network is defined as two FC layers and a ReLU, it is not clear why the authors chose this architecture or how they tuned it. In Section 3, Using 2D-CNN for Action Recognition, a citation to one of the recent works on modeling temporal causality is missing: Asghari-Esfeden, Sadjad, Mario Sznaier, and Octavia Camps. "Dynamic Motion Representation for Human Action Recognition." In The IEEE Winter Conference on Applications of Computer Vision, pp. 557-566. 2020.
ICLR | Title
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition
Abstract
Temporal modelling is the key for efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation leading to efficient action recognition. In this paper, we introduce an adaptive temporal fusion network, called AdaFuse, that dynamically fuses channels from current and past feature maps for strong temporal modelling. Specifically, the necessary information from the historical convolution feature maps is fused with current pruned feature maps with the goal of improving both recognition accuracy and efficiency. In addition, we use a skipping operation to further reduce the computation cost of action recognition. Extensive experiments on Something V1&V2, Jester and Mini-Kinetics show that our approach can achieve about 40% computation savings with comparable accuracy to state-of-the-art methods. The project page can be found at https://mengyuest.github.io/AdaFuse/
1 INTRODUCTION
Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource constrained applications due to the heavy computational burden of deep Convolutional Neural Network (CNN) models.
Motivated by the need of efficiency, extensive studies have been recently conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that there exists redundancy in CNN features which can significantly save computation leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: A set of still frames for a simple action (e.g. “Sleeping”) will have a higher redundancy comparing to a fast-changed action with rich interaction and deformation (e.g. “Pulling two ends of something so that it gets stretched”). Thus, based on the input we could compute just a subset of features, while the rest of the channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective for efficient action recognition by adaptively deciding what channels to compute or reuse, on a per instance basis, for recognizing complex actions.
In this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., dynamically decides which channels to keep, reuse or skip per layer and per instance) with the goal of improving both recognition
∗Email: [email protected]. This work was done while Yue was an AI Resident at IBM Research.
accuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both competitive performance and resource efficiency required for action recognition. Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces the computation without accuracy loss.
The main contributions of our work are as follows:
• We propose a novel approach that automatically determines which channels to keep, reuse or skip per layer and per target instance for efficient action recognition.
• Our approach is model-agnostic, which allows this to be served as a plugin operation for a wide range of 2D CNN-based action recognition architectures.
• The overall policy distribution can be seen as an indicator for the dataset characteristic, and the block-level distribution can bring potential guidance for future architecture designs.
• We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods.
2 RELATED WORK
Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by either applying 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical-flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach for CNN-based action recognition is the use of 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules such as temporal averaging in TSN (Wang et al., 2016), a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017) and ResNet3D (Hara et al., 2018), that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for spatial-temporal decomposition of 3D kernels in Inception-based architectures.
While extensive studies have been conducted in the last few years, limited efforts have been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce the complexity by decomposing the 3D convolution into 2D spatial convolution and 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). Our approach is most related to the latter which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce the computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make it more efficient for action recognition.
Conditional Computation. Many conditional computation methods have been recently proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works have been
proposed that add decision branches to different layers of CNNs to learn whether to exit the network for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) or dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation for a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, Channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus on only dropping unimportant channels, our proposed approach also reuses history features when necessary to make the network capable for strong temporal modelling.
3 METHODOLOGY
In this section, we first show the general approach using 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computation cost. Finally, we describe the end-to-end optimization and network specifications.
Using 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes uniformly sampled T frames {X1...XT } and predicts the un-normalized class score:
P (X1, ..., XT ; Θ) = G (F(X1; Θ),F(X2; Θ), ...,F(XT ; Θ)) (1)
where F(·; Θ) is the 2D-CNN with learnable parameters Θ. The consensus function G reduces the frame-level predictions to a final prediction. One common practice for G is the averaging operation. The major drawback is that this cannot capture the order of the frames. The network performs poorly on datasets that contain temporal-related labels (e.g. “turning left”, “moving forward”, etc). LSTM (Hochreiter & Schmidhuber, 1997) can also be used as G to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019). A few works have been recently proposed to model temporal causality using a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work, we hypothesis that an inputdependent fusion of framewise features will be beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition.
Adaptive Temporal Fusion. Consider a single 2D convolutional layer: yt = φ(Wx ∗xt+bx), where xt ∈ Rc×h×w denotes the input feature map at time step t with c channels and spatial dimension h × w, and yt ∈ Rc ′×h′×w′ is the output feature map. Wx ∈ Rc ′×k×k×c denotes the convolution filters (with kernel size k× k) and bx ∈ Rc ′
is the bias. We use “∗” for convolution operation. φ(·) is the combination of batchnorm and non-linear functions (e.g. ReLU (Nair & Hinton, 2010)).
We introduce a policy network consisting of two fully-connected layers and a ReLU function designed to adaptively select channels for keeping, reusing or skipping. As shown in Figure 1, at time t, we first generate feature vectors vt−1, vt ∈ Rc from history feature map xt−1 and current feature map xt via global average pooling. Then the policy network predicts:
pt = g(vt−1, vt; Θg) (2)
where pt ∈ {0, 1, 2}c ′
is a channel-wise policy (choosing “keep”, “reuse” or “skip”) to generate the output feature map: if pit = 0, the i-th channel of output feature map will be computed via the normal convolution; if pit = 1, it will reuse the i-th channel of the feature map yt−1 which has been already computed at time t − 1; otherwise, the i-th channel will be just padded with zeros. Formally, this output feature map can be written as ỹt = f(yt−1, yt, pt) where the i-th channel is:
ỹit = 1 [ pit = 0 ] · yit + 1 [ pit = 1 ] · yit−1 (3)
here 1 [·] is the indicator function. In Figure 1, the policy network instructs the convolution layer to only compute the first and fourth channels, reuses the second channel of the history feature and skips the third channel. Features from varied time steps are adaptively fused along the channel dimension.
Adaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal perceptive field grows linearly to the depth of the layers, as more features from different time steps are fused when going deeper in the network. Our novel design can be seen as a general methodology for many state-of-the-art 2D-CNN approaches: if we discard "skip" and use a predefined fixed policy, then it becomes the online temporal fusion in Lin et al. (2019). If the policy only chooses from "skip" and "keep", then it becomes dynamic pruning methods (Gao et al., 2018; Hua et al., 2019). Our design is a generalized approach taking both temporal modelling and efficiency into consideration.
Complexity Analysis. To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), which is a hardware-independent metric and widely used in the field of efficient action recognition1(Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019). To compute saving from layers before and after the policy network, we add another convolution after ỹt with kernel Wy ∈ Rc ′′×k′×k′×c′ and bias by ∈ Rc ′′
. The total FLOPS for each convolution will be:{ mx = c
′ · h′ · w′ · (k · k · c+ 1) my = c ′′ · h′′ · w′′ · (k′ · k′ · c′ + 1) (4)
When the policy is applied, only those output channels used in time t or going to be reused in time t+ 1 need to be computed in the first convolution layer, and only the channels not skipped in time t count for input feature maps for the second convolution layer. Hence the overall FLOPS is:
M = T−1∑ τ=0
[ 1
c′ c′−1∑ i=0
Keep at τ or resue at τ + 1︷ ︸︸ ︷ 1 [ piτ · (piτ+1 − 1) = 0
] ·mx︸ ︷︷ ︸
FLOPS from the first conv at time τ
+ (1− 1 c′ c′−1∑ i=0 Skip at τ︷ ︸︸ ︷ 1(piτ = 2)) ·my︸ ︷︷ ︸
FLOPS from the second conv at time τ
] (5)
Thus when the policy network skips more channels or reuses channels that are already computed in the previous time step, the FLOPS for those two convolution layers can be reduced proportionally.
Loss functions. We take the average of framewise predictions as the video prediction and minimize:
L = ∑
(x,y)∼Dtrain
[ −y log(P (x)) + λ ·
B−1∑ i=0 Mi
] (6)
1Latency is another important measure for efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research.
The first term is the cross entropy between one-hot encoded ground truth labels y and predictions P (x). The second term is the FLOPS measure for all the B temporal fusion blocks in the network. In this way, our network is learned to achieve both accuracy and efficiency at a trade-off controlled by λ.
Discrete policies for “keep”, “reuse” or “skip” shown in Eq. 3 and Eq. 5 make L non-differentiable hence hard to optimize. One common practice is to use a score function estimator (e.g. REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samplings, but the high variance of the estimator makes the training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use Gumbel-Softmax Estimator to enable efficient end-to-end optimization.
Training using Gumbel Softmax Estimator. Specifically, the policy network first generates a logit q ∈ R3 for each channel in the output feature map and then we use Softmax to derive a normalized categorical distribution: π = {ri|ri = exp(qi)exp (q0)+exp (q1)+exp (q2)}. With the Gumbel-Max trick, discrete samples from the distribution π can be drawn as (Jang et al., 2016): r̂ = argmaxi(log ri+Gi), where Gi = − log(− logUi) is a standard Gumbel distribution with i.i.d. Ui sampled from a uniform distribution Unif(0, 1). Since the argmax operator is not differentiable, the Gumbel Softmax distribution is used as a continuous approximation. In forward pass we represent the discrete sample r̂ as a one-hot encoded vector and in back-propagation we relax it to a real-valued vector R = {R0, R1, R2} via Softmax as follows:
Ri = exp ((log ri +Gi)/τ)∑2 j=1 exp ((log rj +Gj)/τ)
(7)
where τ is a temperature factor controlling the “smooothness” of the distribution: lim τ→∞ R converges to a uniform distribution and lim
τ→0 R becomes a one-hot vector. We set τ = 0.67 during the training.
Network Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN models. Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019). For Bn-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and the second convolution layers in each “residual block"/“inverted residual block". We denote our model as AdaFuseMethodBackbone, where the “Backbone” is chosen from {“R18”(ResNet18), “R50”(ResNet50), “Inc”(BN-Inception), “Eff”(EfficientNet)}, and the “Method” can be {“TSN”, “TSM”, “TSM+Last”}. More details can be found in the following section.
4 EXPERIMENTS
We first show AdaFuse can significantly improve the accuracy and efficiency of ResNet18, BNInception and EfficientNet, outperforming other baselines by a large margin on Something-V1. Then on all datasets, AdaFuse with ResNet18 / ResNet50 can consistently outperform corresponding base models. We further propose two instantiations using AdaFuse on TSM (Lin et al., 2019) to compare with state-of-the-art approaches on Something V1 & V2: AdaFuseTSMR50 can save over 40% FLOPS at a comparable classification score under same amount of computation budget, AdaFuseTSM+LastR50 outperforms state-of-the-art methods in accuracy. Finally, we perform comprehensive ablation studies and quantitative analysis to verify the effectiveness of our adaptive temporal fusion.
Datasets. We evaluate AdaFuse on Something-Something V1 (Goyal et al., 2017) & V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and a subset of Kinetics (Kay et al., 2017). Something V1 (98k videos) & V2 (194k videos) are two large-scale datasets sharing 174 human action labels (e.g. pretend to pick something up). Jester (Materzynska et al., 2019) has 27 annotated classes for hand gestures, with 119k / 15k videos in training / validation set. Mini-Kinetics (assembled by Meng et al. (2020)) is a subset of full Kinetics dataset (Kay et al., 2017) containing 121k videos for training and 10k videos for testing across 200 action classes.
Implementation details. To make a fair comparison, we carefully follow the training procedure in Lin et al. (2019). We uniformly sample T = 8 frames from each video. The input dimension for the network is 224× 224. Random scaling and cropping are used as data augmentation during training (and we further adopt random flipping for Mini-Kinetics). Center cropping is used during inference. All our networks are using ImageNet pretrained weights. We follow a step-wise learning rate scheduler with the initial learning rate as 0.002 and decay by 0.1 at epochs 20 & 40. To train
our adaptive temporal fusion approach, we set the efficiency term λ = 0.1. We train all the models for 50 epochs with a batch-size of 64, where each experiment takes 12∼ 24 hours on 4 Tesla V100 GPUs. We report the number of parameters used in each method, and measure the averaged FLOPS and Top1/Top5 accuracy for all the samples from each testing dataset.
Adaptive Temporal Fusion improves 2D CNN Performance. On Something V1 dataset, we show AdaFuse ’s improvement upon 2D CNNs by comparing with several baselines as follows:
• TSN (Wang et al., 2016): Simply average frame-level predictions as the video-level prediction. • CGNet (Hua et al., 2019): A dynamic pruning method to reduce computation cost for CNNs. • Threshold: We keep a fixed portion of channels base on their activation L1 norms and skip the
channels in smaller norms. It serves as a baseline for efficient recognition. • RANDOM: We use temporal fusion with a randomly sampled policy (instead of using learned
policy distribution). The distribution is chosen to match the FLOPS of adaptive methods. • LSTM: Update per-frame predictions by hidden states in LSTM and averages all predictions as
the video-level prediction.
We implement all the methods using publicly available code and apply adaptive temporal fusion in TSN using ResNet18, BN-Inception and EfficientNet backbones, denoting them as AdaFuseTSNR18 AdaFuseTSNInc and AdaFuseTSNEff-x respectively (“x” stands for different scales of the EfficientNet backbones). As shown in Table 1, AdaFuseTSNR18 uses the similar FLOPS as those efficient methods (“CGNet” and “Threshold”) but has a great improvement in classification accuracy Specifically, AdaFuseTSNR18 and AdaFuseTSNInc outperform corresponding TSN models by more than 20% in Top-1 accuracy, while using only 74% of FLOPS. Interestingly, comparing to TSN, even temporal fusion with a random policy can achieve an absolute gain of 12.7% in accuracy, which shows that temporal fusion can greatly improve the action recognition performance of 2D CNNs. Additionally equipped with the adaptive policy, AdaFuseTSNR18 can get 9.4% extra improvement in classification. LSTM is the most competitive baseline in terms of accuracy, while AdaFuseTSNR18 has an absolute gain of 8.5% in accuracy and uses only 70% of FLOPS. When using a more efficient architecture as shown in Table.2, our approach can still reduce 10% of the FLOPS while improving the accuracy by a large margin. To further validate AdaFuse being model-agnostic and robust, we conduct extensive experiments using ResNet18 and
ResNet50 backbones on Something V1 & V2, Jester and Mini-Kinetics. As shown in Table 3, AdaFuseTSNR18 and AdaFuseTSNR50 consistently outperform their baseline TSN and LSTM models with a 35% saving in FLOPS on average. Our approach harvests large gains in accuracy and efficiency on temporal-rich datasets like Something V1 & V2 and Jester. When comes to Mini-Kinetics, AdaFuse can still achieve a better accuracy with 20%∼33% computation reduction. Comparison with Adaptive Inference Method. We compare our approach with AR-Net (Meng et al., 2020), which adaptively chooses frame resolutions for efficient inference. As shown in Table 4, on Something V1, Jester and Mini-Kinetics, we achieve a better accuracy-efficiency trade-off than AR-Net while using 40% less parameters. On temporal-rich dataset like Something-V1, our approach attains the largest improvement, which shows AdaFuseTSNR50 ’s capability for strong temporal modelling.
Comparison with State-of-the-Art Methods. We apply adaptive temporal fusion with different backbones (ResNet50 (He et al., 2016), BN-Inception (Ioffe & Szegedy, 2015)) and designs (TSN (Wang et al., 2016), TSM (Lin et al., 2019)) and compare with State-of-the-Art methods on Something V1 & V2. As shown in Table 5, using BN-Inception as backbone, AdaFuseTSNInc is 4% better than “TRNMultiscale” (Zhou et al., 2018) in accuracy, using only 75% of the FLOPS. AdaFuseTSNR50 with ResNet50 can even outperform 3D CNN method “I3D” (Carreira & Zisserman, 2017) and hybrid 2D/3D CNN method “ECO” (Zolfaghari et al., 2018) with much less FLOPS.
As for adaptive temporal fusion on “TSM” (Lin et al., 2019), AdaFuseTSMR50 achieves more than 40% savings in computation but at 1% loss in accuracy (Table 5). We believe this is because TSM uses temporal shift operation, which can be seen as a variant of temporal fusion. Too much temporal fusion could cause performance degradation due to a worse spatial modelling capability. As a remedy, we just adopt adaptive temporal fusion in the last block in TSM to capture high-level semantics (more intuition can be found later in our visualization experiments) and denote it as AdaFuseTSM+LastR50 . On Something V1 & V2 datasets, AdaFuseTSM+LastR50 outperforms TSM and all other state-of-the-art methods in accuracy with a 5% saving in FLOPS comparing to TSM. From our experiments, we observe that the performance of adaptive temporal fusion depends on the position of shift modules in TSM and optimizing the position of such modules through additional regularization could help us not only to achieve better accuracy but also to lower the number of parameters. We leave this as an interesting future work.
We depict the accuracy, computation cost and model sizes in Figure 2. All the results are computed from Something V1 validation set. The graph shows GFLOPS / accuracy on x / y-axis and the diameter of each data point is proportional to the number of model parameters. AdaFuse (blue points) owns the best trade-off for accuracy and efficiency at a comparable model size to other 2D CNN approaches. Once again it shows AdaFuse is an effective and efficient design for action recognition.
Policy Visualizations. Figure 3 shows overall policy (“Skip”, “Reuse” and “Keep”) differences across all datasets. We focus on the quotient of “Reuse / Keep” as it indicates the mixture ratio for feature fusion. The quotients on Something V1&V2 and Jester datasets are very high (0.694, 0.741 and 0.574 respectively) when comparing to Mini-Kinetics (0.232). This is probably because the first three datasets contain more temporal relationship than Kinetics. Moreover, Jester has the highest percentage in skipping which indicates many actions in this dataset can be correctly recognized with few channels: Training on Jester is more biased towards optimizing for efficiency as the accuracy loss is very low. Distinctive policy patterns show different characteristics of datasets, which conveys a potential of our proposed approach to be served as a “dataset inspector”.
Figure 4 shows a more fine-grained policy distribution on Something V2. We plot the policy usage in each residual block inside the ResNet50 architecture (shown in light red/orange/blue) and use 3rd-order polynomials to estimate the trend of each policy (shown in black dash curves). To further study the time-sensitiveness of the policies, we calculate the number of channels where the policies stay unchanged across the frames in one video (shown in dark red/orange/blue). We find earlier layers tend to skip more and reuse/keep less, and vice versa. The first several convolution blocks normally capture low-level feature maps in large spatial sizes, so the “information density” on channel dimension should be less which results in more redundancy across channels. Later blocks often capture high-level semantics and the feature maps are smaller in spatial dimensions, so the “semantic density” could be higher and less channels will be skipped. In addition, low-level features change faster across the frames (shades, lighting intensity) whereas high-level semantics change slowly across the frames (e.g. "kicking soccer"), that’s why more features can be reused in later layers to avoid computing the same semantic again. As for the time-sensitiveness, earlier layers tend to be less sensitive and vice versa. We find that “reuse” is the most time-sensitive policy, as “Reuse (Instance)” ratio is very low, which again shows the functioning of adaptive temporal fusion. We believe these findings will provide insights to future designs of effective temporal fusions.
How does the adaptive policy affect the performance? We consider AdaFuseTSNR18 on Something V1 dataset and break down by using “skip”, ‘reuse” and adaptive (Ada.) policy learning. As shown
Table 6: Effect of different policies (using AdaFuseTSNR18 ) on Something V1 dataset.
Method Skip Reuse Ada. FLOPS Top1
TSN 8 8 8 14.6G 14.8 Ada. Skip 4 8 4 6.6G 9.5
Ada. Reuse 8 4 4 13.8G 36.3 Random 4 4 8 10.4G 27.5 AdaFuseTSNR18 4 4 4 10.3G 36.9
Table 7: Effect of hidden sizes and efficient weights on the performance of AdaFuseTSM+LastR50 on SthV2.
#Hidden Units λ #Params FLOPS Top1 Skip Reuse
1024 0.050 39.1M 31.53G 59.71 13% 14% 1024 0.075 39.1M 31.29G 59.75 15% 13% 1024 0.100 39.1M 31.04G 59.40 18% 12% 2048 0.100 54.3M 30.97G 59.96 21% 10% 4096 0.100 84.7M 31.04G 60.00 25% 8%
in Table 6, “Ada. Skip” saves 55% of FLOPS comparing to TSN but at a great degradation in accuracy. This shows naively skipping channels won’t give a better classification performance. “Ada. Reuse” approach brings 21.5% absolute gain in accuracy, which shows the importance of temporal fusion. However, it fails to save much FLOPS due to the absence of skipping operation. Combining “Keep” with both “Skip” and “Reuse” via just a random policy is already achieving a better trade-off comparing to TSN, and by using adaptive learning approach, AdaFuseTSNR18 reaches the highest accuracy with the second-best efficiency. In summary, the “Skip” operation contributes the most to the computation efficiency, the “Reuse” operation boosts the classification accuracy, while the adaptive policy adds the chemistry to the whole system and achieves the best performance.
How to achieve a better performance? Here we investigate different settings to improve the performance of AdaFuseTSM+LastR50 on the Something V2 dataset. As shown in Table 7, increasing λ improves efficiency but may result in accuracy degradation. Enlarging the number of hidden units in the policy network yields a better overall performance: as we increase the size from 1024 to 4096, the accuracy keeps increasing. When the policy network grows larger, it learns to skip more to reduce computation and to reuse history features wisely for recognition. Note, however, that the model size grows almost linearly with the hidden layer size, which adds a considerable overhead to the FLOPS. As a compromise, we choose λ = 0.075 and a hidden size of 1024 for AdaFuseTSM+LastR50. We leave the design of a more advanced and delicate policy module for future work.
Runtime/Hardware. Sparse convolutional kernels are often less efficient on current hardware, e.g., GPUs. However, we strongly believe that it is important to explore models for efficient video action recognition, which might guide the direction of new hardware development in the years to come. Furthermore, we also expect wall-clock speed-ups at inference via efficient CUDA implementations, which we anticipate will be developed.
5 CONCLUSIONS
We have shown the effectiveness of adaptive temporal fusion for efficient video recognition. Comprehensive experiments on four challenging and diverse datasets present a broad spectrum of accuracy-efficiency trade-offs. Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of architectures for video recognition tasks.
Acknowledgements. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This work is also partly supported by the MIT-IBM Watson AI Lab.
Disclaimer. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government. | 1. What is the focus of the paper regarding adaptive inference models for efficient action recognition in videos?
2. What are the strengths of the proposed model, particularly in its novel idea and experimental results?
3. What are the limitations of the paper, especially regarding its technical novelty and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor comments or suggestions provided by the reviewer regarding the paper's content or presentation? | Review | Review
#################################
Summary:
The paper presented an adaptive inference model for efficient action recognition in videos. The core of the model is the dynamic gating of feature channels that controls the fusion between two frame features, whereby the gating is conditioned on the input video and helps to reduce the computational cost at runtime. The proposed model was evaluated on several video action datasets and compared against a number of existing deep models. The results demonstrated a good efficiency-accuracy trade-off for the proposed model.
#################################
Pros:
The paper has a novel idea (adaptive temporal feature fusion) and addresses an important problem in vision (efficient action recognition).
Solid experiments on multiple datasets. The analysis of the learned policy is quite interesting.
Well-written paper
#################################
Cons:
Limited technical novelty
The idea of building adaptive inference models with a policy network for video classification has been previously explored by Wu et al., Meng et al. and others (e.g., skip part of the model, select a subset of frames, choose the input resolution to the model). The main technical component of the model is also very similar to the channel gating network (Hua et al.). The key innovation seems to be the perspective of modeling temporal feature fusion for adaptive inference. This is probably best considered as in parallel to previous approaches for adaptive video recognition. The technical components thus look less exciting.
Lack of comparison to other adaptive inference models / temporal fusion schemes
There isn’t a real comparison between the proposed method and recent works on adaptive inference for video recognition (e.g., Wu et al., Meng et al.). The benefit of modeling temporal feature fusion --- a main contribution of the paper --- thus remains unclear with respect to other design choices (e.g., input resolution or frame selection). I’d suggest some experiments that compare to those works. Another important experiment is to contrast the proposed method with other temporal feature fusion schemes (e.g., LSTM, TSM). For example, TSM --- a hand-crafted feature fusion module --- seems to have fewer parameters, slightly higher FLOPs and comparable accuracy (Table 3). If that is the case, the contribution of the proposed adaptive fusion scheme is much weakened.
#################################
Minor comments:
It is not totally clear to me how the FLOPs of the proposed model are computed. As the proposed model will have a different FLOP conditioned on the input video, were the reported FLOPs averaged across the dataset? I was not able to find a description in the paper.
It will be great if the authors can report some run-time performance (e.g., wall time). To achieve the theoretic FLOPs, the proposed model will rely on filter re-arrangement on the fly and sparse convolution kernels. Both can be less efficient on certain devices, e.g., GPUs.
#################################
Justification for score:
All in all a good paper. My main concern is the missing link / comparison to previous works on adaptive video recognition. If this concern can be addressed, I am happy to raise my rating. |
ICLR | Title
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition
Abstract
Temporal modelling is the key for efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation leading to efficient action recognition. In this paper, we introduce an adaptive temporal fusion network, called AdaFuse, that dynamically fuses channels from current and past feature maps for strong temporal modelling. Specifically, the necessary information from the historical convolution feature maps is fused with current pruned feature maps with the goal of improving both recognition accuracy and efficiency. In addition, we use a skipping operation to further reduce the computation cost of action recognition. Extensive experiments on Something V1&V2, Jester and Mini-Kinetics show that our approach can achieve about 40% computation savings with comparable accuracy to state-of-the-art methods. The project page can be found at https://mengyuest.github.io/AdaFuse/
1 INTRODUCTION
Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource constrained applications due to the heavy computational burden of deep Convolutional Neural Network (CNN) models.
Motivated by the need of efficiency, extensive studies have been recently conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that there exists redundancy in CNN features which can significantly save computation leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: A set of still frames for a simple action (e.g. “Sleeping”) will have a higher redundancy comparing to a fast-changed action with rich interaction and deformation (e.g. “Pulling two ends of something so that it gets stretched”). Thus, based on the input we could compute just a subset of features, while the rest of the channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective for efficient action recognition by adaptively deciding what channels to compute or reuse, on a per instance basis, for recognizing complex actions.
In this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., dynamically decides which channels to keep, reuse or skip per layer and per instance) with the goal of improving both recognition
∗Email: [email protected]. This work was done while Yue was an AI Resident at IBM Research.
accuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both competitive performance and resource efficiency required for action recognition. Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces the computation without accuracy loss.
The main contributions of our work are as follows:
• We propose a novel approach that automatically determines which channels to keep, reuse or skip per layer and per target instance for efficient action recognition.
• Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of 2D CNN-based action recognition architectures.
• The overall policy distribution can be seen as an indicator of dataset characteristics, and the block-level distribution can provide guidance for future architecture designs.
• We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods.
2 RELATED WORK
Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by either applying 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical-flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach for CNN-based action recognition is the use of 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules such as temporal averaging in TSN (Wang et al., 2016), a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017) and ResNet3D (Hara et al., 2018), that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for spatial-temporal decomposition of 3D kernels in Inception-based architectures.
While extensive studies have been conducted in the last few years, limited efforts have been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce the complexity by decomposing the 3D convolution into 2D spatial convolution and 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). Our approach is most related to the latter which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce the computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make it more efficient for action recognition.
Conditional Computation. Many conditional computation methods have been recently proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works have been
proposed that add decision branches to different layers of CNNs to learn whether to exit the network for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) or dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation for a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, Channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus on only dropping unimportant channels, our proposed approach also reuses history features when necessary to make the network capable for strong temporal modelling.
3 METHODOLOGY
In this section, we first show the general approach using 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computation cost. Finally, we describe the end-to-end optimization and network specifications.
Using 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes uniformly sampled T frames {X1...XT } and predicts the un-normalized class score:
P(X_1, ..., X_T; Θ) = G( F(X_1; Θ), F(X_2; Θ), ..., F(X_T; Θ) )    (1)
where F(·; Θ) is the 2D-CNN with learnable parameters Θ. The consensus function G reduces the frame-level predictions to a final prediction. One common practice for G is the averaging operation. The major drawback is that this cannot capture the order of the frames, so the network performs poorly on datasets that contain temporally related labels (e.g. “turning left”, “moving forward”, etc.). An LSTM (Hochreiter & Schmidhuber, 1997) can also be used as G to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019). A few works have recently been proposed to model temporal causality, using a bag-of-features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), and depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work we hypothesize that an input-dependent fusion of framewise features is beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition.
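As a point of reference, a minimal sketch of this average-consensus baseline (Eq. 1) might look as follows; the ResNet18 backbone and the 174-class output are placeholders chosen to match the Something-Something label set, not a prescribed configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class AverageConsensus(nn.Module):
    """Apply a shared 2D-CNN F to each of the T frames and average the logits (Eq. 1)."""
    def __init__(self, num_classes=174):
        super().__init__()
        self.backbone = resnet18(num_classes=num_classes)

    def forward(self, frames):                          # frames: (N, T, 3, H, W)
        n, t = frames.shape[:2]
        logits = self.backbone(frames.flatten(0, 1))    # (N*T, num_classes)
        return logits.view(n, t, -1).mean(dim=1)        # consensus G = temporal average
```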
Adaptive Temporal Fusion. Consider a single 2D convolutional layer: y_t = φ(W_x ∗ x_t + b_x), where x_t ∈ R^{c×h×w} denotes the input feature map at time step t with c channels and spatial dimensions h × w, and y_t ∈ R^{c'×h'×w'} is the output feature map. W_x ∈ R^{c'×k×k×c} denotes the convolution filters (with kernel size k × k) and b_x ∈ R^{c'} is the bias. We use “∗” for the convolution operation. φ(·) is the combination of batchnorm and non-linear functions (e.g. ReLU (Nair & Hinton, 2010)).
We introduce a policy network consisting of two fully-connected layers and a ReLU function, designed to adaptively select channels for keeping, reusing or skipping. As shown in Figure 1, at time t, we first generate feature vectors v_{t−1}, v_t ∈ R^c from the history feature map x_{t−1} and the current feature map x_t via global average pooling. Then the policy network predicts:

p_t = g(v_{t−1}, v_t; Θ_g)    (2)
where p_t ∈ {0, 1, 2}^{c'} is a channel-wise policy (choosing “keep”, “reuse” or “skip”) used to generate the output feature map: if p_t^i = 0, the i-th channel of the output feature map is computed via the normal convolution; if p_t^i = 1, it reuses the i-th channel of the feature map y_{t−1} which has already been computed at time t − 1; otherwise, the i-th channel is simply padded with zeros. Formally, this output feature map can be written as ỹ_t = f(y_{t−1}, y_t, p_t), where the i-th channel is:

ỹ_t^i = 1[p_t^i = 0] · y_t^i + 1[p_t^i = 1] · y_{t−1}^i    (3)
here 1 [·] is the indicator function. In Figure 1, the policy network instructs the convolution layer to only compute the first and fourth channels, reuses the second channel of the history feature and skips the third channel. Features from varied time steps are adaptively fused along the channel dimension.
Adaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal receptive field grows linearly with the depth of the network, as more features from different time steps are fused when going deeper. Our design can be seen as a general methodology covering many state-of-the-art 2D-CNN approaches: if we discard "skip" and use a predefined fixed policy, it becomes the online temporal fusion in Lin et al. (2019); if the policy only chooses between "skip" and "keep", it becomes a dynamic pruning method (Gao et al., 2018; Hua et al., 2019). Our design is a generalized approach taking both temporal modelling and efficiency into consideration.
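To make the mechanism concrete, the sketch below implements one fusion block along the lines of Eq. (2)-(3). For clarity it computes every channel densely and masks the result afterwards, so it only illustrates the arithmetic; the actual savings come from not computing the masked channels. The hard argmax stands in for the Gumbel-Softmax sampling used during training, and all class and argument names are our own.

```python
import torch
import torch.nn as nn

class AdaptiveFusionBlock(nn.Module):
    """Illustrative keep/reuse/skip fusion following Eq. (2)-(3)."""
    def __init__(self, in_channels, out_channels, hidden=1024):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.post = nn.Sequential(nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True))
        # two-layer policy network g(.) that outputs 3 logits per output channel
        self.policy = nn.Sequential(
            nn.Linear(2 * in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 3 * out_channels),
        )
        self.out_channels = out_channels

    def forward(self, x_prev, x_curr, y_prev):
        # x_prev / y_prev: input / output feature maps of the previous frame
        # (at t = 0 the caller can simply pass zero tensors)
        v = torch.cat([x_prev.mean(dim=(2, 3)), x_curr.mean(dim=(2, 3))], dim=1)
        logits = self.policy(v).view(-1, self.out_channels, 3)
        p = logits.argmax(dim=-1)                    # 0 = keep, 1 = reuse, 2 = skip
        y_curr = self.post(self.conv(x_curr))        # dense compute for illustration only
        keep = (p == 0).float()[:, :, None, None]
        reuse = (p == 1).float()[:, :, None, None]
        return keep * y_curr + reuse * y_prev        # skipped channels remain zero
```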
Complexity Analysis. To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), which is a hardware-independent metric widely used in the field of efficient action recognition1 (Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019). To compute the savings from the layers before and after the policy network, we add another convolution after ỹ_t with kernel W_y ∈ R^{c''×k'×k'×c'} and bias b_y ∈ R^{c''}. The total FLOPS for each convolution is:

m_x = c' · h' · w' · (k · k · c + 1)
m_y = c'' · h'' · w'' · (k' · k' · c' + 1)    (4)
When the policy is applied, only those output channels used in time t or going to be reused in time t+ 1 need to be computed in the first convolution layer, and only the channels not skipped in time t count for input feature maps for the second convolution layer. Hence the overall FLOPS is:
M = Σ_{τ=0}^{T−1} [ (1/c') Σ_{i=0}^{c'−1} 1[ p_τ^i · (p_{τ+1}^i − 1) = 0 ] · m_x + ( 1 − (1/c') Σ_{i=0}^{c'−1} 1( p_τ^i = 2 ) ) · m_y ]    (5)

where the first summand counts the channels that are kept at τ or reused at τ + 1 (the FLOPS of the first convolution at time τ), and the second summand counts the channels that are not skipped at τ (the FLOPS of the second convolution at time τ).
Thus when the policy network skips more channels or reuses channels that are already computed in the previous time step, the FLOPS for those two convolution layers can be reduced proportionally.
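A rough per-block FLOPS estimate following Eq. (4)-(5) could be computed as below, given the hard decisions for each frame; treating the step after the last frame as all-skip is our own assumption, since Eq. (5) leaves that boundary case implicit.

```python
def block_flops(policies, m_x, m_y):
    """Per-block FLOPS following Eq. (4)-(5).
    policies: list of length T of integer NumPy arrays of shape (c',),
              the per-channel action at each frame (0=keep, 1=reuse, 2=skip).
    m_x, m_y: dense FLOPS of the first / second convolution of the block."""
    T = len(policies)
    total = 0.0
    for tau in range(T):
        p_now = policies[tau]
        # after the last frame nothing can be reused, so treat tau+1 as all-skip
        p_next = policies[tau + 1] if tau + 1 < T else p_now * 0 + 2
        frac_first = ((p_now == 0) | (p_next == 1)).mean()   # kept now or reused at tau+1
        frac_second = 1.0 - (p_now == 2).mean()              # non-skipped channels feed the next conv
        total += frac_first * m_x + frac_second * m_y
    return total
```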
Loss functions. We take the average of framewise predictions as the video prediction and minimize:
L = Σ_{(x,y)∼D_train} [ −y log(P(x)) + λ · Σ_{i=0}^{B−1} M_i ]    (6)
1Latency is another important measure for efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research.
The first term is the cross entropy between one-hot encoded ground truth labels y and predictions P(x). The second term is the FLOPS measure over all B temporal fusion blocks in the network. In this way, our network learns to achieve both accuracy and efficiency at a trade-off controlled by λ.
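A compact sketch of this objective is shown below; flops_terms stands for the differentiable per-block FLOPS estimates M_i obtained from the soft policy probabilities, and the function name is ours.

```python
import torch.nn.functional as F

def adafuse_objective(video_logits, labels, flops_terms, lam=0.1):
    """Eq. (6): cross-entropy on the averaged video prediction plus a weighted FLOPS penalty."""
    return F.cross_entropy(video_logits, labels) + lam * sum(flops_terms)
```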
Discrete policies for “keep”, “reuse” or “skip” shown in Eq. 3 and Eq. 5 make L non-differentiable hence hard to optimize. One common practice is to use a score function estimator (e.g. REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samplings, but the high variance of the estimator makes the training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use Gumbel-Softmax Estimator to enable efficient end-to-end optimization.
Training using Gumbel Softmax Estimator. Specifically, the policy network first generates a logit q ∈ R^3 for each channel in the output feature map, and then we use Softmax to derive a normalized categorical distribution: π = {r_i | r_i = exp(q_i) / (exp(q_0) + exp(q_1) + exp(q_2))}. With the Gumbel-Max trick, discrete samples from the distribution π can be drawn as (Jang et al., 2016): r̂ = argmax_i (log r_i + G_i), where G_i = −log(−log U_i) is a standard Gumbel distribution with i.i.d. U_i sampled from a uniform distribution Unif(0, 1). Since the argmax operator is not differentiable, the Gumbel Softmax distribution is used as a continuous approximation. In the forward pass we represent the discrete sample r̂ as a one-hot encoded vector, and in back-propagation we relax it to a real-valued vector R = {R_0, R_1, R_2} via Softmax as follows:

R_i = exp((log r_i + G_i)/τ) / Σ_{j=0}^{2} exp((log r_j + G_j)/τ)    (7)
where τ is a temperature factor controlling the “smoothness” of the distribution: as τ → ∞, R converges to a uniform distribution, and as τ → 0, R becomes a one-hot vector. We set τ = 0.67 during training.
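A minimal straight-through Gumbel-Softmax sketch matching Eq. (7) is given below. Feeding the raw logits q instead of log r is equivalent up to a constant shift, and PyTorch's built-in torch.nn.functional.gumbel_softmax(logits, tau, hard=True) offers the same behaviour if preferred.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_action(logits, tau=0.67):
    """Straight-through Gumbel-Softmax over the three actions (Eq. 7).
    logits: (..., 3) raw scores q for keep / reuse / skip."""
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)           # relaxed sample R
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)     # discrete one-hot sample
    return y_hard + y_soft - y_soft.detach()                       # one-hot forward, soft backward
```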
Network Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN model. Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019). For BN-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and the second convolution layers in each “residual block”/“inverted residual block”. We denote our model as AdaFuseMethodBackbone, where the “Backbone” is chosen from {“R18” (ResNet18), “R50” (ResNet50), “Inc” (BN-Inception), “Eff” (EfficientNet)}, and the “Method” can be {“TSN”, “TSM”, “TSM+Last”}. More details can be found in the following section.
4 EXPERIMENTS
We first show that AdaFuse can significantly improve the accuracy and efficiency of ResNet18, BN-Inception and EfficientNet, outperforming other baselines by a large margin on Something-V1. Then, on all datasets, AdaFuse with ResNet18 / ResNet50 consistently outperforms the corresponding base models. We further propose two instantiations of AdaFuse on TSM (Lin et al., 2019) to compare with state-of-the-art approaches on Something V1 & V2: AdaFuseTSMR50 saves over 40% of FLOPS at a comparable classification score, and AdaFuseTSM+LastR50 outperforms state-of-the-art methods in accuracy. Finally, we perform comprehensive ablation studies and quantitative analysis to verify the effectiveness of our adaptive temporal fusion.
Datasets. We evaluate AdaFuse on Something-Something V1 (Goyal et al., 2017) & V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and a subset of Kinetics (Kay et al., 2017). Something V1 (98k videos) & V2 (194k videos) are two large-scale datasets sharing 174 human action labels (e.g. pretend to pick something up). Jester (Materzynska et al., 2019) has 27 annotated classes for hand gestures, with 119k / 15k videos in training / validation set. Mini-Kinetics (assembled by Meng et al. (2020)) is a subset of full Kinetics dataset (Kay et al., 2017) containing 121k videos for training and 10k videos for testing across 200 action classes.
Implementation details. To make a fair comparison, we carefully follow the training procedure in Lin et al. (2019). We uniformly sample T = 8 frames from each video. The input dimension for the network is 224× 224. Random scaling and cropping are used as data augmentation during training (and we further adopt random flipping for Mini-Kinetics). Center cropping is used during inference. All our networks are using ImageNet pretrained weights. We follow a step-wise learning rate scheduler with the initial learning rate as 0.002 and decay by 0.1 at epochs 20 & 40. To train
our adaptive temporal fusion approach, we set the efficiency term λ = 0.1. We train all the models for 50 epochs with a batch-size of 64, where each experiment takes 12∼ 24 hours on 4 Tesla V100 GPUs. We report the number of parameters used in each method, and measure the averaged FLOPS and Top1/Top5 accuracy for all the samples from each testing dataset.
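One common way to realize this uniform sampling is TSN-style segment sampling, sketched below; the released implementation may use a slightly different scheme, so treat this as an assumption.

```python
import numpy as np

def sample_frame_indices(num_frames, T=8, training=True):
    """Segment-based uniform sampling: one index per equal-length segment,
    random within the segment at training time, segment centre at test time."""
    edges = np.linspace(0, num_frames, T + 1)
    if training:
        return np.array([np.random.randint(int(edges[i]), max(int(edges[i + 1]), int(edges[i]) + 1))
                         for i in range(T)])
    return ((edges[:-1] + edges[1:]) / 2).astype(int)
```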
Adaptive Temporal Fusion improves 2D CNN Performance. On Something V1 dataset, we show AdaFuse ’s improvement upon 2D CNNs by comparing with several baselines as follows:
• TSN (Wang et al., 2016): Simply average frame-level predictions as the video-level prediction. • CGNet (Hua et al., 2019): A dynamic pruning method to reduce the computation cost of CNNs. • Threshold: We keep a fixed portion of channels based on their activation L1 norms and skip the
channels with smaller norms. It serves as a baseline for efficient recognition. • RANDOM: We use temporal fusion with a randomly sampled policy (instead of the learned
policy distribution). The distribution is chosen to match the FLOPS of the adaptive methods. • LSTM: Updates per-frame predictions with the hidden states of an LSTM and averages all predictions as
the video-level prediction.
We implement all the methods using publicly available code and apply adaptive temporal fusion to TSN with ResNet18, BN-Inception and EfficientNet backbones, denoting them as AdaFuseTSNR18, AdaFuseTSNInc and AdaFuseTSNEff-x respectively (“x” stands for the different scales of the EfficientNet backbones). As shown in Table 1, AdaFuseTSNR18 uses similar FLOPS to the efficient methods (“CGNet” and “Threshold”) but has a large improvement in classification accuracy. Specifically, AdaFuseTSNR18 and AdaFuseTSNInc outperform the corresponding TSN models by more than 20% in Top-1 accuracy, while using only 74% of the FLOPS. Interestingly, compared to TSN, even temporal fusion with a random policy achieves an absolute gain of 12.7% in accuracy, which shows that temporal fusion can greatly improve the action recognition performance of 2D CNNs. Additionally equipped with the adaptive policy, AdaFuseTSNR18 obtains a further 9.4% improvement in classification. LSTM is the most competitive baseline in terms of accuracy, while AdaFuseTSNR18 has an absolute gain of 8.5% in accuracy and uses only 70% of the FLOPS. When using a more efficient architecture, as shown in Table 2, our approach can still reduce the FLOPS by 10% while improving the accuracy by a large margin. To further validate that AdaFuse is model-agnostic and robust, we conduct extensive experiments using ResNet18 and
ResNet50 backbones on Something V1 & V2, Jester and Mini-Kinetics. As shown in Table 3, AdaFuseTSNR18 and AdaFuseTSNR50 consistently outperform their baseline TSN and LSTM models with a 35% saving in FLOPS on average. Our approach achieves large gains in accuracy and efficiency on temporally rich datasets like Something V1 & V2 and Jester. When it comes to Mini-Kinetics, AdaFuse still achieves better accuracy with a 20%∼33% computation reduction. Comparison with Adaptive Inference Method. We compare our approach with AR-Net (Meng et al., 2020), which adaptively chooses frame resolutions for efficient inference. As shown in Table 4, on Something V1, Jester and Mini-Kinetics, we achieve a better accuracy-efficiency trade-off than AR-Net while using 40% fewer parameters. On a temporally rich dataset like Something-V1, our approach attains the largest improvement, which shows AdaFuseTSNR50’s capability for strong temporal modelling.
Comparison with State-of-the-Art Methods. We apply adaptive temporal fusion with different backbones (ResNet50 (He et al., 2016), BN-Inception (Ioffe & Szegedy, 2015)) and designs (TSN (Wang et al., 2016), TSM (Lin et al., 2019)) and compare with state-of-the-art methods on Something V1 & V2. As shown in Table 5, using BN-Inception as the backbone, AdaFuseTSNInc is 4% better than “TRNMultiscale” (Zhou et al., 2018) in accuracy, using only 75% of the FLOPS. AdaFuseTSNR50 with ResNet50 can even outperform the 3D CNN method “I3D” (Carreira & Zisserman, 2017) and the hybrid 2D/3D CNN method “ECO” (Zolfaghari et al., 2018) with far fewer FLOPS.
As for adaptive temporal fusion on “TSM” (Lin et al., 2019), AdaFuseTSMR50 achieves more than 40% savings in computation but at a 1% loss in accuracy (Table 5). We believe this is because TSM uses the temporal shift operation, which can be seen as a variant of temporal fusion; too much temporal fusion can cause performance degradation due to weaker spatial modelling capability. As a remedy, we adopt adaptive temporal fusion only in the last block of TSM to capture high-level semantics (more intuition can be found later in our visualization experiments) and denote it as AdaFuseTSM+LastR50. On the Something V1 & V2 datasets, AdaFuseTSM+LastR50 outperforms TSM and all other state-of-the-art methods in accuracy with a 5% saving in FLOPS compared to TSM. From our experiments, we observe that the performance of adaptive temporal fusion depends on the position of the shift modules in TSM; optimizing the position of such modules through additional regularization could help us not only achieve better accuracy but also lower the number of parameters. We leave this as interesting future work.
We depict the accuracy, computation cost and model sizes in Figure 2. All the results are computed on the Something V1 validation set. The graph shows GFLOPS / accuracy on the x / y-axes, and the diameter of each data point is proportional to the number of model parameters. AdaFuse (blue points) achieves the best accuracy-efficiency trade-off at a model size comparable to other 2D CNN approaches. Once again, this shows that AdaFuse is an effective and efficient design for action recognition.
Policy Visualizations. Figure 3 shows the overall policy (“Skip”, “Reuse” and “Keep”) differences across all datasets. We focus on the quotient “Reuse / Keep” as it indicates the mixture ratio for feature fusion. The quotients on Something V1&V2 and Jester (0.694, 0.741 and 0.574, respectively) are much higher than on Mini-Kinetics (0.232). This is probably because the first three datasets contain richer temporal relationships than Kinetics. Moreover, Jester has the highest skipping percentage, which indicates that many actions in this dataset can be correctly recognized with few channels: training on Jester is biased towards optimizing for efficiency because the accuracy loss is very low. These distinctive policy patterns reflect different characteristics of the datasets, suggesting that our proposed approach could also serve as a “dataset inspector”.
Figure 4 shows a more fine-grained policy distribution on Something V2. We plot the policy usage in each residual block of the ResNet50 architecture (shown in light red/orange/blue) and use 3rd-order polynomials to estimate the trend of each policy (shown as black dashed curves). To further study the time-sensitivity of the policies, we also count the number of channels whose policies stay unchanged across the frames of a video (shown in dark red/orange/blue). We find that earlier layers tend to skip more and reuse/keep less, whereas later layers do the opposite. The first several convolution blocks normally capture low-level feature maps with large spatial sizes, so the “information density” along the channel dimension is lower, which results in more redundancy across channels. Later blocks often capture high-level semantics and their feature maps are smaller in the spatial dimensions, so the “semantic density” is higher and fewer channels are skipped. In addition, low-level features change quickly across frames (shades, lighting intensity) whereas high-level semantics change slowly (e.g. “kicking soccer”), which is why more features can be reused in later layers to avoid recomputing the same semantics. As for time-sensitivity, earlier layers tend to be less sensitive and later layers more so. We find that “reuse” is the most time-sensitive policy, as the “Reuse (Instance)” ratio is very low, which again shows that the adaptive temporal fusion is working as intended. We believe these findings will provide insights for future designs of effective temporal fusion.
How does the adaptive policy affect the performance? We consider AdaFuseTSNR18 on the Something V1 dataset and break down the contributions of the “skip”, “reuse” and adaptive (Ada.) policy-learning components. As shown
Table 6: Effect of different policies (using AdaFuseTSNR18 ) on Something V1 dataset.
Method           Skip   Reuse   Ada.   FLOPS   Top1
TSN               ✗      ✗      ✗     14.6G   14.8
Ada. Skip         ✓      ✗      ✓      6.6G    9.5
Ada. Reuse        ✗      ✓      ✓     13.8G   36.3
Random            ✓      ✓      ✗     10.4G   27.5
AdaFuseTSNR18     ✓      ✓      ✓     10.3G   36.9
Table 7: Effect of hidden sizes and efficient weights on the performance of AdaFuseTSM+LastR50 on SthV2.
#Hidden Units   λ       #Params   FLOPS    Top1    Skip   Reuse
1024            0.050   39.1M     31.53G   59.71   13%    14%
1024            0.075   39.1M     31.29G   59.75   15%    13%
1024            0.100   39.1M     31.04G   59.40   18%    12%
2048            0.100   54.3M     30.97G   59.96   21%    10%
4096            0.100   84.7M     31.04G   60.00   25%    8%
in Table 6, “Ada. Skip” saves 55% of FLOPS compared to TSN but at a large degradation in accuracy. This shows that naively skipping channels does not give better classification performance. The “Ada. Reuse” approach brings a 21.5% absolute gain in accuracy, which shows the importance of temporal fusion; however, it fails to save much FLOPS due to the absence of the skipping operation. Combining “Keep” with both “Skip” and “Reuse” via just a random policy already achieves a better trade-off than TSN, and with the adaptive learning approach, AdaFuseTSNR18 reaches the highest accuracy with the second-best efficiency. In summary, the “Skip” operation contributes the most to computation efficiency, the “Reuse” operation boosts classification accuracy, and the adaptive policy ties the whole system together and achieves the best performance.
How to achieve a better performance? Here we investigate different settings to improve the performance of AdaFuseTSM+LastR50 on the Something V2 dataset. As shown in Table 7, increasing λ improves efficiency but may result in accuracy degradation. Enlarging the number of hidden units in the policy network yields a better overall performance: as we increase the size from 1024 to 4096, the accuracy keeps increasing. When the policy network grows larger, it learns to skip more to reduce computation and to reuse history features wisely for recognition. Note, however, that the model size grows almost linearly with the hidden layer size, which adds a considerable overhead to the FLOPS. As a compromise, we choose λ = 0.075 and a hidden size of 1024 for AdaFuseTSM+LastR50. We leave the design of a more advanced and delicate policy module for future work.
Runtime/Hardware. Sparse convolutional kernels are often less efficient on current hardware, e.g., GPUs. However, we strongly believe that it is important to explore models for efficient video action recognition, which might guide the direction of new hardware development in the years to come. Furthermore, we also expect wall-clock speed-ups at inference via efficient CUDA implementations, which we anticipate will be developed.
5 CONCLUSIONS
We have shown the effectiveness of adaptive temporal fusion for efficient video recognition. Comprehensive experiments on four challenging and diverse datasets present a broad spectrum of accuracy-efficiency trade-offs. Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of architectures for video recognition tasks.
Acknowledgements. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This work is also partly supported by the MIT-IBM Watson AI Lab.
Disclaimer. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government. | 1. What are the strengths and weaknesses of the proposed AdaFuse method for action recognition?
2. How does the reviewer assess the novelty and effectiveness of the proposed approach compared to recent methods in the field?
3. Are there any concerns regarding the clarity and details of the paper's content, particularly in terms of the policy network and its output?
4. Does the reviewer have any suggestions for improving the proposed method, such as using a different base network or modifying the policy net?
5. Are there any other comments or questions that the reviewer has regarding the paper, such as the correctness of Figure 1 or the inclusion of more recent works in the comparison? | Review | Review
General
This paper proposes an adaptive temporal fusion network called AdaFuse for action recognition, which adaptively removes temporal redundancy and reuses past features for accuracy and efficiency. I listed the Pros and Cons I found in the paper below as well as some questions to clarify some of the details.
Pros
The idea of learning a decision policy to dynamically determine whether channel-wise features at time t are calculated normally, reused from t−1, or skipped, is interesting and reasonable.
The experimental results show that the proposed method achieves good accuracy with reasonable computational budget.
The ablation study in Table 4 reveals that the performance is greatly affected by the policy and that it is important to fuse the features from different frames to capture the temporal dependencies.
Cons
The proposed method is not compared with some of the recent methods such as [1-3] ([4] is optional because the publication date is very close to the ICLR 2021 submission deadline). Especially for the Jester and Mini-Kinetics datasets, the proposed method is compared with only TSN, which is old and weak as a baseline as it does not incorporate temporal information.
In Table 3, it seems that the proposed method achieves good accuracy, but I am afraid that it is just because of the strong base network, TSM. Merely adding AdaFuse to TSM indeed saves some computation but degrades the performance as described in the paper. The proposed remedy indeed slightly improves the accuracy but it requires many more parameters compared to the vanilla TSM. Overall, I find it beneficial to use the proposed method on top of simple base networks such as TSN, but the benefit of using the proposed method on top of strong base networks such as TSM may be marginal. Combined with point 1 above, I am not well convinced of the effectiveness of the proposed method.
Some of the important details are not clear. I would appreciate if the authors could answer the questions I listed below.
Questions
Is it necessary to use Gumbel softmax? I think there are two kinds of tricks involved in Gumbel softmax. One is a trick for sampling from a categorical distribution, and the other is a trick for making the operation differentiable. In my understanding, which may be wrong, the required characteristic for the present method is the latter one, and the sampling from the categorical distribution is not necessarily required. In this case, I think simply using q instead of log r + G in equation (7) is enough.
Related to the point above, please clarify the type of output (hard or soft) of the policy net. The sentence after equation (2) says the output is integer values (0, 1, or 2), while the sentence before equation (7) says it is a real-valued vector.
Suppose p_t^i = 1 (reuse) and p_{t-1}^i = 1 (reuse again). In this case, is y_t^i copied from y_{t-2}^i? Or is the feature map of the i-th channel at time t−1 calculated on the fly for "reusing" at time t? In other words, if the policy for a channel is "reuse" n consecutive times, does the method take the feature from n frames before?
Other comments
Figure 1 may be incorrect or misleading. I think p_t, the output of the policy net, should go to the 2D Conv. block. Otherwise the block never knows which channel to compute at time t and which channel to reuse or skip.
[1] Sudhakaran+, Gate-Shift Networks for Video Action Recognition, CVPR 2020 [2] Martinez+, Action recognition with spatial-temporal discriminative filter banks, ICCV 2019 [3] Jiang+, STM: SpatioTemporal and Motion Encoding for Action Recognition, ICCV 2019 [4] Kwon+, MotionSqueeze: Neural Motion Feature Learning for Video Understanding, ECCV 2020 |
ICLR | Title
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition
Abstract
Temporal modelling is the key for efficient video action recognition. While understanding temporal information can improve recognition accuracy for dynamic actions, removing temporal redundancy and reusing past features can significantly save computation leading to efficient action recognition. In this paper, we introduce an adaptive temporal fusion network, called AdaFuse, that dynamically fuses channels from current and past feature maps for strong temporal modelling. Specifically, the necessary information from the historical convolution feature maps is fused with current pruned feature maps with the goal of improving both recognition accuracy and efficiency. In addition, we use a skipping operation to further reduce the computation cost of action recognition. Extensive experiments on Something V1&V2, Jester and Mini-Kinetics show that our approach can achieve about 40% computation savings with comparable accuracy to state-of-the-art methods. The project page can be found at https://mengyuest.github.io/AdaFuse/
1 INTRODUCTION
Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource constrained applications due to the heavy computational burden of deep Convolutional Neural Network (CNN) models.
Motivated by the need of efficiency, extensive studies have been recently conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that there exists redundancy in CNN features which can significantly save computation leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: A set of still frames for a simple action (e.g. “Sleeping”) will have a higher redundancy comparing to a fast-changed action with rich interaction and deformation (e.g. “Pulling two ends of something so that it gets stretched”). Thus, based on the input we could compute just a subset of features, while the rest of the channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective for efficient action recognition by adaptively deciding what channels to compute or reuse, on a per instance basis, for recognizing complex actions.
In this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., dynamically decides which channels to keep, reuse or skip per layer and per instance) with the goal of improving both recognition
∗Email: [email protected]. This work was done while Yue was an AI Resident at IBM Research.
accuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both competitive performance and resource efficiency required for action recognition. Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces the computation without accuracy loss.
The main contributions of our work are as follows:
• We propose a novel approach that automatically determines which channels to keep, reuse or skip per layer and per target instance for efficient action recognition.
• Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of 2D CNN-based action recognition architectures.
• The overall policy distribution can be seen as an indicator of dataset characteristics, and the block-level distribution can provide guidance for future architecture designs.
• We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods.
2 RELATED WORK
Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by either applying 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical-flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach for CNN-based action recognition is the use of 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules such as temporal averaging in TSN (Wang et al., 2016), a bag of features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017) and ResNet3D (Hara et al., 2018), that use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for spatial-temporal decomposition of 3D kernels in Inception-based architectures.
While extensive studies have been conducted in the last few years, limited efforts have been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce the complexity by decomposing the 3D convolution into 2D spatial convolution and 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). Our approach is most related to the latter which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce the computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make it more efficient for action recognition.
Conditional Computation. Many conditional computation methods have been recently proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works have been
proposed that add decision branches to different layers of CNNs to learn whether to exit the network for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through finetuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) or dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation for a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, Channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus on only dropping unimportant channels, our proposed approach also reuses history features when necessary to make the network capable for strong temporal modelling.
3 METHODOLOGY
In this section, we first show the general approach using 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computation cost. Finally, we describe the end-to-end optimization and network specifications.
Using 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes uniformly sampled T frames {X1...XT } and predicts the un-normalized class score:
P(X_1, ..., X_T; Θ) = G( F(X_1; Θ), F(X_2; Θ), ..., F(X_T; Θ) )    (1)
where F(·; Θ) is the 2D-CNN with learnable parameters Θ. The consensus function G reduces the frame-level predictions to a final prediction. One common practice for G is the averaging operation. The major drawback is that this cannot capture the order of the frames, so the network performs poorly on datasets that contain temporally related labels (e.g. “turning left”, “moving forward”, etc.). An LSTM (Hochreiter & Schmidhuber, 1997) can also be used as G to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019). A few works have recently been proposed to model temporal causality, using a bag-of-features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), and depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work we hypothesize that an input-dependent fusion of framewise features is beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition.
Adaptive Temporal Fusion. Consider a single 2D convolutional layer: y_t = φ(W_x ∗ x_t + b_x), where x_t ∈ R^{c×h×w} denotes the input feature map at time step t with c channels and spatial dimensions h × w, and y_t ∈ R^{c'×h'×w'} is the output feature map. W_x ∈ R^{c'×k×k×c} denotes the convolution filters (with kernel size k × k) and b_x ∈ R^{c'} is the bias. We use “∗” for the convolution operation. φ(·) is the combination of batchnorm and non-linear functions (e.g. ReLU (Nair & Hinton, 2010)).
We introduce a policy network consisting of two fully-connected layers and a ReLU function, designed to adaptively select channels for keeping, reusing or skipping. As shown in Figure 1, at time t, we first generate feature vectors v_{t−1}, v_t ∈ R^c from the history feature map x_{t−1} and the current feature map x_t via global average pooling. Then the policy network predicts:

p_t = g(v_{t−1}, v_t; Θ_g)    (2)
where p_t ∈ {0, 1, 2}^{c'} is a channel-wise policy (choosing “keep”, “reuse” or “skip”) used to generate the output feature map: if p_t^i = 0, the i-th channel of the output feature map is computed via the normal convolution; if p_t^i = 1, it reuses the i-th channel of the feature map y_{t−1} which has already been computed at time t − 1; otherwise, the i-th channel is simply padded with zeros. Formally, this output feature map can be written as ỹ_t = f(y_{t−1}, y_t, p_t), where the i-th channel is:

ỹ_t^i = 1[p_t^i = 0] · y_t^i + 1[p_t^i = 1] · y_{t−1}^i    (3)
here 1 [·] is the indicator function. In Figure 1, the policy network instructs the convolution layer to only compute the first and fourth channels, reuses the second channel of the history feature and skips the third channel. Features from varied time steps are adaptively fused along the channel dimension.
Adaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal receptive field grows linearly with the depth of the network, as more features from different time steps are fused when going deeper. Our design can be seen as a general methodology covering many state-of-the-art 2D-CNN approaches: if we discard "skip" and use a predefined fixed policy, it becomes the online temporal fusion in Lin et al. (2019); if the policy only chooses between "skip" and "keep", it becomes a dynamic pruning method (Gao et al., 2018; Hua et al., 2019). Our design is a generalized approach taking both temporal modelling and efficiency into consideration.
Complexity Analysis. To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), a hardware-independent metric widely used in the field of efficient action recognition1 (Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019). To account for savings in the layers before and after the policy network, we add another convolution after $\tilde{y}_t$ with kernel $W_y \in \mathbb{R}^{c'' \times k' \times k' \times c'}$ and bias $b_y \in \mathbb{R}^{c''}$. The total FLOPS for each convolution is:

$m_x = c' \cdot h' \cdot w' \cdot (k \cdot k \cdot c + 1), \quad m_y = c'' \cdot h'' \cdot w'' \cdot (k' \cdot k' \cdot c' + 1)$   (4)
When the policy is applied, only those output channels used in time t or going to be reused in time t+ 1 need to be computed in the first convolution layer, and only the channels not skipped in time t count for input feature maps for the second convolution layer. Hence the overall FLOPS is:
$M = \sum_{\tau=0}^{T-1} \Bigg[ \underbrace{\frac{1}{c'} \sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\big[p_\tau^i \cdot (p_{\tau+1}^i - 1) = 0\big]}^{\text{keep at } \tau \text{ or reuse at } \tau+1} \cdot m_x}_{\text{FLOPS from the first conv at time } \tau} \;+\; \underbrace{\Big(1 - \frac{1}{c'} \sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\big(p_\tau^i = 2\big)}^{\text{skip at } \tau}\Big) \cdot m_y}_{\text{FLOPS from the second conv at time } \tau} \Bigg]$   (5)
Thus when the policy network skips more channels or reuses channels that are already computed in the previous time step, the FLOPS for those two convolution layers can be reduced proportionally.
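As a sanity check on Eq. (5), the per-layer cost can be tallied with a few lines of plain Python; the function below (adafuse_flops, a hypothetical helper we introduce only for illustration) assumes the policy at the last time step has no successor to reuse its channels.

def adafuse_flops(policies, mx, my):
    # policies: list of length T; each entry lists {0: keep, 1: reuse, 2: skip} for the c' channels.
    # mx, my: full FLOPS of the first / second convolution from Eq. (4).
    T, c = len(policies), len(policies[0])
    total = 0.0
    for t in range(T):
        nxt = policies[t + 1] if t + 1 < T else [2] * c   # nothing is reused after the last frame
        first = sum(1 for i in range(c) if policies[t][i] == 0 or nxt[i] == 1) / c
        second = sum(1 for i in range(c) if policies[t][i] != 2) / c
        total += first * mx + second * my
    return total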
Loss functions. We take the average of framewise predictions as the video prediction and minimize:
$L = \sum_{(x, y) \sim \mathcal{D}_{\text{train}}} \Big[ -y \log(P(x)) + \lambda \cdot \sum_{i=0}^{B-1} M_i \Big]$   (6)
1Latency is another important measure for efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research.
The first term is the cross entropy between the one-hot encoded ground truth labels y and the predictions P(x). The second term is the FLOPS measure for all B temporal fusion blocks in the network. In this way, the network learns to balance accuracy and efficiency, with the trade-off controlled by λ.
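A minimal sketch of this objective is shown below; the name adafuse_loss is ours, and block_flops is assumed to hold one differentiable (policy-relaxed) FLOPS estimate per fusion block so the penalty can be backpropagated.

import torch.nn.functional as F

def adafuse_loss(video_logits, labels, block_flops, lam=0.1):
    # Eq. (6): cross entropy on the averaged video prediction plus a FLOPS penalty.
    return F.cross_entropy(video_logits, labels) + lam * sum(block_flops)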
Discrete policies for "keep", "reuse" or "skip", as used in Eq. 3 and Eq. 5, make L non-differentiable and hence hard to optimize. One common practice is to use a score-function estimator (e.g. REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samplings, but the high variance of the estimator makes training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use the Gumbel-Softmax estimator to enable efficient end-to-end optimization.
Training using Gumbel Softmax Estimator. Specifically, the policy network first generates a logit $q \in \mathbb{R}^3$ for each channel in the output feature map, and we then use Softmax to derive a normalized categorical distribution: $\pi = \{r_i \mid r_i = \frac{\exp(q_i)}{\exp(q_0) + \exp(q_1) + \exp(q_2)}\}$. With the Gumbel-Max trick, discrete samples from the distribution $\pi$ can be drawn as (Jang et al., 2016): $\hat{r} = \arg\max_i (\log r_i + G_i)$, where $G_i = -\log(-\log U_i)$ is a standard Gumbel distribution with i.i.d. $U_i$ sampled from a uniform distribution Unif(0, 1). Since the argmax operator is not differentiable, the Gumbel Softmax distribution is used as a continuous approximation. In the forward pass we represent the discrete sample $\hat{r}$ as a one-hot encoded vector, and in back-propagation we relax it to a real-valued vector $R = \{R_0, R_1, R_2\}$ via Softmax as follows:
$R_i = \frac{\exp((\log r_i + G_i)/\tau)}{\sum_{j=0}^{2} \exp((\log r_j + G_j)/\tau)}$   (7)

where τ is a temperature factor controlling the "smoothness" of the distribution: as $\tau \to \infty$, R converges to a uniform distribution, and as $\tau \to 0$, R becomes a one-hot vector. We set τ = 0.67 during training.
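The sketch below shows a straight-through Gumbel-Softmax draw of the three-way policy; the function name is our own and the same behaviour is available through torch.nn.functional.gumbel_softmax(logits, tau=0.67, hard=True), so this is only meant to make Eq. (7) concrete.

import torch
import torch.nn.functional as F

def gumbel_softmax_policy(logits, tau=0.67):
    # logits: [..., 3] unnormalized scores per channel for keep / reuse / skip.
    # Adding a per-row constant does not change the softmax, so using raw logits
    # in place of log r_i in Eq. (7) is equivalent.
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    soft = F.softmax((logits + gumbels) / tau, dim=-1)                 # relaxed sample, Eq. (7)
    hard = F.one_hot(soft.argmax(dim=-1), num_classes=3).float()
    return hard + soft - soft.detach()                                 # one-hot forward, soft backward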
Network Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN model. Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016) and EfficientNet (Tan & Le, 2019). For BN-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and the second convolution layers in each "residual block"/"inverted residual block". We denote our model as AdaFuseMethodBackbone, where the "Backbone" is chosen from {"R18" (ResNet18), "R50" (ResNet50), "Inc" (BN-Inception), "Eff" (EfficientNet)}, and the "Method" can be {"TSN", "TSM", "TSM+Last"}. More details can be found in the following section.
4 EXPERIMENTS
We first show that AdaFuse can significantly improve the accuracy and efficiency of ResNet18, BN-Inception and EfficientNet, outperforming other baselines by a large margin on Something-V1. Then, on all datasets, AdaFuse with ResNet18 / ResNet50 consistently outperforms the corresponding base models. We further propose two instantiations of AdaFuse on TSM (Lin et al., 2019) to compare with state-of-the-art approaches on Something V1 & V2: AdaFuseTSMR50 saves over 40% of the FLOPS at a comparable classification score under the same computation budget, and AdaFuseTSM+LastR50 outperforms state-of-the-art methods in accuracy. Finally, we perform comprehensive ablation studies and quantitative analysis to verify the effectiveness of our adaptive temporal fusion.
Datasets. We evaluate AdaFuse on Something-Something V1 (Goyal et al., 2017) & V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019) and a subset of Kinetics (Kay et al., 2017). Something V1 (98k videos) & V2 (194k videos) are two large-scale datasets sharing 174 human action labels (e.g. pretend to pick something up). Jester (Materzynska et al., 2019) has 27 annotated classes for hand gestures, with 119k / 15k videos in training / validation set. Mini-Kinetics (assembled by Meng et al. (2020)) is a subset of full Kinetics dataset (Kay et al., 2017) containing 121k videos for training and 10k videos for testing across 200 action classes.
Implementation details. To make a fair comparison, we carefully follow the training procedure in Lin et al. (2019). We uniformly sample T = 8 frames from each video. The input dimension for the network is 224× 224. Random scaling and cropping are used as data augmentation during training (and we further adopt random flipping for Mini-Kinetics). Center cropping is used during inference. All our networks are using ImageNet pretrained weights. We follow a step-wise learning rate scheduler with the initial learning rate as 0.002 and decay by 0.1 at epochs 20 & 40. To train
our adaptive temporal fusion approach, we set the efficiency term λ = 0.1. We train all the models for 50 epochs with a batch-size of 64, where each experiment takes 12∼ 24 hours on 4 Tesla V100 GPUs. We report the number of parameters used in each method, and measure the averaged FLOPS and Top1/Top5 accuracy for all the samples from each testing dataset.
Adaptive Temporal Fusion improves 2D CNN Performance. On the Something V1 dataset, we show AdaFuse's improvement upon 2D CNNs by comparing with several baselines as follows:
• TSN (Wang et al., 2016): Simply average frame-level predictions as the video-level prediction.
• CGNet (Hua et al., 2019): A dynamic pruning method to reduce the computation cost of CNNs.
• Threshold: We keep a fixed portion of channels based on their activation L1 norms and skip the channels with smaller norms. It serves as a baseline for efficient recognition.
• RANDOM: We use temporal fusion with a randomly sampled policy (instead of the learned policy distribution). The distribution is chosen to match the FLOPS of the adaptive methods.
• LSTM: Update per-frame predictions with LSTM hidden states and average all predictions as the video-level prediction.
We implement all the methods using publicly available code and apply adaptive temporal fusion in TSN using ResNet18, BN-Inception and EfficientNet backbones, denoting them as AdaFuseTSNR18, AdaFuseTSNInc and AdaFuseTSNEff-x respectively ("x" stands for different scales of the EfficientNet backbones). As shown in Table 1, AdaFuseTSNR18 uses similar FLOPS to the efficient methods ("CGNet" and "Threshold") but achieves a large improvement in classification accuracy. Specifically, AdaFuseTSNR18 and AdaFuseTSNInc outperform the corresponding TSN models by more than 20% in Top-1 accuracy, while using only 74% of the FLOPS. Interestingly, compared to TSN, even temporal fusion with a random policy achieves an absolute gain of 12.7% in accuracy, which shows that temporal fusion can greatly improve the action recognition performance of 2D CNNs. Additionally equipped with the adaptive policy, AdaFuseTSNR18 gains a further 9.4% in classification accuracy. LSTM is the most competitive baseline in terms of accuracy, yet AdaFuseTSNR18 has an absolute gain of 8.5% in accuracy and uses only 70% of the FLOPS. When using a more efficient architecture, as shown in Table 2, our approach can still reduce 10% of the FLOPS while improving accuracy by a large margin. To further validate that AdaFuse is model-agnostic and robust, we conduct extensive experiments using ResNet18 and
ResNet50 backbones on Something V1 & V2, Jester and Mini-Kinetics. As shown in Table 3, AdaFuseTSNR18 and AdaFuseTSNR50 consistently outperform their baseline TSN and LSTM models with a 35% saving in FLOPS on average. Our approach yields large gains in accuracy and efficiency on temporally rich datasets like Something V1 & V2 and Jester. When it comes to Mini-Kinetics, AdaFuse can still achieve better accuracy with a 20%∼33% computation reduction. Comparison with Adaptive Inference Method. We compare our approach with AR-Net (Meng et al., 2020), which adaptively chooses frame resolutions for efficient inference. As shown in Table 4, on Something V1, Jester and Mini-Kinetics, we achieve a better accuracy-efficiency trade-off than AR-Net while using 40% fewer parameters. On a temporally rich dataset like Something-V1, our approach attains the largest improvement, which demonstrates AdaFuseTSNR50's capability for strong temporal modelling.
Comparison with State-of-the-Art Methods. We apply adaptive temporal fusion with different backbones (ResNet50 (He et al., 2016), BN-Inception (Ioffe & Szegedy, 2015)) and designs (TSN (Wang et al., 2016), TSM (Lin et al., 2019)) and compare with State-of-the-Art methods on Something V1 & V2. As shown in Table 5, using BN-Inception as backbone, AdaFuseTSNInc is 4% better than “TRNMultiscale” (Zhou et al., 2018) in accuracy, using only 75% of the FLOPS. AdaFuseTSNR50 with ResNet50 can even outperform 3D CNN method “I3D” (Carreira & Zisserman, 2017) and hybrid 2D/3D CNN method “ECO” (Zolfaghari et al., 2018) with much less FLOPS.
As for adaptive temporal fusion on "TSM" (Lin et al., 2019), AdaFuseTSMR50 achieves more than 40% savings in computation but at a 1% loss in accuracy (Table 5). We believe this is because TSM uses the temporal shift operation, which can be seen as a variant of temporal fusion, and too much temporal fusion could cause performance degradation due to weaker spatial modelling capability. As a remedy, we adopt adaptive temporal fusion only in the last block of TSM to capture high-level semantics (more intuition can be found later in our visualization experiments) and denote this variant as AdaFuseTSM+LastR50. On the Something V1 & V2 datasets, AdaFuseTSM+LastR50 outperforms TSM and all other state-of-the-art methods in accuracy, with a 5% saving in FLOPS compared to TSM. From our experiments, we observe that the performance of adaptive temporal fusion depends on the position of the shift modules in TSM, and optimizing the position of such modules through additional regularization could help us not only achieve better accuracy but also lower the number of parameters. We leave this as interesting future work.
We depict the accuracy, computation cost and model sizes in Figure 2. All the results are computed from Something V1 validation set. The graph shows GFLOPS / accuracy on x / y-axis and the diameter of each data point is proportional to the number of model parameters. AdaFuse (blue points) owns the best trade-off for accuracy and efficiency at a comparable model size to other 2D CNN approaches. Once again it shows AdaFuse is an effective and efficient design for action recognition.
Policy Visualizations. Figure 3 shows the overall policy ("Skip", "Reuse" and "Keep") differences across all datasets. We focus on the quotient "Reuse / Keep" as it indicates the mixture ratio for feature fusion. The quotients on the Something V1 & V2 and Jester datasets are very high (0.694, 0.741 and 0.574 respectively) compared to Mini-Kinetics (0.232). This is probably because the first three datasets contain more temporal relationships than Kinetics. Moreover, Jester has the highest percentage of skipping, which indicates that many actions in this dataset can be correctly recognized with few channels: Training on Jester is more biased towards optimizing for efficiency as the accuracy loss is very low. These distinctive policy patterns reflect different characteristics of the datasets, which suggests the potential of our approach to serve as a "dataset inspector".
Figure 4 shows a more fine-grained policy distribution on Something V2. We plot the policy usage in each residual block inside the ResNet50 architecture (shown in light red/orange/blue) and use 3rd-order polynomials to estimate the trend of each policy (shown in black dashed curves). To further study the time-sensitivity of the policies, we calculate the number of channels for which the policies stay unchanged across the frames in one video (shown in dark red/orange/blue). We find that earlier layers tend to skip more and reuse/keep less, and vice versa. The first several convolution blocks normally capture low-level feature maps of large spatial size, so the "information density" along the channel dimension is lower, which results in more redundancy across channels. Later blocks often capture high-level semantics and the feature maps are smaller in the spatial dimensions, so the "semantic density" is higher and fewer channels are skipped. In addition, low-level features change quickly across frames (shades, lighting intensity) whereas high-level semantics change slowly across frames (e.g. "kicking soccer"), which is why more features can be reused in later layers to avoid computing the same semantics again. As for time-sensitivity, earlier layers tend to be less sensitive and vice versa. We find that "reuse" is the most time-sensitive policy, as the "Reuse (Instance)" ratio is very low, which again shows the functioning of adaptive temporal fusion. We believe these findings will provide insights for future designs of effective temporal fusion.
How does the adaptive policy affect the performance? We consider AdaFuseTSNR18 on the Something V1 dataset and break down the contributions of the "skip", "reuse" and adaptive (Ada.) policy-learning components. As shown
Table 6: Effect of different policies (using AdaFuseTSNR18) on Something V1 dataset.

Method          Skip  Reuse  Ada.  FLOPS  Top1
TSN              ✗     ✗      ✗    14.6G  14.8
Ada. Skip        ✓     ✗      ✓     6.6G   9.5
Ada. Reuse       ✗     ✓      ✓    13.8G  36.3
Random           ✓     ✓      ✗    10.4G  27.5
AdaFuseTSNR18    ✓     ✓      ✓    10.3G  36.9
Table 7: Effect of hidden sizes and efficiency weights on the performance of AdaFuseTSM+LastR50 on SthV2.

#Hidden Units   λ      #Params  FLOPS   Top1   Skip  Reuse
1024            0.050  39.1M    31.53G  59.71  13%   14%
1024            0.075  39.1M    31.29G  59.75  15%   13%
1024            0.100  39.1M    31.04G  59.40  18%   12%
2048            0.100  54.3M    30.97G  59.96  21%   10%
4096            0.100  84.7M    31.04G  60.00  25%    8%
in Table 6, "Ada. Skip" saves 55% of the FLOPS compared to TSN but with a large degradation in accuracy. This shows that naively skipping channels does not yield better classification performance. The "Ada. Reuse" approach brings a 21.5% absolute gain in accuracy, which shows the importance of temporal fusion; however, it fails to save much FLOPS due to the absence of the skipping operation. Combining "Keep" with both "Skip" and "Reuse" via even a random policy already achieves a better trade-off than TSN, and with the adaptive learning approach, AdaFuseTSNR18 reaches the highest accuracy with the second-best efficiency. In summary, the "Skip" operation contributes the most to computational efficiency, the "Reuse" operation boosts classification accuracy, and the adaptive policy ties the whole system together to achieve the best performance.
How to achieve a better performance? Here we investigate different settings to improve the performance of AdaFuseTSM+LastR50 on the Something V2 dataset. As shown in Table 7, increasing λ improves efficiency but may degrade accuracy. Enlarging the number of hidden units in the policy network gives better overall performance: as we increase the size from 1024 to 4096, the accuracy keeps increasing. When the policy network grows larger, it learns to skip more to reduce computation and to reuse history features wisely for recognition. Note, however, that the model size grows almost linearly with the hidden layer size, which adds considerable overhead to the FLOPS. As a compromise, we choose λ = 0.075 and a hidden size of 1024 for AdaFuseTSM+LastR50. We leave the design of a more advanced policy module for future work.
Runtime/Hardware. Sparse convolutional kernels are often less efficient on current hardware, e.g., GPUs. However, we strongly believe that it is important to explore models for efficient video action recognition, which might guide the direction of new hardware development in the years to come. Furthermore, we also expect wall-clock speed-ups at inference time via efficient CUDA implementations, which we anticipate will be developed.
5 CONCLUSIONS
We have shown the effectiveness of adaptive temporal fusion for efficient video recognition. Comprehensive experiments on four challenging and diverse datasets present a broad spectrum of accuracy-efficiency models. Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of architectures for video recognition tasks.
Acknowledgements. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via DOI/IBC contract number D17PC00341. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This work is also partly supported by the MIT-IBM Watson AI Lab.
Disclaimer. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government. | 1. What is the main contribution of the paper regarding AdaFuse networks for action recognition?
2. What are the strengths of the proposed approach, particularly in terms of novelty and efficiency?
3. What are the weaknesses of the paper, especially regarding computational savings and comparisons with state-of-the-art methods?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? | Review | Review
In this work, the authors introduce an AdaFuse network for efficiency action recognition in videos. Specifically, they design a policy net to decide which channels should be kept, reused or skipped, according to the input features of two adjacent frames.
Strength
1 The paper is written well, and the organization is OK
2 The idea of adaptive temporal fusion is somehow novel and interesting
Weakness
1 How to save computation. I understand the general idea of saving computation if some channels are reused or skipped. However, in the training phase, the policy net produces the real-valued vector of Eq. (7), instead of the one-hot vector. In other words, the 'keep' entry for each channel is always used during training. Then, I guess computation saving is not claimed for training; it is for testing, right? How is testing done? Does the policy net produce the real-valued vector, which is then converted to a one-hot vector to save computation?
2 Missing SOTA. Compared with this paper, many recent approaches achieve competitive computation cost with better accuracy. This significantly reduces the potential value of this paper.
*Jiang et al., STM: SpatioTemporal and Motion Encoding for Action Recognition, ICCV 2019
*Li et al., TEA: Temporal Excitation and Aggregation for Action Recognition, CVPR 2020
*Sudhakaran et al., Gate-Shift Networks for Video Action Recognition, CVPR2020
*Liu et al., TEINet: Towards an efficient architecture for video recognition, AAAI 2020
3 Please correct the abstract. The experiments are performed on mini-Kinetics, rather than Kinetics. I also suggest that it would be better to evaluate the proposed method on full Kinetics to further show its effectiveness.
ICLR | Title
Predicting Inductive Biases of Pre-Trained Models
Abstract
Most current NLP systems are based on a pre-train-then-fine-tune paradigm, in which a large neural network is first trained in a self-supervised way designed to encourage the network to extract broadly-useful linguistic features, and then finetuned for a specific task of interest. Recent work attempts to understand why this recipe works and explain when it fails. Currently, such analyses have produced two sets of apparently-contradictory results. Work that analyzes the representations that result from pre-training (via “probing classifiers”) finds evidence that rich features of linguistic structure can be decoded with high accuracy, but work that analyzes model behavior after fine-tuning (via “challenge sets”) indicates that decisions are often not based on such structure but rather on spurious heuristics specific to the training set. In this work, we test the hypothesis that the extent to which a feature influences a model’s decisions can be predicted using a combination of two factors: The feature’s extractability after pre-training (measured using information-theoretic probing techniques), and the evidence available during finetuning (defined as the feature’s co-occurrence rate with the label). In experiments with both synthetic and naturalistic data, we find strong evidence (statistically significant correlations) supporting this hypothesis.
1 INTRODUCTION
Large pre-trained language models (LMs) (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020) have demonstrated impressive empirical success on a range of benchmark NLP tasks. However, analyses have shown that such models are easily fooled when tested on distributions that differ from those they were trained on, suggesting they are often “right for the wrong reasons” (McCoy et al., 2019). Recent research which attempts to understand why such models behave in this way has primarily made use of two analysis techniques: probing classifiers (Adi et al., 2017; Hupkes et al., 2018), which measure whether or not a given feature is encoded by a representation, and challenge sets (Cooper et al., 1996; Linzen et al., 2016; Rudinger et al., 2018), which measure whether model behavior in practice is consistent with use of a given feature. The results obtained via these two techniques currently suggest different conclusions about how well pre-trained representations encode language. Work based on probing classifiers has consistently found evidence that models contain rich information about syntactic structure (Hewitt & Manning, 2019; Bau et al., 2019; Tenney et al., 2019a), while work using challenge sets has frequently revealed that models built on top of these representations do not behave as though they have access to such rich features, rather they fail in trivial ways (Dasgupta et al., 2018; Glockner et al., 2018; Naik et al., 2018).
In this work, we attempt to link these two contrasting views of feature representations. We assume the standard recipe in NLP, in which linguistic representations are first derived from large-scale selfsupervised pre-training intended to encode broadly-useful linguistic features, and then are adapted for a task of interest via transfer learning, or fine-tuning, on a task-specific dataset. We test the
hypothesis that the extent to which a fine-tuned model uses a given feature can be explained as a function of two metrics: The extractability of the feature after pre-training (as measured by probing classifiers) and the evidence available during fine-tuning (defined as the rate of co-occurrence with the label). We first show results on a synthetic task, and second using state-of-the-art pre-trained LMs on language data. Our results suggest that probing classifiers can be viewed as a measure of the pre-trained representation’s inductive biases: The more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning.
Contribution. This work establishes a relationship between two widely-used techniques for analyzing LMs. Currently, the question of how models’ internal representations (measured by probing classifiers) influence model behavior (measured by challenge sets) remains open (Belinkov & Glass, 2019; Belinkov et al., 2020). Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
2 SETUP AND TERMINOLOGY
2.1 FORMULATION
Our motivation comes from McCoy et al. (2019), which demonstrated that, when fine-tuned on a natural language inference task (Williams et al., 2018, MNLI), a model based on a state-of-the-art pre-trained LM (Devlin et al., 2019, BERT) categorically fails on test examples which defy the expectation of a “lexical overlap heuristic”. For example, the model assumes that the sentence “the lawyer followed the judge” entails “the judge followed the lawyer” purely because all the words in the latter appear in the former. While this heuristic is statistically favorable given the model’s training data, it is not infallible. Specifically, McCoy et al. (2019) report that 90% of the training examples containing lexical overlap had the label “entailment”, but the remaining 10% did not. Moreover, the results of recent studies based on probing classifiers suggest that more robust features are extractable with high reliability from BERT representations. For example, given the example “the lawyer followed the judge”/“the judge followed the lawyer”, if the model can represent that “lawyer” is the agent of “follow” in the first sentence, but is the patient in the second, then the model should conclude that the sentences have different meanings. Such semantic role information can be recovered at > 90% accuracy from BERT embeddings (Tenney et al., 2019b). Thus, the question is: Why would a model prefer a weak feature over a stronger one, if both features are extractable from the model’s representations and justified by the model’s training data?
Abstracting over details, we distill the basic NLP task setting described above into the following, to be formalized in the Section 2.2. We assume a binary sequence classification task where a target feature t perfectly predicts the label (e.g., the label is 1 iff t holds). Here, t represents features which actually determine the label by definition, e.g., whether one sentence semantically entails another. Additionally, there exists a spurious feature s that frequently co-occurs with t in training but is not guaranteed to generalize outside of the training set. Here, s (often called a “heuristic” or “bias” elsewhere in the literature) corresponds to features like lexical overlap, which are predictive of the label in some datasets but are not guaranteed to generalize.
Assumptions. In this work, we assume there is a single t and a single s; in practice there may be many s features. Still, our definition of a feature accommodates multiple spurious or target features. In fact, some of our spurious features already encompass multiple features: the lexical feature, for example, is a combination of several individual-word features because it holds if one of a set of words is in the sentence. This type of spurious feature is common in real datasets: E.g., the hypothesis-only baseline in NLI is a disjunction of lexical features (with semantically unrelated words like “no”, “sleeping”, etc.) (Poliak et al., 2018b; Gururangan et al., 2018).
We assume that s and t frequently co-occur, but that only s occurs in isolation. This assumption reflects realistic NLP task settings since datasets always contain some heuristics, e.g., lexical cues, cultural biases, or artifacts from crowdsourcing (Gururangan et al., 2018). Thus, our experiments focus on manipulating the occurrence of s alone, but not t alone: This means giving the model evidence against relying on s. This is in line with prior applied work that attempts to influence model behavior by increasing the evidence against s during training (Elkahky et al., 2018; Zmigrod et al., 2019; Min et al., 2020).
2.2 DEFINITIONS
Let X be the set of all sentences and S be the space of all sentence-label pairs (x, y) ∈ X × {0, 1}. We use D ⊂ S to denote a particular training sample drawn from S. We define two types of binary features: target (t) and spurious (s). Each is a function from sentences x ∈ X to a binary label {0, 1} that indicates whether the feature holds.
Target and spurious features. The target feature t is such that there exists some function f : {0, 1} → {0, 1} such that ∀(x, y) ∈ S, f(t(x)) = y. In other words, the label can always be perfectly predicted given the value of t.1 A feature s is spurious if it is not a target feature.
Partitions of S. To facilitate analysis, we partition S in four regions (Figure 1). We define Ss-only to be the set of examples in which the spurious feature occurs alone (without the target). Similarly, St-only is the set of examples in which the target occurs without the spurious feature. Sboth and Sneither are analogous. For clarity, we sometimes drop the S∗ notation (e.g., s-only in place of Ss-only).
[Figure 1 illustrates the four regions of S and a sampled training set D, defined as:]
S_both = {(x, y) | t(x) = 1 ∧ s(x) = 1},  S_neither = {(x, y) | t(x) = 0 ∧ s(x) = 0},
S_t-only = {(x, y) | t(x) = 1 ∧ s(x) = 0},  S_s-only = {(x, y) | t(x) = 0 ∧ s(x) = 1}
Figure 1: We partition datasets into four sections, defined by the features (spurious and/or target) that hold. We sample training datasets D, which provide varying amounts of evidence against the spurious feature, in the form of s-only examples. In the illustration above, the s-only rate is 2/10 = 0.2, i.e., 20% of examples in D provide evidence that s alone should not be used to predict y.
Evidence from Spurious-Only Examples. We are interested in spurious features which are highly correlated with the target during training. Given a training sample D and features s and t, we define the s-only example rate as the evidence against the use of s as a predictor of y. Concretely, s-only rate = |Ds-only| / |D|, the proportion of training examples in which s occurs without t (and y = 0).
Use of Spurious Feature. If a model has falsely learned that the spurious feature s alone is predictive of the label, it will have a high error rate when classifying examples for which s holds but t does not. We define the s-only error to be the classifier’s error on examples from Ss-only. When relevant, t-only error, both error, and neither error are defined analogously. In this work, “feature use” is a model’s predictions consistency with that feature; we are not making a causal argument.
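The partitions and the two quantities above are simple to compute in code; the sketch below is illustrative (the function names and the predict interface are our own assumptions, not the authors' implementation).

def partition(examples, s, t):
    # examples: iterable of (x, y) pairs; s, t: functions from X to {0, 1}.
    parts = {"both": [], "t-only": [], "s-only": [], "neither": []}
    names = {(1, 1): "both", (1, 0): "t-only", (0, 1): "s-only", (0, 0): "neither"}
    for x, y in examples:
        parts[names[(t(x), s(x))]].append((x, y))
    return parts

def s_only_rate(train, s, t):
    # Fraction of training examples where s holds without t (evidence against using s).
    return len(partition(train, s, t)["s-only"]) / len(train)

def s_only_error(predict, test, s, t):
    # Error rate on the s-only region: high values indicate predictions consistent with using s.
    subset = partition(test, s, t)["s-only"]
    return sum(predict(x) != y for x, y in subset) / max(1, len(subset))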
Extractability of a Feature. We want to compare features in terms of how extractable they are given a representation. For example, given a sentence embedding, it may be possible to predict multiple features with high accuracy, e.g., whether the word “dog” occurs, and also whether the word “dog” occurs as the subject of the verb “run”. However, detecting the former will no doubt be an easier task than detecting the latter. We use the prequential minimum description length (MDL) Rissanen (1978)–first used by Voita & Titov (2020) for probing–to quantify this intuitive difference.2 MDL is an information-theoretic metric that measures how accurately a feature can be decoded and the amount of effort required to decode it. Formally, MDL measures the number of bits required to communicate the labels given the representations. Conceptually, MDL can be understood as a measure of the area under the loss curve: If a feature is highly extractable, a model trained to detect that feature will converge quickly to high accuracy, resulting in a low MDL. Computing MDL requires repeatedly training a model over a dataset labeled by the feature in question. To compute MDL(s), we train a classifier (without freezing any parameters) to differentiate Ss-only vs. Sneither, and similarly compute MDL(t). See Voita & Titov (2020) for additional details on MDL.3
1Without loss of generality, we define t in our datasets s.t. t(x) = y,∀x, y ∈ S. We do this to iron out the case where t outputs the opposite value of y.
2We observe similar overall trends when using an alternative metric based on validation loss (Appendix A.3).
3Note that our reported MDL is higher in some cases than that given by the uniform code (the number of sentences that are being encoded). The MDL is computed as a sum of the costs of transmitting successively longer blocks, using classifiers that are trained on previously transmitted data. The high MDLs are a result of overfitting by classifiers that are trained on limited data, and therefore the classifiers have worse compression performance than the uniform baseline.
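For reference, the prequential (online) code length can be computed along the following lines; this is only a sketch of the procedure described by Voita & Titov (2020), with a hypothetical probe interface (make_classifier, predict_proba) rather than their actual implementation.

import math

def prequential_mdl(blocks, make_classifier):
    # blocks: the labelled data split into blocks of increasing size, in transmission order.
    # make_classifier(data): returns a fresh probe trained on the data transmitted so far.
    seen, bits = list(blocks[0]), float(len(blocks[0]))  # first block sent with the uniform code (1 bit/label)
    for block in blocks[1:]:
        probe = make_classifier(seen)
        for x, y in block:
            p = probe.predict_proba(x)[y]                # probability assigned to the true label
            bits += -math.log2(max(p, 1e-12))            # cost, in bits, of transmitting this label
        seen.extend(block)
    return bits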
2.3 HYPOTHESIS
Stated using the above-defined terminology, our hypothesis is that a model’s use of the target feature is modulated by two factors: The relative extractability of the target feature t (compared to the spurious feature s), and the evidence from s-only examples provided by the training data. In particular, we expect that higher extractability of t (relative to s), measured by MDL(s)/MDL(t), will yield models that achieve better performance despite less training evidence.
3 EXPERIMENTS WITH SYNTHETIC DATA
Since it is often difficult to fully decouple the target feature from competing spurious features in practice, we first use synthetic data in order to test our hypothesis in a clean setting. We use a simple classifier with an embedding layer, a 1-layer LSTM, and an MLP with 1 hidden layer with tanh activation. We use a synthetic sentence classification task with k-length sequences of numbers as input and binary labels as output. We use a symbolic vocabulary V with the integers 0 . . . |V | − 1. We fix k = 10 and |V | = 50K. We begin with an initial training set of 200K, evenly split between examples from Sboth and Sneither. Then, varied across runs, we manipulate the evidence against the spurious feature (i.e., the s-only rate) by replacing a percentage p of the initial data with examples from Ss-only for p ∈ {0%, 0.1%, 1%, 5%, 10%, 20%, 50%}. Test and validation sets consist of 1,000 examples each from Sboth, Sneither, St-only, Ss-only. In all experiments, we set the spurious feature s to be the presence of the symbol 2. We consider several different target features t (Table 1), intended to vary in their extractability. Table 1 contains MDL metrics for each feature (computed on training sets of 200K, averaged over 3 random seeds). We see some gradation of feature extractability, but having more features with wider variation would help solidify our results.4
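A minimal generator for this synthetic setup is sketched below for the contains-1 target (label 1 iff the symbol 1 appears); the function names are ours, the exact replacement scheme may differ from the paper's, and other targets such as first-last would need their own planting logic.

import random

V, K, SPURIOUS, TARGET = 50_000, 10, 2, 1     # vocabulary size, sequence length, feature symbols

def make_example(has_t, has_s, rng):
    # Draw a sequence exhibiting exactly the requested features; the label equals t(x).
    while True:
        seq = [rng.randrange(3, V) for _ in range(K)]     # symbols 1 and 2 excluded by default
        if has_s:
            seq[rng.randrange(K)] = SPURIOUS
        if has_t:
            seq[rng.randrange(K)] = TARGET
        if (TARGET in seq) == has_t and (SPURIOUS in seq) == has_s:
            return seq, int(has_t)

def build_train(n, s_only_rate, seed=0):
    # Evenly split both/neither, then add s-only examples at the requested rate.
    rng = random.Random(seed)
    n_s = int(n * s_only_rate)
    n_both = (n - n_s) // 2
    data  = [make_example(True,  True,  rng) for _ in range(n_both)]
    data += [make_example(False, False, rng) for _ in range(n - n_both - n_s)]
    data += [make_example(False, True,  rng) for _ in range(n_s)]
    rng.shuffle(data)
    return data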
Figure 2 shows model performance as a function of s-only rate for each of the four features described above. Here, performance is reported using error rate (lower is better) on each partition (Ss-only, St-only, Sboth, Sneither) separately. We are primarily interested in whether the relative extractability of the target feature (compared to the spurious feature) predicts model performance. We indeed see a fairly clear relationship between the relative extractability (MDL(s) / MDL(t)) and model performance, at every level of training evidence (s-only rate). For example, when t is no less extractable than s (i.e., contains-1), the model achieves zero error at an s-only rate of 0.001, meaning it learns that t alone predicts the label despite having only a handful of examples that support this inference. In contrast, when t is harder to extract than s (e.g., first-last), the model fails to make this inference, even when a large portion of training examples provide evidence supporting it.
4 EXPERIMENTS WITH NATURALISTIC DATA
We investigate whether the same trend holds for language models fine-tuned with naturalistic data, e.g., grammar-generated English sentences. To do this, we fine-tune models for the linguistic acceptability task, a simple sequence classification task as defined in Warstadt & Bowman (2019),
4Note, all models are ultimately able to learn to detect t (achieve high test accuracy) on the both partition, but not on the t-only partition.
in which the goal is to differentiate grammatical sentences from ungrammatical ones. We focus on acceptability judgments since formal linguistic theory guides how we define the target features, and recent work in computational linguistics shows that neural language models can be sensitive to spurious features in this task (Marvin & Linzen, 2018; Warstadt et al., 2020a).
4.1 DATA
We design a series of simple natural language grammars that generate a variety of feature pairs (s, t), which we expect will exhibit different levels of relative extractability (MDL(s) / MDL(t)). We focus on three syntactic phenomena (described below). In each case, we consider the target feature t to be whether a given instance of the phenomenon obeys the expected syntactic rules. We then introduce several spurious features s which we deliberately correlate with the positive label during fine-tuning. The Subject-Verb Agreement (SVA) construction requires detecting whether the verb agrees in number with its subject, e.g., “the girls are playing” is acceptable while “the girls is playing” is not. In general, recognizing agreement requires some representation of hierarchical syntax, since subjects may be separated from their verbs by arbitrarily long clauses. We introduce four spurious features: 1) lexical, grammatical sentences begin with specific lexical items (e.g., “often”); 2) length, grammatical sentences are longer; 3) recent-noun, verbs in grammatical sentences agree with the immediately preceding noun (in addition to their subject); and 4) plural, verbs in grammatical sentences are preceded by singular nouns as opposed to plural ones.
The Negative Polarity Items (NPI) construction requires detecting whether a negative polarity item (e.g., “any”, “ever”) is grammatical in a given context, e.g., “no girl ever played” is acceptable while “a girl ever played” is not. In general, NPIs are only licensed in contexts that fall within the scope of a downward entailing operator (such as negation). We again consider four types of spurious features: 1) lexical, in which grammatical sentences always include one of a set of lexical items (“no” and “not”); 2) length (as above); 3) plural, in which each noun in a grammatical sentence is singular, as opposed to plural; and 4) tense, in which grammatical sentences are in present tense.
Some verbs (e.g. “recognize”) require a direct object. However, in the right syntactic contexts (i.e., when in the correct syntactic relation with a wh-word), the object position can be empty, creating what is known as a “gap”. E.g., “I know what you recognized ” is acceptable while “I know that you recognized ” is not. The Filler-Gap Dependencies (GAP) construction requires detecting
whether a sentence containing a gap is grammatical. For our GAP tasks, we again consider four spurious features (lexical, length, plural, and tense), defined similarly to above.
The templates above (and slight variants) result in 20 distinct fine-tuning datasets, over which we perform our analyses (see Appendix for details). Table 2 shows several examples. For the purposes of this paper, we are interested only in the relative extractability of t vs. s given the pre-trained representation; we don’t intend to make general claims about the linguistic phenomena per se. Thus, we do not focus on the details of the features themselves, but rather consider each template as generating one data point, i.e., an (s, t) pair representing a particular level of relative extractability.
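To make the template setup concrete, here is an illustrative generator for an SVA task with a sentence-initial lexical cue as the spurious feature; the particular words and templates are our own simplifications, not the actual grammars used to build the 20 datasets.

import random

SUBJECTS = {"sg": ["the girl", "the boy"], "pl": ["the girls", "the boys"]}
VERBS = {"sg": "is playing", "pl": "are playing"}
CUES = ["often", "certainly"]     # hypothetical sentence-initial lexical cues

def sva_example(grammatical, with_cue, rng):
    # Grammatical sentences agree in number; ungrammatical ones use the mismatched verb.
    num = rng.choice(["sg", "pl"])
    verb = VERBS[num] if grammatical else VERBS["pl" if num == "sg" else "sg"]
    cue = rng.choice(CUES) + " " if with_cue else ""
    return cue + rng.choice(SUBJECTS[num]) + " " + verb, int(grammatical)

def sva_split(n, s_only_rate, seed=0):
    rng = random.Random(seed)
    n_s = int(n * s_only_rate)
    n_both = (n - n_s) // 2
    data  = [sva_example(True,  True,  rng) for _ in range(n_both)]             # both
    data += [sva_example(False, False, rng) for _ in range(n - n_both - n_s)]   # neither
    data += [sva_example(False, True,  rng) for _ in range(n_s)]                # s-only
    rng.shuffle(data)
    return data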
4.2 SETUP
We evaluate T5, BERT, RoBERTa, GPT-2 and an LSTM with GloVe embeddings (Raffel et al., 2020; Devlin et al., 2019; Liu et al., 2019b; Radford et al., 2019; Pennington et al., 2014).5 Both T5 and BERT learn to perform well over the whole test set, whereas the GloVe model struggles with many of the tasks. We expect that this is because contextualized pre-training encodes certain syntactic features which let the models better leverage small training sets (Warstadt & Bowman, 2020). Again, we begin with an initial training set of 2000 examples, evenly split between both and neither, and then introduce s-only examples at rates of 0%, 0.1%, 1%, 5%, 10%, 20%, and 50%, using three random seeds each. Test and validation sets consist of 1000 examples each from Sboth, Sneither, Ss-only. In the natural language setting, it is often difficult to generate t-only examples, and thus we cannot compute extractability of the target feature t by training a classifier to distinguish St-only from a random subset of Sneither, as we did in Section 3. Therefore, we estimate MDL by training a classifier to distinguish between examples from Ss-only and examples from Sboth. Using the simulated data from Section 3, we confirm that both methods (Ss-only vs. Sboth and St-only vs. Sneither) produce similar estimates of MDL(t) (see Appendix). Per model, we filter out feature pairs for which the model could not achieve at least 90% accuracy on each probing task in isolation.6
4.3 RESULTS
For each (s, t) feature pair, we plot the use of the spurious feature (s-only error) as a function of the evidence against the spurious feature seen in training (s-only example rate).7 We expect to see the same trend we observed in our synthetic data, i.e., the more extractable the target feature t is relative to the spurious feature s, the less evidence the model will require before preferring t over s. To quantify this trend, we compute correlations between 1) the relative extractability of t compared to s and 2) the test F-score averaged across all rates and partitions of the data8, capturing how readily the model uses (i.e., makes predictions consistent with the use of) the target feature.
5In pilot studies, we found that standard BOW and CNN-based models were unable to solve the tasks. 6This control does not impact results: Appendix A.1. 7See Appendix for both error and neither error; both are stable and low in general. 8Initially, we used a more complicated metric based on the s example rate required for the model to solve the test set. Both report similar trends and correlations. For posterity, we include details in Appendix A.2.
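The correlation analysis itself is a one-liner once the per-template statistics are collected; the sketch below assumes a Spearman rank correlation and a simple record format (keys mdl_s, mdl_t, avg_f1), both of which are our assumptions rather than details given in the paper.

import numpy as np
from scipy.stats import spearmanr

def extractability_correlation(templates):
    # templates: one record per (s, t) pair with the probing MDLs and the mean test F-score.
    ratio = np.array([rec["mdl_s"] / rec["mdl_t"] for rec in templates])   # relative extractability
    f1 = np.array([rec["avg_f1"] for rec in templates])
    rho, pval = spearmanr(ratio, f1)
    return rho, pval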
Figure 3 shows these correlations and associated scatter plots. We can see that relative extractability is strongly correlated with average test F-score (Figure 3a), showing high correlations for both BERT (ρ = 0.79) and T5 (ρ = 0.57). That is, the more extractable t is relative to s, the less evidence the model requires before preferring t, performing better across all partitions. This relationship holds regardless of whether relative extractability is computed using a ratio of MDL scores or an absolute difference. We also see that, in most cases, the relative extractability explains the model’s behavior better than does the extractability of s or t alone. For GloVe there is little variation in model behavior: For most of the 11/20 pairs on which the model is able to learn the task, it requires an s-only example rate of 0.5. Thus, the correlations are weak, but qualitative results appear steady (Figure 8 in Appendix A), following the pattern that when s is easier to extract than t, more evidence is required to stop using s.
Figure 4 shows the performance curves for BERT and T5 (with others in Appendix A), i.e., use of the spurious feature (s-only error) as a function of the evidence from s-only examples seen in training (s-only example rate). Each line corresponds to a different (s, t) feature pair, and each data point is the test performance on a dataset with a given s-only example rate (which varies along the x-axis). For pairs with high MDL ratios (i.e., when t is actually easier to extract than s), the model learns to solve the task "the right way" even when the training data provides no incentive to do so: That is, in such cases, the models' decisions do not appear to depend on the spurious feature s even when s and the target feature t perfectly co-occur in the fine-tuning data.
Figure 4 shows that T5 (compared to BERT) requires more data to perform well. This may be because we fine-tuned T5 with a linear classification head, rather than the text-only output on which it was pre-trained. We made this decision 1) because we had trouble training T5 in the original manner, and 2) using a linear classification head was consistent with the other model architectures.
5 DISCUSSION
Our experimental results provide support for our hypothesis: The relative extractability of features given an input representation (as measured by information-theoretic probing techniques) is predictive of the decisions a trained model will make in practice. In particular, we see evidence that models will tend to use imperfect features that are more readily extractable over perfectly predictive features that are harder to extract. This insight is highly related to prior work which has shown, e.g., that neural networks learn "easy" examples before they learn "hard" examples (Mangalam & Prabhu, 2019). Our findings additionally connect to new probing techniques which have received significant attention in NLP but have yet to be connected to explanations of or predictions about state-of-the-art models' decisions in practice.
Fine-tuning may not uncover new features. The models are capable of learning both the s and t features in isolation, so our experiments show that if the relative extractability is highly skewed, one feature may hide the other: a fine-tuned model may not use the harder-to-extract feature. This suggests a pattern that seems intuitive but is in fact non-trivial: If one classifier does not pick up on a feature readily enough, another classifier (or, rather, the same classifier trained with different data) may not be sensitive to that feature at all. This has ramifications for how we view fine-tuning, which is generally considered to be beneficial because it allows models to learn new, task-relevant features. Our findings suggest that if the needed feature is not already sufficiently extractable after pre-training, fine-tuning may not have the desired effect.
Probing classifiers can be viewed as measures of a pre-trained representation’s inductive biases. Analysis with probing classifiers has primarily focused on whether important linguistic features can be decoded from representations at better-than-baseline rates, but there has been little insight about what it would mean for a representations’ encoding of a feature to be “sufficient”. Based on these experiments, we argue that a feature is “sufficiently” encoded if it is as available to the model as are surface features of the text. For example, if a fine-tuned model can access features about a word’s semantic role as easily as it can access features about that word’s lexical identity, the model may need little (or no) explicit training signal to prefer a decision rule based on the former structural feature. The desire for models with such behavior motivates the development of architectures with explicit inductive biases (e.g., TreeRNNs). Evidence that similar generalization behavior
can result from pre-trained representations has exciting implications for those interested in sample efficiency and cognitively-plausible language learning (Warstadt & Bowman, 2020; Linzen, 2020). We note that this work has not established that the relationship between extractability and feature use is causal. This could be explored using intermediate task training (Pruksachatkun et al., 2020) in order to influence the extractability of features prior to fine-tuning for the target task; e.g., Merchant et al. (2020) suggests fine-tuning on parsing might improve the extractability of syntactic features.
6 RELATED WORK
Significant prior work analyzes the representations and behavior of pre-trained LMs. Work using probing classifiers (Veldhoen et al., 2016; Adi et al., 2017; Conneau et al., 2018; Hupkes et al., 2018) suggests that such models capture a wide range of relevant linguistic phenomena (Hewitt & Manning, 2019; Bau et al., 2019; Dalvi et al., 2019; Tenney et al., 2019a;b). Similar techniques include attention maps/visualizations (Voita et al., 2019; Serrano & Smith, 2019), and relational similarity analyses (Chrupała & Alishahi, 2019). A parallel line of work uses challenge sets to understand model behavior in practice. Some works construct evaluation sets to analyze weaknesses in the decision procedures of neural NLP models (Jia & Liang, 2017b; Glockner et al., 2018; Dasgupta et al., 2018; Gururangan et al., 2018; Poliak et al., 2018b; Elkahky et al., 2018; Ettinger et al., 2016; Linzen et al., 2016; Isabelle et al., 2017; Naik et al., 2018; Jia & Liang, 2017a; Linzen et al., 2016; Goldberg, 2019, and others). Others use such datasets to improve models’ handling of linguistic features (Min et al., 2020; Poliak et al., 2018a; Liu et al., 2019a), or to mitigate biases (Zmigrod et al., 2019; Zhao et al., 2018; 2019; Hall Maudslay et al., 2019; Lu et al., 2020). Nie et al. (2020) and Kaushik et al. (2020) explore augmenting training sets with human-in-the-loop methods.
Our work is related to work on generalization of neural NLP models. Geiger et al. (2019) discusses ways in which evaluation tasks should be sensitive to models’ inductive biases and Warstadt & Bowman (2020) discusses the ability of language model pre-training to encode such inductive biases. Work on data augmentation (Elkahky et al., 2018; Min et al., 2020; Zmigrod et al., 2019) is relevant, as the approach relies on the assumption that altering the training data distribution (analogous to what we call s-only rate in our work) will improve model behavior in practice. Kodner & Gupta (2020); Jha et al. (2020) discuss concerns about ways in which such approaches can be counterproductive, by introducing new artifacts. Work on adversarial robustness (Ribeiro et al., 2018; Iyyer et al., 2018; Hsieh et al., 2019; Jia et al., 2019; Alzantot et al., 2018; Hsieh et al., 2019; Ilyas et al., 2019; Madry et al., 2017; Athalye et al., 2018) is also relevant, as it relates to the influence of dataset artifacts on models’ decisions. A still larger body of work studies feature representation and generalization in neural networks outside of NLP. Mangalam & Prabhu (2019) show that neural networks learn “easy” examples (as defined by shallow machine learning model performance) before they learn “hard” examples. Zhang et al. (2016) and Arpit et al. (2017) show that neural networks which are capable of memorizing noise nonetheless acheive good generalization performance, suggesting that such models might have an inherent preference to learn more general features. Finally, ongoing theoretical work characterizes the ability of over-parameterized networks to generalize in terms of complexity (Neyshabur et al., 2019) and implicit regularization (Blanc et al., 2020).
Concurrent work (Warstadt et al., 2020b) also investigates the inductive biases of large pre-trained models (RoBERTa); in particular, they ask when (at what amount of pre-training data) such models shift from a surface feature (what we call spurious features) to a linguistic feature (what we call a target feature). In our work, we focus on how to predict which of these two biases characterizes the model (via relative MDL).
7 CONCLUSION
This work bears on an open question in NLP, namely, the question of how models’ internal representations (as measured by probing classifiers) influence model behavior (as measured by challenge sets). We find that the feature extractability can be viewed as an inductive bias: the more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning. Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
ACKNOWLEDGEMENTS
We would like to thank Michael Littman for helpful suggestions on how to better present our findings and Ian Tenney for insightful comments on a previous draft of this work. We also want to thank our reviewers for their detailed and helpful comments. This work is supported by DARPA under grant number HR00111990064. This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University.
A ADDITIONAL RESULTS
Figures 6, 7, 8, 9, and 10 show additional results for all models over all partitions (both accuracy, neither accuracy, and F-score). These charts appear at the end of the Appendix.
Details on the MDL statistics are available in Table 3.
A.1 BEYOND ACCURACY?
For the transformer models, the models are able to solve all of the spurious and target features in isolation during probing for 18 of the 20 feature pairs. (They do solve the test set in all cases; it is just that two of the spurious features ended up being very difficult for the models.) During the reviews, we did not control for the cases where the model did not solve the probing task. These two extraneous points accentuate the line-plot curves, but do not change the character of the results (nor much adjust the correlations). In the current version of the paper, we control for accuracy by filtering out these cases. With or without this control, accuracy provides no predictive power about the inductive biases. We present the correlations without filtering for these cases for consistency with the reviews (Table 4 above); we nonetheless believe it is important to control for these cases because they could have acted as giveaways, where even accuracy might have worked as a predictor.
A.2 ALTERNATE METRIC: s-RATE★
We initially used a different metric when computing the correlations to compact the lineplots. Rather than using the average test performance, we looked at the evidence required for the model to solve the test set. Both of these metrics conceptually capture what we are interested in, but the new one (simply averaging test performance) is much easier to understand, and captures the performance across all partitions. Here we report the correlations with this evidence required metric instead, which we called s-rate★. Specifically, we defined it to be: s-rate★ is the lowest s-only example rate at which the fine-tuned model achieves essentially perfect performance (F-score > 0.99) (see Figure 5a). Intuitively, s-rate★ is the (observed) minimum amount of evidence from which the model can infer that t alone is predictive of the label. See Table 5b for the results.
(a) Evidence Required: s-rate★ (threshold: F-score > 0.99).

            Absolute              Relative (t to s)
         Target    Spurious      Ratio      Diff
BERT      0.68*    -0.63*       -0.79*     -0.81*
RoBERTa   0.04     -0.69*       -0.71*     -0.69*
T5        0.81*    -0.03        -0.55*     -0.65*
GPT2     -0.11     -0.24        -0.29      -0.32
GloVe     0.29     -0.38        -0.48      -0.48

(b) Using Evidence Required (s-rate★) instead of Average F-score. The correlations are negative instead of positive: As the extractability increases, less evidence is required for the model to perform well.
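To make the definition concrete, a minimal sketch of reading s-rate★ off a model's per-rate results is given below; the F-scores and the set of rates are hypothetical values for illustration, not numbers from our experiments.

```python
# Sketch: s-rate* is the lowest s-only example rate at which the fine-tuned
# model reaches near-perfect performance (F-score > 0.99). Values are made up.

def s_rate_star(fscore_by_rate, threshold=0.99):
    solved = [rate for rate, f in fscore_by_rate.items() if f > threshold]
    return min(solved) if solved else None  # None: never solved at any tested rate

fscore_by_rate = {0.0: 0.62, 0.001: 0.71, 0.01: 0.93, 0.05: 0.995, 0.1: 1.0, 0.2: 1.0, 0.5: 1.0}
print(s_rate_star(fscore_by_rate))  # 0.05
```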
A.3 RESULTS USING AUC INSTEAD OF MDL
See Table 5. A metric similar to MDL that captures the same intuition is the area under the validation loss curve (AUC). This metric is closely related to online MDL in how it is computed.
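As a rough illustration, the area under a validation loss curve can be computed with a simple trapezoidal sum; the per-step loss values below are placeholders.

```python
# Sketch: area under the validation loss curve (AUC) as an alternative to MDL.
# The per-evaluation-step losses below are placeholders, not measured values.
val_loss = [0.69, 0.52, 0.31, 0.18, 0.11, 0.08]
auc = sum((a + b) / 2 for a, b in zip(val_loss[:-1], val_loss[1:]))  # trapezoid rule, dx = 1
print(round(auc, 3))  # lower AUC suggests a more extractable feature
```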
B IMPLEMENTATION DETAILS & REPRODUCIBILITY
Our code is available at: https://github.com/cjlovering/predicting-inductive-biases.
There are two major parts to this project in terms of reproducibility: (1) the data and (2) the model implementations. We describe the templates for the data below in Appendix D – the full details are in the project source. For the transformer models, we use Hugging Face for the implementations and access to the pre-trained embeddings (Wolf et al., 2020). We use PyTorch Lightning to organize the training code (Falcon, 2019). We fix all hyperparameters, which are reported in Table 6.
We want to call attention to BERT requiring much less data than T5 to capture our target features. At face value, it seems that BERT requires much less data than T5; however, we are wary about making such strong claims. Something to consider here is that for T5 we used a linear classification head rather than formatting the task in text (which is how T5 is pre-trained). We made this decision (1) because we had trouble training T5 in this purely textual manner, and (2) because using a linear classification head over two classes is consistent with the other model architectures. GPT2 and RoBERTa performed on par with BERT, so the difference between the performance of BERT and T5 may be due to how we trained T5.
C MEASURING EXTRACTABILITY INDIRECTLY
We measure the MDL for t using examples from the both and s-only partitions. In the simulated setting, we can compare this approach with measuring the MDL directly (t-only vs. neither). See Table 7 for MDL results. The ordering of feature difficulty holds across the two methods.
D TEMPLATES FOR NATURALISTIC DATA
Each template corresponds to a combination of target features, grammars, and spurious features (the target and spurious features are discussed in Section 4.1). See Table 8 for a complete list of templates. See Table 10 for further details about the templates that are used for each of the target features. Complete details about implementation of these templates (and all data) will be released upon acceptance.
E WHY EXACTLY IS IT HARD TO GENERATE T-ONLY EXAMPLES?
Target features may be unavoidably linked to spurious ones. For example, for a Negative Polarity Item to be licensed (perhaps smoothing over some intricacies), the NPI (“any”, “all”, etc.) must occur in a downward-entailing context. These downward-entailing contexts are created by triggers, e.g., a negative word like “no” or “not”, or a quantifier like “some”. Linguists who study the problem have assembled a list of such triggers (see Hoeksema (2008)). Arguably, one cannot write down a correct example of NPI licensing that doesn’t contain one of these memorizable triggers. Thus, we cannot train or test models on correct examples of NPI usage while simultaneously preventing them from having access to trigger-specific features.
Similar to the NPI example, it is not possible (to our knowledge) to construct target-only examples for filler-gap, since the construction requires a wh-word and a syntactic gap; thus, we cannot create a positively labeled, grammatical sentence that exhibits a filler-gap dependency without these elements.
In summary, target-only examples may add new spurious features (as with NPI), or be impossible to construct because the presence of the target feature implies the presence of the spurious feature (as with filler gaps). Still, our setup permits the MDL to be computed directly with target-only examples, and so, in cases where it is feasible to create target-only examples (e.g. the Subject-Verb Agreement templates), it would have bolstered our argument to do so.
F MDL ISSUES: OVERFITTING IN THE SYNTHETIC EXPERIMENTS
We found that the MDL exceeds the uniform code length in some of the synthetic experiments; this occurs because the model overfits on the small early block sizes. See Figure 11. | 1. What is the main contribution of the paper, and how does it address the seeming contradiction between encoding linguistic information and not using it during fine-tuning?
2. How does the author explain the hypothesis that a model's use of a given feature can be explained by extractability and predictive reliability?
3. What are the strengths and weaknesses of the paper regarding its clarity, intuition, and contributions?
4. Do you have any concerns or suggestions regarding the measurement of extractability and its relation to classification performance?
5. Are there any issues with the presentation of information in figures 2 and 3?
6. Is there anything else that the reviewer would like to mention or suggest for improvement? | Review | Review
After reading author responses:
Thank you to the authors for your detailed responses. With regard to the highlighted implication that "the harder feature can be obscured completely by a spurious one; i.e., there are settings in which the model just won't adopt the harder feature at all" -- to clarify, while my phrasing may not have made this apparent, I was assuming this implication in my interpretation of the results. So my impression of the finding is not changed substantially by the author response. However, I do want to give appropriate acknowledgment of the value of explicitly testing/confirming intuitive explanations of model behaviors, and it is clear that other reviewers find value in the contribution, so I am bumping my score up a bit.
This paper addresses a seeming contradiction between findings that indicate encoding of linguistic information in models' internal representations, and findings that show models not to use more sophisticated linguistic information during fine-tuning on downstream tasks. The paper hypothesizes that a model's use of a given feature can be explained as a function of extractability of the feature in combination with the amount of evidence for that feature's predictive reliability. The authors test on toy, non-language data as well as natural language data, and find support for their hypothesis.
All in all I think this is a reasonably clear and well-written paper, with a concrete and intuitive hypothesis. My main concern is that the motivating issue is a bit of a strawman, in that the posited explanation was fairly obvious as a means of reconciling the "contradiction" raised at the start of the paper. I can't speak for the rest of the community, and it may be that this is something that people have found puzzling -- but speaking for myself I can say that I haven't at any point considered the highlighted "contradiction" to be a contradiction, having simply assumed something like the explanation hypothesized in this paper. Now, there is of course value in providing concrete evidence supporting intuitive assumptions made by the community. However, as the authors point out, related intuitions have already been supported by, e.g., evidence that models will more readily pick up on "easy" examples over "difficult" examples. So it's not clear to me that the paper is making a sufficiently novel, surprising contribution at present.
I think one way in which these findings would be more compelling would be if the measure of extractability were defined independently of empirical classifier sensitivity. As it is, the experiments are seemingly demonstrating that the more readily a classifier is able to pick up on a given feature, the more readily another classifier will use that feature during learning. I have to assume that this will strike most readers as obvious. However, if extractability/MDL were measured independently of classification performance, then we would presumably learn some interesting and valuable things about what determines extractability for these models.
Smaller notes:
The two sets of experiments are described as "synthetic" versus "natural language" -- but if I'm understanding correctly, the natural language examples are generated synthetically. If this is correct, then the current framing of the distinction is misleading.
Figure 2 is difficult to interpret, and the placement of the legend is odd-looking and confusing. Fig 3 is also pretty difficult to extract information from -- generally presentation of information could be made clearer for the reader.
The wording on p3 can be taken to imply that MDL was introduced by Voita & Titov (2020). I would recommend rephrasing and/or also citing earlier MDL references. |
ICLR | Title
Predicting Inductive Biases of Pre-Trained Models
Abstract
Most current NLP systems are based on a pre-train-then-fine-tune paradigm, in which a large neural network is first trained in a self-supervised way designed to encourage the network to extract broadly-useful linguistic features, and then finetuned for a specific task of interest. Recent work attempts to understand why this recipe works and explain when it fails. Currently, such analyses have produced two sets of apparently-contradictory results. Work that analyzes the representations that result from pre-training (via “probing classifiers”) finds evidence that rich features of linguistic structure can be decoded with high accuracy, but work that analyzes model behavior after fine-tuning (via “challenge sets”) indicates that decisions are often not based on such structure but rather on spurious heuristics specific to the training set. In this work, we test the hypothesis that the extent to which a feature influences a model’s decisions can be predicted using a combination of two factors: The feature’s extractability after pre-training (measured using information-theoretic probing techniques), and the evidence available during finetuning (defined as the feature’s co-occurrence rate with the label). In experiments with both synthetic and naturalistic data, we find strong evidence (statistically significant correlations) supporting this hypothesis.
1 INTRODUCTION
Large pre-trained language models (LMs) (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020) have demonstrated impressive empirical success on a range of benchmark NLP tasks. However, analyses have shown that such models are easily fooled when tested on distributions that differ from those they were trained on, suggesting they are often “right for the wrong reasons” (McCoy et al., 2019). Recent research which attempts to understand why such models behave in this way has primarily made use of two analysis techniques: probing classifiers (Adi et al., 2017; Hupkes et al., 2018), which measure whether or not a given feature is encoded by a representation, and challenge sets (Cooper et al., 1996; Linzen et al., 2016; Rudinger et al., 2018), which measure whether model behavior in practice is consistent with use of a given feature. The results obtained via these two techniques currently suggest different conclusions about how well pre-trained representations encode language. Work based on probing classifiers has consistently found evidence that models contain rich information about syntactic structure (Hewitt & Manning, 2019; Bau et al., 2019; Tenney et al., 2019a), while work using challenge sets has frequently revealed that models built on top of these representations do not behave as though they have access to such rich features, rather they fail in trivial ways (Dasgupta et al., 2018; Glockner et al., 2018; Naik et al., 2018).
In this work, we attempt to link these two contrasting views of feature representations. We assume the standard recipe in NLP, in which linguistic representations are first derived from large-scale self-supervised pre-training intended to encode broadly-useful linguistic features, and then are adapted for a task of interest via transfer learning, or fine-tuning, on a task-specific dataset. We test the
hypothesis that the extent to which a fine-tuned model uses a given feature can be explained as a function of two metrics: The extractability of the feature after pre-training (as measured by probing classifiers) and the evidence available during fine-tuning (defined as the rate of co-occurrence with the label). We first show results on a synthetic task, and second using state-of-the-art pre-trained LMs on language data. Our results suggest that probing classifiers can be viewed as a measure of the pre-trained representation’s inductive biases: The more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning.
Contribution. This work establishes a relationship between two widely-used techniques for analyzing LMs. Currently, the question of how models’ internal representations (measured by probing classifiers) influence model behavior (measured by challenge sets) remains open (Belinkov & Glass, 2019; Belinkov et al., 2020). Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
2 SETUP AND TERMINOLOGY
2.1 FORMULATION
Our motivation comes from McCoy et al. (2019), which demonstrated that, when fine-tuned on a natural language inference task (Williams et al., 2018, MNLI), a model based on a state-of-the-art pre-trained LM (Devlin et al., 2019, BERT) categorically fails on test examples which defy the expectation of a “lexical overlap heuristic”. For example, the model assumes that the sentence “the lawyer followed the judge” entails “the judge followed the lawyer” purely because all the words in the latter appear in the former. While this heuristic is statistically favorable given the model’s training data, it is not infallible. Specifically, McCoy et al. (2019) report that 90% of the training examples containing lexical overlap had the label “entailment”, but the remaining 10% did not. Moreover, the results of recent studies based on probing classifiers suggest that more robust features are extractable with high reliability from BERT representations. For example, given the example “the lawyer followed the judge”/“the judge followed the lawyer”, if the model can represent that “lawyer” is the agent of “follow” in the first sentence, but is the patient in the second, then the model should conclude that the sentences have different meanings. Such semantic role information can be recovered at > 90% accuracy from BERT embeddings (Tenney et al., 2019b). Thus, the question is: Why would a model prefer a weak feature over a stronger one, if both features are extractable from the model’s representations and justified by the model’s training data?
Abstracting over details, we distill the basic NLP task setting described above into the following, to be formalized in Section 2.2. We assume a binary sequence classification task where a target feature t perfectly predicts the label (e.g., the label is 1 iff t holds). Here, t represents features which actually determine the label by definition, e.g., whether one sentence semantically entails another. Additionally, there exists a spurious feature s that frequently co-occurs with t in training but is not guaranteed to generalize outside of the training set. Here, s (often called a “heuristic” or “bias” elsewhere in the literature) corresponds to features like lexical overlap, which are predictive of the label in some datasets but are not guaranteed to generalize.
Assumptions. In this work, we assume there is a single t and a single s; in practice there may be many s features. Still, our definition of a feature accommodates multiple spurious or target features. In fact, some of our spurious features already encompass multiple features: the lexical feature, for example, is a combination of several individual-word features because it holds if one of a set of words is in the sentence. This type of spurious feature is common in real datasets: E.g., the hypothesis-only baseline in NLI is a disjunction of lexical features (with semantically unrelated words like “no”, “sleeping”, etc.) (Poliak et al., 2018b; Gururangan et al., 2018).
We assume that s and t frequently co-occur, but that only s occurs in isolation. This assumption reflects realistic NLP task settings since datasets always contain some heuristics, e.g., lexical cues, cultural biases, or artifacts from crowdsourcing (Gururangan et al., 2018). Thus, our experiments focus on manipulating the occurrence of s alone, but not t alone: This means giving the model evidence against relying on s. This is in line with prior applied work that attempts to influence model behavior by increasing the evidence against s during training (Elkahky et al., 2018; Zmigrod et al., 2019; Min et al., 2020).
2.2 DEFINITIONS
Let X be the set of all sentences and S be the space of all sentence-label pairs (x, y) ∈ X × {0, 1}. We use D ⊂ S to denote a particular training sample drawn from S. We define two types of binary features: target (t) and spurious (s). Each is a function from sentences x ∈ X to a binary label {0, 1} that indicates whether the feature holds.
Target and spurious features. The target feature t is such that there exists some function f : {0, 1} → {0, 1} such that ∀(x, y) ∈ S, f(t(x)) = y. In other words, the label can always be perfectly predicted given the value of t.1 A feature s is spurious if it is not a target feature.
Partitions of S. To facilitate analysis, we partition S in four regions (Figure 1). We define Ss-only to be the set of examples in which the spurious feature occurs alone (without the target). Similarly, St-only is the set of examples in which the target occurs without the spurious feature. Sboth and Sneither are analogous. For clarity, we sometimes drop the S∗ notation (e.g., s-only in place of Ss-only).
Sboth = {(x, y) | t(x) = 1 ∧ s(x) = 1}    St-only = {(x, y) | t(x) = 1 ∧ s(x) = 0}
Ss-only = {(x, y) | t(x) = 0 ∧ s(x) = 1}    Sneither = {(x, y) | t(x) = 0 ∧ s(x) = 0}
Figure 1: We partition datasets into four sections, defined by the features (spurious and/or target) that hold. We sample training datasets D, which provide varying amounts of evidence against the spurious feature, in the form of s-only examples. In the illustration above, the s-only rate is 2/10 = 0.2, i.e., 20% of examples in D provide evidence that s alone should not be used to predict y.
Evidence from Spurious-Only Examples. We are interested in spurious features which are highly correlated with the target during training. Given a training sample D and features s and t, we define the s-only example rate as the evidence against the use of s as a predictor of y. Concretely, s-only rate = |Ds-only| / |D|, the proportion of training examples in which s occurs without t (and y = 0).
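The bookkeeping behind these definitions is straightforward; the sketch below is illustrative only, with toy features t and s assumed to be available as Python callables on inputs x.

```python
# Sketch: partition a labeled dataset by the target feature t and the spurious
# feature s, then compute the s-only example rate. Toy features for illustration.

def partition(data, t, s):
    parts = {"both": [], "t-only": [], "s-only": [], "neither": []}
    names = {(1, 1): "both", (1, 0): "t-only", (0, 1): "s-only", (0, 0): "neither"}
    for x, y in data:
        parts[names[(t(x), s(x))]].append((x, y))
    return parts

def s_only_rate(data, t, s):
    return len(partition(data, t, s)["s-only"]) / len(data)

# Toy instantiation: t = "contains the symbol 1", s = "contains the symbol 2"; y = t(x).
t = lambda x: int(1 in x)
s = lambda x: int(2 in x)
data = [([1, 2, 5], 1), ([1, 7, 9], 1), ([2, 4, 6], 0), ([3, 4, 8], 0), ([2, 3, 9], 0)]
print(s_only_rate(data, t, s))  # 0.4: two of the five examples are s-only
```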
Use of Spurious Feature. If a model has falsely learned that the spurious feature s alone is predictive of the label, it will have a high error rate when classifying examples for which s holds but t does not. We define the s-only error to be the classifier’s error on examples from Ss-only. When relevant, t-only error, both error, and neither error are defined analogously. In this work, “feature use” refers to the consistency of a model’s predictions with that feature; we are not making a causal argument.
Extractability of a Feature. We want to compare features in terms of how extractable they are given a representation. For example, given a sentence embedding, it may be possible to predict multiple features with high accuracy, e.g., whether the word “dog” occurs, and also whether the word “dog” occurs as the subject of the verb “run”. However, detecting the former will no doubt be an easier task than detecting the latter. We use the prequential minimum description length (MDL; Rissanen, 1978), first used by Voita & Titov (2020) for probing, to quantify this intuitive difference.2 MDL is an information-theoretic metric that measures how accurately a feature can be decoded and the amount of effort required to decode it. Formally, MDL measures the number of bits required to communicate the labels given the representations. Conceptually, MDL can be understood as a measure of the area under the loss curve: If a feature is highly extractable, a model trained to detect that feature will converge quickly to high accuracy, resulting in a low MDL. Computing MDL requires repeatedly training a model over a dataset labeled by the feature in question. To compute MDL(s), we train a classifier (without freezing any parameters) to differentiate Ss-only vs. Sneither, and similarly compute MDL(t). See Voita & Titov (2020) for additional details on MDL.3
1Without loss of generality, we define t in our datasets s.t. t(x) = y, ∀(x, y) ∈ S. We do this to iron out the case where t outputs the opposite value of y.
2We observe similar overall trends when using an alternative metric based on validation loss (Appendix A.3).
3Note that our reported MDL is higher in some cases than that given by the uniform code (the number of sentences that are being encoded). The MDL is computed as a sum of the costs of transmitting successively longer blocks, using classifiers that are trained on previously transmitted data. The high MDLs are a result of overfitting by classifiers that are trained on limited data, and therefore the classifiers have worse compression performance than the uniform baseline.
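The sketch below illustrates the online (prequential) coding recipe at a high level: a probe is refit on each successively larger prefix of the data and pays the cross-entropy cost, in bits, of the next block, plus a uniform cost for the first block. The block schedule, the logistic-regression probe, and the toy data are simplifications for exposition (in our experiments the probe is the full model, trained without freezing parameters); see Voita & Titov (2020) for the exact procedure.

```python
# Sketch of prequential (online) MDL. The probe here is a logistic-regression
# stand-in; it assumes both labels appear in the first block.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prequential_mdl(X, y, fractions=(0.025, 0.05, 0.1, 0.2, 0.4, 0.8, 1.0)):
    cuts = [max(2, int(f * len(y))) for f in fractions]
    mdl = cuts[0] * np.log2(2)                       # uniform code for the first block
    for start, end in zip(cuts[:-1], cuts[1:]):
        probe = LogisticRegression(max_iter=1000).fit(X[:start], y[:start])
        probs = probe.predict_proba(X[start:end])
        # cost (in bits) of transmitting the next block under the online model
        mdl += -np.log2(probs[np.arange(end - start), y[start:end]] + 1e-12).sum()
    return float(mdl)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))
y = (X[:, 0] > 0).astype(int)                        # an easily extractable feature
print(round(prequential_mdl(X, y), 1))               # low codelength = easy to extract
```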
2.3 HYPOTHESIS
Stated using the above-defined terminology, our hypothesis is that a model’s use of the target feature is modulated by two factors: The relative extractability of the target feature t (compared to the spurious feature s), and the evidence from s-only examples provided by the training data. In particular, we expect that higher extractability of t (relative to s), measured by MDL(s)/MDL(t), will yield models that achieve better performance despite less training evidence.
3 EXPERIMENTS WITH SYNTHETIC DATA
Since it is often difficult to fully decouple the target feature from competing spurious features in practice, we first use synthetic data in order to test our hypothesis in a clean setting. We use a simple classifier with an embedding layer, a 1-layer LSTM, and an MLP with 1 hidden layer with tanh activation. We use a synthetic sentence classification task with k-length sequences of numbers as input and binary labels as output. We use a symbolic vocabulary V with the integers 0 . . . |V | − 1. We fix k = 10 and |V | = 50K. We begin with an initial training set of 200K, evenly split between examples from Sboth and Sneither. Then, varied across runs, we manipulate the evidence against the spurious feature (i.e., the s-only rate) by replacing a percentage p of the initial data with examples from Ss-only for p ∈ {0%, 0.1%, 1%, 5%, 10%, 20%, 50%}. Test and validation sets consist of 1,000 examples each from Sboth, Sneither, St-only, Ss-only. In all experiments, we set the spurious feature s to be the presence of the symbol 2. We consider several different target features t (Table 1), intended to vary in their extractability. Table 1 contains MDL metrics for each feature (computed on training sets of 200K, averaged over 3 random seeds). We see some gradation of feature extractability, but having more features with wider variation would help solidify our results.4
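A minimal sketch of this construction is shown below, instantiating t as the contains-1 feature and s as the presence of the symbol 2; the sampling scheme and sizes are illustrative and are not our released data-generation code.

```python
# Sketch: build a synthetic training set where t = "contains the symbol 1"
# (which alone determines the label) and s = "contains the symbol 2", with a
# chosen s-only example rate p. Illustrative only.
import random

K, V = 10, 50_000
rng = random.Random(0)

def sample(kind):
    seq = [rng.randrange(3, V) for _ in range(K)]     # 1 and 2 appear only if inserted
    i, j = rng.sample(range(K), 2)                    # two distinct positions
    if kind in ("both", "t-only"):
        seq[i] = 1                                    # target feature holds
    if kind in ("both", "s-only"):
        seq[j] = 2                                    # spurious feature holds
    return seq, int(kind in ("both", "t-only"))       # label is determined by t alone

def build_train(n, s_only_rate):
    n_s = int(s_only_rate * n)
    kinds = ["s-only"] * n_s + ["both"] * ((n - n_s) // 2)
    kinds += ["neither"] * (n - len(kinds))
    rng.shuffle(kinds)
    return [sample(k) for k in kinds]

train = build_train(n=2_000, s_only_rate=0.01)        # small n for the example
print(sum(y for _, y in train), "positive examples out of", len(train))
```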
Figure 2 shows model performance as a function of s-only rate for each of the four features described above. Here, performance is reported using error rate (lower is better) on each partition (Ss-only, St-only, Sboth, Sneither) separately. We are primarily interested in whether the relative extractability of the target feature (compared to the spurious feature) predicts model performance. We indeed see a fairly clear relationship between the relative extractability (MDL(s) / MDL(t)) and model performance, at every level of training evidence (s-only rate). For example, when t is no less extractable than s (i.e., contains-1), the model achieves zero error at an s-only rate of 0.001, meaning it learns that t alone predicts the label despite having only a handful of examples that support this inference. In contrast, when t is harder to extract than s (e.g., first-last), the model fails to make this inference, even when a large portion of training examples provide evidence supporting it.
4 EXPERIMENTS WITH NATURALISTIC DATA
We investigate whether the same trend holds for language models fine-tuned with naturalistic data, e.g., grammar-generated English sentences. To do this, we fine-tune models for the linguistic acceptability task, a simple sequence classification task as defined in Warstadt & Bowman (2019),
4Note that all models are ultimately able to learn to detect t (achieve high test accuracy) on the both partition, but not on the t-only partition.
in which the goal is to differentiate grammatical sentences from ungrammatical ones. We focus on acceptability judgments since formal linguistic theory guides how we define the target features, and recent work in computational linguistics shows that neural language models can be sensitive to spurious features in this task (Marvin & Linzen, 2018; Warstadt et al., 2020a).
4.1 DATA
We design a series of simple natural language grammars that generate a variety of feature pairs (s, t), which we expect will exhibit different levels of relative extractability (MDL(s) / MDL(t)). We focus on three syntactic phenomena (described below). In each case, we consider the target feature t to be whether a given instance of the phenomenon obeys the expected syntactic rules. We then introduce several spurious features s which we deliberately correlate with the positive label during fine-tuning. The Subject-Verb Agreement (SVA) construction requires detecting whether the verb agrees in number with its subject, e.g., “the girls are playing” is acceptable while “the girls is playing” is not. In general, recognizing agreement requires some representation of hierarchical syntax, since subjects may be separated from their verbs by arbitrarily long clauses. We introduce four spurious features: 1) lexical, grammatical sentences begin with specific lexical items (e.g., “often”); 2) length, grammatical sentences are longer; 3) recent-noun, verbs in grammatical sentences agree with the immediately preceding noun (in addition to their subject); and 4) plural, verbs in grammatical sentences are preceded by singular nouns as opposed to plural ones.
The Negative Polarity Items (NPI) construction requires detecting whether a negative polarity item (e.g., “any”, “ever”) is grammatical in a given context, e.g., “no girl ever played” is acceptable while “a girl ever played” is not. In general, NPIs are only licensed in contexts that fall within the scope of a downward entailing operator (such as negation). We again consider four types of spurious features: 1) lexical, in which grammatical sentences always include one of a set of lexical items (“no” and “not”); 2) length (as above); 3) plural, in which each noun in a grammatical sentence is singular, as opposed to plural; and 4) tense, in which grammatical sentences are in present tense.
Some verbs (e.g. “recognize”) require a direct object. However, in the right syntactic contexts (i.e., when in the correct syntactic relation with a wh-word), the object position can be empty, creating what is known as a “gap”. E.g., “I know what you recognized ” is acceptable while “I know that you recognized ” is not. The Filler-Gap Dependencies (GAP) construction requires detecting
whether a sentence containing a gap is grammatical. For our GAP tasks, we again consider four spurious features (lexical, length, plural, and tense), defined similarly to above.
The templates above (and slight variants) result in 20 distinct fine-tuning datasets, over which we perform our analyses (see Appendix for details). Table 2 shows several examples. For the purposes of this paper, we are interested only in the relative extractability of t vs. s given the pre-trained representation; we don’t intend to make general claims about the linguistic phenomena per se. Thus, we do not focus on the details of the features themselves, but rather consider each template as generating one data point, i.e., an (s, t) pair representing a particular level of relative extractability.
4.2 SETUP
We evaluate T5, BERT, RoBERTa, GPT-2 and an LSTM with GloVe embeddings (Raffel et al., 2020; Devlin et al., 2019; Liu et al., 2019b; Radford et al., 2019; Pennington et al., 2014).5 Both T5 and BERT learn to perform well over the whole test set, whereas the GloVe model struggles with many of the tasks. We expect that this is because contextualized pre-training encodes certain syntactic features which let the models better leverage small training sets (Warstadt & Bowman, 2020). Again, we begin with an initial training set of 2000 examples, evenly split between both and neither, and then introduce s-only examples at rates of 0%, 0.1%, 1%, 5%, 10%, 20%, and 50%, using three random seeds each. Test and validation sets consist of 1000 examples each from Sboth, Sneither, Ss-only. In the natural language setting, it is often difficult to generate t-only examples, and thus we cannot compute extractability of the target feature t by training a classifier to distinguish St-only from a random subset of Sneither, as we did in Section 3. Therefore, we estimate MDL by training a classifier to distinguish between examples from Ss-only and examples from Sboth. Using the simulated data from Section 3, we confirm that both methods (Ss-only vs. Sboth and St-only vs. Sneither) produce similar estimates of MDL(t) (see Appendix). Per model, we filter out feature pairs for which the model could not achieve at least 90% accuracy on each probing task in isolation.6
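As a rough sketch of the fine-tuning loop, the snippet below uses a Hugging Face sequence-classification head; our released implementation organizes training with PyTorch Lightning instead, and the model name, file names, field names ("sentence", "label"), and hyperparameter values here are placeholders rather than the settings reported in Table 6.

```python
# Sketch: fine-tune a pre-trained encoder on one acceptability dataset with a
# binary classification head. File/field names and hyperparameters are placeholders.
import datasets
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Each json file is assumed to hold {"sentence": ..., "label": 0 or 1} records
# produced by one template (both/neither examples plus s-only examples at some rate).
data = datasets.load_dataset("json", data_files={"train": "train.json", "validation": "val.json"})
data = data.map(lambda b: tokenizer(b["sentence"], truncation=True, padding="max_length",
                                    max_length=64), batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3, learning_rate=2e-5,
                         per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["validation"]).train()
```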
4.3 RESULTS
For each (s, t) feature pair, we plot the use of the spurious feature (s-only error) as a function of the evidence against the spurious feature seen in training (s-only example rate).7 We expect to see the same trend we observed in our synthetic data, i.e., the more extractable the target feature t is relative to the spurious feature s, the less evidence the model will require before preferring t over s. To quantify this trend, we compute correlations between 1) the relative extractability of t compared to s and 2) the test F-score averaged across all rates and partitions of the data8, capturing how readily the model uses (i.e., makes predictions consistent with the use of) the target feature.
5In pilot studies, we found that standard BOW and CNN-based models were unable to solve the tasks.
6This control does not impact results: Appendix A.1.
7See Appendix for both error and neither error; both are stable and low in general.
8Initially, we used a more complicated metric based on the s-only example rate required for the model to solve the test set. Both report similar trends and correlations. For posterity, we include details in Appendix A.2.
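The correlation analysis itself is compact: one point per (s, t) template, pairing relative extractability with the average test F-score. The sketch below uses Spearman rank correlation for illustration, and the arrays are placeholders rather than our measured values.

```python
# Sketch: correlate relative extractability (MDL(s) / MDL(t)) with average test
# F-score across (s, t) templates. The values below are placeholders.
import numpy as np
from scipy.stats import spearmanr

mdl_ratio = np.array([1.4, 0.9, 2.1, 0.6, 1.1, 0.8])         # one entry per template
avg_fscore = np.array([0.97, 0.81, 0.99, 0.70, 0.90, 0.78])  # mean F over rates/partitions

rho, pval = spearmanr(mdl_ratio, avg_fscore)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```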
Figure 3 shows these correlations and associated scatter plots. We can see that relative extractability is strongly correlated with average test F-score (Figure 3a), showing high correlations for both BERT (ρ = 0.79) and T5 (ρ = 0.57). That is, the more extractable t is relative to s, the less evidence the model requires before preferring t, performing better across all partitions. This relationship holds regardless of whether relative extractability is computed using a ratio of MDL scores or an absolute difference. We also see that, in most cases, the relative extractability explains the model’s behavior better than does the extractability of s or t alone. For GloVe there is little variation in model behavior: For most of the 11/20 pairs on which the model is able to learn the task, it requires an s-only example rate of 0.5. Thus, the correlations are weak, but qualitative results appear steady (Figure 8 in Appendix A), following the pattern that when s is easier to extract than t, more evidence is required to stop using s.
Figure 4 shows the performance curves for BERT and T5 (with the others in Appendix A), i.e., use of the spurious feature (s-only error) as a function of the evidence from s-only examples seen in training (s-only example rate). Each line corresponds to a different s, t feature pair, and each data point is the test performance on a dataset with a given s-only example rate (which varies along the x-axis). For pairs with high MDL ratios (i.e., when t is actually easier to extract than s), the model learns to solve the task “the right way” even when the training data provides no incentive to do so: That is, in such cases, the models’ decisions do not appear to depend on the spurious feature s even when s and the target feature t perfectly co-occur in the fine-tuning data.
Figure 4 shows that T5 (compared to BERT) requires more data to perform well. This may be because we fine-tuned T5 with a linear classification head, rather than the text-only output on which it was pre-trained. We made this decision 1) because we had trouble training T5 in the original manner, and 2) using a linear classification head was consistent with the other model architectures.
5 DISCUSSION
Our experimental results provide support for our hypothesis: The relative extractability of features given an input representation (as measured by information-theoretic probing techniques) is predictive of the decisions a trained model will make in practice. In particular, we see evidence that models will tend to use imperfect features that are more readily extractable over perfectly predictive features that are harder to extract. This insight is highly related to prior work which has shown, e.g., that neural networks learn “easy” examples before they learn “hard” examples (Mangalam & Prabhu, 2019). Our findings additionally connect to new probing techniques which have received significant attention in NLP but have yet to be connected to explanations of or predictions about state-of-the-art models’ decisions in practice.
Fine-tuning may not uncover new features. The models are capable of learning both the s and t features in isolation, so our experiments show that if the relative extractability is highly skewed, one feature may hide the other – a fine-tuned model may not use the harder-to-extract feature. This suggests a pattern that seems intuitive but is in fact non-trivial: If one classifier does not pick up on a feature readily enough, another classifier (or, rather, the same classifier trained with different data) may not be sensitive to that feature at all. This has ramifications for how we view fine-tuning, which is generally considered to be beneficial because it allows models to learn new, task-relevant features. Our findings suggest that if the needed feature is not already extractable enough after pre-training, fine-tuning may not have the desired effect.
Probing classifiers can be viewed as measures of a pre-trained representation’s inductive biases. Analysis with probing classifiers has primarily focused on whether important linguistic features can be decoded from representations at better-than-baseline rates, but there has been little insight about what it would mean for a representation’s encoding of a feature to be “sufficient”. Based on these experiments, we argue that a feature is “sufficiently” encoded if it is as available to the model as are surface features of the text. For example, if a fine-tuned model can access features about a word’s semantic role as easily as it can access features about that word’s lexical identity, the model may need little (or no) explicit training signal to prefer a decision rule based on the former structural feature. The desire for models with such behavior motivates the development of architectures with explicit inductive biases (e.g., TreeRNNs). Evidence that similar generalization behavior
can result from pre-trained representations has exciting implications for those interested in sample efficiency and cognitively-plausible language learning (Warstadt & Bowman, 2020; Linzen, 2020). We note that this work has not established that the relationship between extractability and feature use is causal. This could be explored using intermediate task training (Pruksachatkun et al., 2020) in order to influence the extractability of features prior to fine-tuning for the target task; e.g., Merchant et al. (2020) suggests fine-tuning on parsing might improve the extractability of syntactic features.
6 RELATED WORK
Significant prior work analyzes the representations and behavior of pre-trained LMs. Work using probing classifiers (Veldhoen et al., 2016; Adi et al., 2017; Conneau et al., 2018; Hupkes et al., 2018) suggests that such models capture a wide range of relevant linguistic phenomena (Hewitt & Manning, 2019; Bau et al., 2019; Dalvi et al., 2019; Tenney et al., 2019a;b). Similar techniques include attention maps/visualizations (Voita et al., 2019; Serrano & Smith, 2019), and relational similarity analyses (Chrupała & Alishahi, 2019). A parallel line of work uses challenge sets to understand model behavior in practice. Some works construct evaluation sets to analyze weaknesses in the decision procedures of neural NLP models (Jia & Liang, 2017b; Glockner et al., 2018; Dasgupta et al., 2018; Gururangan et al., 2018; Poliak et al., 2018b; Elkahky et al., 2018; Ettinger et al., 2016; Linzen et al., 2016; Isabelle et al., 2017; Naik et al., 2018; Jia & Liang, 2017a; Linzen et al., 2016; Goldberg, 2019, and others). Others use such datasets to improve models’ handling of linguistic features (Min et al., 2020; Poliak et al., 2018a; Liu et al., 2019a), or to mitigate biases (Zmigrod et al., 2019; Zhao et al., 2018; 2019; Hall Maudslay et al., 2019; Lu et al., 2020). Nie et al. (2020) and Kaushik et al. (2020) explore augmenting training sets with human-in-the-loop methods.
Our work is related to work on generalization of neural NLP models. Geiger et al. (2019) discusses ways in which evaluation tasks should be sensitive to models’ inductive biases and Warstadt & Bowman (2020) discusses the ability of language model pre-training to encode such inductive biases. Work on data augmentation (Elkahky et al., 2018; Min et al., 2020; Zmigrod et al., 2019) is relevant, as the approach relies on the assumption that altering the training data distribution (analogous to what we call s-only rate in our work) will improve model behavior in practice. Kodner & Gupta (2020); Jha et al. (2020) discuss concerns about ways in which such approaches can be counterproductive, by introducing new artifacts. Work on adversarial robustness (Ribeiro et al., 2018; Iyyer et al., 2018; Hsieh et al., 2019; Jia et al., 2019; Alzantot et al., 2018; Ilyas et al., 2019; Madry et al., 2017; Athalye et al., 2018) is also relevant, as it relates to the influence of dataset artifacts on models’ decisions. A still larger body of work studies feature representation and generalization in neural networks outside of NLP. Mangalam & Prabhu (2019) show that neural networks learn “easy” examples (as defined by shallow machine learning model performance) before they learn “hard” examples. Zhang et al. (2016) and Arpit et al. (2017) show that neural networks which are capable of memorizing noise nonetheless achieve good generalization performance, suggesting that such models might have an inherent preference to learn more general features. Finally, ongoing theoretical work characterizes the ability of over-parameterized networks to generalize in terms of complexity (Neyshabur et al., 2019) and implicit regularization (Blanc et al., 2020).
Concurrent work (Warstadt et al., 2020b) also investigates the inductive biases of large pre-trained models (RoBERTa); in particular, they ask when (at what amount of pre-training data) such models shift from a surface feature (what we call spurious features) to a linguistic feature (what we call a target feature). In our work, we focus on how to predict which of these two biases characterizes the model (via relative MDL).
7 CONCLUSION
This work bears on an open question in NLP, namely, the question of how models’ internal representations (as measured by probing classifiers) influence model behavior (as measured by challenge sets). We find that the feature extractability can be viewed as an inductive bias: the more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning. Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
ACKNOWLEDGEMENTS
We would like to thank Michael Littman for helpful suggestions on how to better present our findings and Ian Tenney for insightful comments on a previous draft of this work. We also want to thank our reviewers for their detailed and helpful comments. This work is supported by DARPA under grant number HR00111990064. This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University.
A ADDITIONAL RESULTS
Figures 6, 7, 8, 9, and 10 show additional results for all models over all partitions (both accuracy, neither accuracy, and F-score). These charts appear at the end of the Appendix.
Details on the MDL statistics are available in Table 3.
A.1 BEYOND ACCURACY?
For the transformer models, the models are able to solve all of the spurious and target features in isolation during probing for 18 of the 20 feature pairs. (They do solve the test set in all cases; it is just that two of the spurious features ended up being very difficult for the models.) During the reviews, we did not control for the cases where the model did not solve the probing task. These two extraneous points accentuate the line-plot curves, but do not change the character of the results (nor much adjust the correlations). In the current version of the paper, we control for accuracy by filtering out these cases. With or without this control, accuracy provides no predictive power about the inductive biases. We present the correlations without filtering for these cases for consistency with the reviews (Table 4 above); we nonetheless believe it is important to control for these cases because they could have acted as giveaways, where even accuracy might have worked as a predictor.
A.2 ALTERNATE METRIC: s-RATE★
We initially used a different metric when computing the correlations to compact the lineplots. Rather than using the average test performance, we looked at the evidence required for the model to solve the test set. Both of these metrics conceptually capture what we are interested in, but the new one (simply averaging test performance) is much easier to understand, and captures the performance across all partitions. Here we report the correlations with this evidence required metric instead, which we called s-rate★. Specifically, we defined it to be: s-rate★ is the lowest s-only example rate at which the fine-tuned model achieves essentially perfect performance (F-score > 0.99) (see Figure 5a). Intuitively, s-rate★ is the (observed) minimum amount of evidence from which the model can infer that t alone is predictive of the label. See Table 5b for the results.
(a) Evidence Required: s-rate★ (threshold: F-score > 0.99).

            Absolute              Relative (t to s)
         Target    Spurious      Ratio      Diff
BERT      0.68*    -0.63*       -0.79*     -0.81*
RoBERTa   0.04     -0.69*       -0.71*     -0.69*
T5        0.81*    -0.03        -0.55*     -0.65*
GPT2     -0.11     -0.24        -0.29      -0.32
GloVe     0.29     -0.38        -0.48      -0.48

(b) Using Evidence Required (s-rate★) instead of Average F-score. The correlations are negative instead of positive: As the extractability increases, less evidence is required for the model to perform well.
A.3 RESULTS USING AUC INSTEAD OF MDL
See Table 5. A metric similar to MDL that captures the same intuition is the area under the validation loss curve (AUC). This metric is closely related to online MDL in how it is computed.
B IMPLEMENTATION DETAILS & REPRODUCIBILITY
Our code is available at: https://github.com/cjlovering/predicting-inductive-biases.
There are two major parts to this project in terms of reproducibility: (1) the data and (2) the model implementations. We describe the templates for the data below in Appendix D – the full details are in the project source. For the transformer models, we use Hugging Face for the implementations and access to the pre-trained embeddings (Wolf et al., 2020). We use PyTorch Lightning to organize the training code (Falcon, 2019). We fix all hyperparameters, which are reported in Table 6.
We want to call attention to BERT requiring much less data than T5 to capture our target features. At face value, it seems that BERT requires much less data than T5; however, we are wary about making such strong claims. Something to consider here is that for T5 we used a linear classification head rather than formatting the task in text (which is how T5 is pre-trained). We made this decision (1) because we had trouble training T5 in this purely textual manner, and (2) because using a linear classification head over two classes is consistent with the other model architectures. GPT2 and RoBERTa performed on par with BERT, so the difference between the performance of BERT and T5 may be due to how we trained T5.
C MEASURING EXTRACTABILITY INDIRECTLY
We measure the MDL for t using examples from the both and s-only partitions. In the simulated setting, we can compare this approach with measuring the MDL directly (t-only vs. neither). See Table 7 for MDL results. The ordering of feature difficulty holds across the two methods.
D TEMPLATES FOR NATURALISTIC DATA
Each template corresponds to a combination of target features, grammars, and spurious features (the target and spurious features are discussed in Section 4.1). See Table 8 for a complete list of templates. See Table 10 for further details about the templates that are used for each of the target features. Complete details about implementation of these templates (and all data) will be released upon acceptance.
E WHY EXACTLY IS IT HARD TO GENERATE T-ONLY EXAMPLES?
Target features may be unavoidably linked to spurious ones. For example, for a Negative Polarity Item to be licensed (perhaps smoothing over some intricacies), the NPI (“any”, “all”, etc.) must occur in a downward-entailing context. These downward-entailing contexts are created by triggers, e.g., a negative word like “no” or “not”, or a quantifier like “some”. Linguists who study the problem have assembled a list of such triggers (see Hoeksema (2008)). Arguably, one cannot write down a correct example of NPI licensing that doesn’t contain one of these memorizable triggers. Thus, we cannot train or test models on correct examples of NPI usage while simultaneously preventing them from having access to trigger-specific features.
Similar to the NPI example, it is not possible (to our knowledge) to construct target-only examples for filler-gap, since the construction requires a wh-word and a syntactic gap; thus, we cannot create a positively labeled, grammatical sentence that exhibits a filler-gap dependency without these elements.
In summary, target-only examples may add new spurious features (as with NPI), or be impossible to construct because the presence of the target feature implies the presence of the spurious feature (as with filler gaps). Still, our setup permits the MDL to be computed directly with target-only examples, and so, in cases where it is feasible to create target-only examples (e.g. the Subject-Verb Agreement templates), it would have bolstered our argument to do so.
F MDL ISSUES: OVERFITTING IN THE SYNTHETIC EXPERIMENTS
We found that the MDL exceeds the uniform code length in some of the synthetic experiments. We found that this occurs because the model overfits on the small early-block sizes. See Figure 11. | 1. What is the main contribution of the paper in bridging the gap between model interpretation and spurious features?
2. What are the strengths of the paper, particularly in its premise and presentation of results?
3. What are the weaknesses of the paper regarding its assumptions and scalability to real-world datasets?
4. Do you have any questions regarding the methodology and findings of the paper, especially in comparison to prior works?
5. Are there any minor comments or suggestions for improvement in the paper? | Review | Review
The paper aims to bridge the gap between model interpretation using probing and model's use of spurious features. They show that the findings of MDL with respect to a feature correlate with the extractability of the feature, given the evidence of representing the feature is available in the training data. The results are presented using both synthetic and natural language data.
I really like the premise of the paper, which is connecting the research on the linguistic learning of a model with the presence of important and spurious features in the data.
One issue I have with the work is the simplistic assumptions that are likely to be different in the real-world data. Real-world data may have various spurious features and it is possible that not one feature alone is playing a role in pushing the model to rely on spurious features. It can be a combination of spurious features plus the relative presence of important features. It is hard to imagine how this method will scale to real-world datasets. I would like the authors to comment on it.
Moreover, the findings are quite expected. In general, the probing methods including MDL were mainly aimed at analyzing the linguistic learning of the representations. In that case, MDL is scoring the representations with respect to various linguistic properties. Here, the authors are using MDL to look at how input features are represented in the model. Statistically, MDL is likely to look at the same things as the model is looking at since both of them are based on the same training data and input features. Please comment on this, in case I misunderstood the point.
Minor comments:
What is the reason for the low performance when using GloVe?
ICLR | Title
Predicting Inductive Biases of Pre-Trained Models
Abstract
Most current NLP systems are based on a pre-train-then-fine-tune paradigm, in which a large neural network is first trained in a self-supervised way designed to encourage the network to extract broadly-useful linguistic features, and then finetuned for a specific task of interest. Recent work attempts to understand why this recipe works and explain when it fails. Currently, such analyses have produced two sets of apparently-contradictory results. Work that analyzes the representations that result from pre-training (via “probing classifiers”) finds evidence that rich features of linguistic structure can be decoded with high accuracy, but work that analyzes model behavior after fine-tuning (via “challenge sets”) indicates that decisions are often not based on such structure but rather on spurious heuristics specific to the training set. In this work, we test the hypothesis that the extent to which a feature influences a model’s decisions can be predicted using a combination of two factors: The feature’s extractability after pre-training (measured using information-theoretic probing techniques), and the evidence available during finetuning (defined as the feature’s co-occurrence rate with the label). In experiments with both synthetic and naturalistic data, we find strong evidence (statistically significant correlations) supporting this hypothesis.
1 INTRODUCTION
Large pre-trained language models (LMs) (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020) have demonstrated impressive empirical success on a range of benchmark NLP tasks. However, analyses have shown that such models are easily fooled when tested on distributions that differ from those they were trained on, suggesting they are often “right for the wrong reasons” (McCoy et al., 2019). Recent research which attempts to understand why such models behave in this way has primarily made use of two analysis techniques: probing classifiers (Adi et al., 2017; Hupkes et al., 2018), which measure whether or not a given feature is encoded by a representation, and challenge sets (Cooper et al., 1996; Linzen et al., 2016; Rudinger et al., 2018), which measure whether model behavior in practice is consistent with use of a given feature. The results obtained via these two techniques currently suggest different conclusions about how well pre-trained representations encode language. Work based on probing classifiers has consistently found evidence that models contain rich information about syntactic structure (Hewitt & Manning, 2019; Bau et al., 2019; Tenney et al., 2019a), while work using challenge sets has frequently revealed that models built on top of these representations do not behave as though they have access to such rich features, rather they fail in trivial ways (Dasgupta et al., 2018; Glockner et al., 2018; Naik et al., 2018).
In this work, we attempt to link these two contrasting views of feature representations. We assume the standard recipe in NLP, in which linguistic representations are first derived from large-scale selfsupervised pre-training intended to encode broadly-useful linguistic features, and then are adapted for a task of interest via transfer learning, or fine-tuning, on a task-specific dataset. We test the
hypothesis that the extent to which a fine-tuned model uses a given feature can be explained as a function of two metrics: The extractability of the feature after pre-training (as measured by probing classifiers) and the evidence available during fine-tuning (defined as the rate of co-occurrence with the label). We first show results on a synthetic task, and second using state-of-the-art pre-trained LMs on language data. Our results suggest that probing classifiers can be viewed as a measure of the pre-trained representation’s inductive biases: The more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning.
Contribution. This work establishes a relationship between two widely-used techniques for analyzing LMs. Currently, the question of how models’ internal representations (measured by probing classifiers) influence model behavior (measured by challenge sets) remains open (Belinkov & Glass, 2019; Belinkov et al., 2020). Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
2 SETUP AND TERMINOLOGY
2.1 FORMULATION
Our motivation comes from McCoy et al. (2019), which demonstrated that, when fine-tuned on a natural language inference task (Williams et al., 2018, MNLI), a model based on a state-of-the-art pre-trained LM (Devlin et al., 2019, BERT) categorically fails on test examples which defy the expectation of a “lexical overlap heuristic”. For example, the model assumes that the sentence “the lawyer followed the judge” entails “the judge followed the lawyer” purely because all the words in the latter appear in the former. While this heuristic is statistically favorable given the model’s training data, it is not infallible. Specifically, McCoy et al. (2019) report that 90% of the training examples containing lexical overlap had the label “entailment”, but the remaining 10% did not. Moreover, the results of recent studies based on probing classifiers suggest that more robust features are extractable with high reliability from BERT representations. For example, given the example “the lawyer followed the judge”/“the judge followed the lawyer”, if the model can represent that “lawyer” is the agent of “follow” in the first sentence, but is the patient in the second, then the model should conclude that the sentences have different meanings. Such semantic role information can be recovered at > 90% accuracy from BERT embeddings (Tenney et al., 2019b). Thus, the question is: Why would a model prefer a weak feature over a stronger one, if both features are extractable from the model’s representations and justified by the model’s training data?
Abstracting over details, we distill the basic NLP task setting described above into the following, to be formalized in the Section 2.2. We assume a binary sequence classification task where a target feature t perfectly predicts the label (e.g., the label is 1 iff t holds). Here, t represents features which actually determine the label by definition, e.g., whether one sentence semantically entails another. Additionally, there exists a spurious feature s that frequently co-occurs with t in training but is not guaranteed to generalize outside of the training set. Here, s (often called a “heuristic” or “bias” elsewhere in the literature) corresponds to features like lexical overlap, which are predictive of the label in some datasets but are not guaranteed to generalize.
Assumptions. In this work, we assume there is a single t and a single s; in practice there may be many s features. Still, our definition of a feature accommodates multiple spurious or target features. In fact, some of our spurious features already encompass multiple features: the lexical feature, for example, is a combination of several individual-word features because it holds if one of a set of words is in the sentence. This type of spurious feature is common in real datasets: E.g., the hypothesis-only baseline in NLI is a disjunction of lexical features (with semantically unrelated words like “no”, “sleeping”, etc.) (Poliak et al., 2018b; Gururangan et al., 2018).
We assume that s and t frequently co-occur, but that only s occurs in isolation. This assumption reflects realistic NLP task settings since datasets always contain some heuristics, e.g., lexical cues, cultural biases, or artifacts from crowdsourcing (Gururangan et al., 2018). Thus, our experiments focus on manipulating the occurrence of s alone, but not t alone: This means giving the model evidence against relying on s. This is in line with prior applied work that attempts to influence model behavior by increasing the evidence against s during training (Elkahky et al., 2018; Zmigrod et al., 2019; Min et al., 2020).
Published as a conference paper at ICLR 2021
2.2 DEFINITIONS
Let X be the set of all sentences and S be the space of all sentence-label pairs (x, y) ∈ X × {0, 1}. We use D ⊂ S to denote a particular training sample drawn from S. We define two types of binary features: target (t) and spurious (s). Each is a function from sentences x ∈ X to a binary label {0, 1} that indicates whether the feature holds.
Target and spurious features. The target feature t is such that there exists some function f : {0, 1} → {0, 1} such that ∀(x, y) ∈ S, f(t(x)) = y. In other words, the label can always be perfectly predicted given the value of t.1 A feature s is spurious if it is not a target feature.
Partitions of S. To facilitate analysis, we partition S in four regions (Figure 1). We define Ss-only to be the set of examples in which the spurious feature occurs alone (without the target). Similarly, St-only is the set of examples in which the target occurs without the spurious feature. Sboth and Sneither are analogous. For clarity, we sometimes drop the S∗ notation (e.g., s-only in place of Ss-only).
[Figure 1: panel (a) depicts the space S partitioned into the regions t-only, both, s-only, and neither; panel (b) depicts a training sample D drawn from S.]

Sboth = {(x, y) | t(x) = 1 ∧ s(x) = 1}
Sneither = {(x, y) | t(x) = 0 ∧ s(x) = 0}
St-only = {(x, y) | t(x) = 1 ∧ s(x) = 0}
Ss-only = {(x, y) | t(x) = 0 ∧ s(x) = 1}
Figure 1: We partition datasets into four sections, defined by the features (spurious and/or target) that hold. We sample training datasets D, which provide varying amounts of evidence against the spurious feature, in the form of s-only examples. In the illustration above, the s-only rate is 210 = 0.2, i.e., 20% of examples in D provide evidence that s alone should not be used to predict y.
Evidence from Spurious-Only Examples. We are interested in spurious features which are highly correlated with the target during training. Given a training sample D and features s and t, we define the s-only example rate as the evidence against the use of s as a predictor of y. Concretely, s-only rate = |Ds-only| / |D|, the proportion of training examples in which s occurs without t (and y = 0).
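The partitions and the s-only rate can be computed mechanically once s and t are fixed. Below is a minimal sketch with toy feature functions standing in for the real features defined in Sections 3 and 4.

```python
# Minimal sketch: partition examples by a spurious feature s and a target feature t,
# then compute the s-only rate of a training sample. The feature functions are toys.
def partition(x, s, t):
    if t(x) and s(x):
        return "both"
    if t(x):
        return "t-only"
    if s(x):
        return "s-only"
    return "neither"

def s_only_rate(dataset, s, t):
    # dataset: list of (x, y) pairs; the rate is |D_s-only| / |D|
    return sum(partition(x, s, t) == "s-only" for x, _ in dataset) / len(dataset)

s = lambda seq: 2 in seq  # toy spurious feature: contains the symbol 2
t = lambda seq: 1 in seq  # toy target feature: contains the symbol 1
D = [([1, 2, 3], 1), ([4, 5, 6], 0), ([2, 7, 8], 0), ([1, 9, 4], 1), ([2, 4, 9], 0)]
print(s_only_rate(D, s, t))  # 0.4
```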
Use of Spurious Feature. If a model has falsely learned that the spurious feature s alone is predictive of the label, it will have a high error rate when classifying examples for which s holds but t does not. We define the s-only error to be the classifier’s error on examples from Ss-only. When relevant, t-only error, both error, and neither error are defined analogously. In this work, “feature use” is a model’s predictions consistency with that feature; we are not making a causal argument.
Extractability of a Feature. We want to compare features in terms of how extractable they are given a representation. For example, given a sentence embedding, it may be possible to predict multiple features with high accuracy, e.g., whether the word “dog” occurs, and also whether the word “dog” occurs as the subject of the verb “run”. However, detecting the former will no doubt be an easier task than detecting the latter. We use the prequential minimum description length (MDL; Rissanen, 1978), first used by Voita & Titov (2020) for probing, to quantify this intuitive difference.2 MDL is an information-theoretic metric that measures how accurately a feature can be decoded and the amount of effort required to decode it. Formally, MDL measures the number of bits required to communicate the labels given the representations. Conceptually, MDL can be understood as a measure of the area under the loss curve: If a feature is highly extractable, a model trained to detect that feature will converge quickly to high accuracy, resulting in a low MDL. Computing MDL requires repeatedly training a model over a dataset labeled by the feature in question. To compute MDL(s), we train a classifier (without freezing any parameters) to differentiate Ss-only vs. Sneither, and similarly compute MDL(t). See Voita & Titov (2020) for additional details on MDL.3
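A schematic sketch of the prequential (online-code) estimate described above is given below, assuming a logistic-regression probe and a simplified block schedule; the actual probes and schedules follow Voita & Titov (2020) and the released code.

```python
# Schematic prequential (online-code) MDL: transmit the first block with a uniform
# code, then repeatedly train a probe on everything transmitted so far and pay the
# cross-entropy (in bits) of the next block. Probe and block schedule are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prequential_mdl(X, y, fractions=(0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.0)):
    n = len(y)
    ends = [max(2, int(f * n)) for f in fractions]
    mdl = ends[0] * np.log2(2)  # first block: uniform code over the 2 labels
    for prev_end, end in zip(ends[:-1], ends[1:]):
        probe = LogisticRegression(max_iter=1000).fit(X[:prev_end], y[:prev_end])
        proba = probe.predict_proba(X[prev_end:end])
        true = y[prev_end:end]
        mdl += -np.log2(np.clip(proba[np.arange(len(true)), true], 1e-12, 1.0)).sum()
    return mdl  # total codelength in bits; lower means more extractable

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))
y = (X[:, 0] > 0).astype(int)  # an easily extractable toy feature -> short codelength
print(prequential_mdl(X, y))
```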
1Without loss of generality, we define t in our datasets s.t. t(x) = y,∀x, y ∈ S. We do this to iron out the case where t outputs the opposite value of y.
2We observe similar overall trends when using an alternative metric based on validation loss (Appendix A.3).
3Note that our reported MDL is higher in some cases than that given by the uniform code (the number of sentences that are being encoded). The MDL is computed as a sum of the costs of transmitting successively longer blocks, using classifiers that are trained on previously transmitted data. The high MDL’s are a result of overfitting by classifiers that are trained on limited data–and therefore, the classifiers have worse compression performance than the uniform baseline.
2.3 HYPOTHESIS
Stated using the above-defined terminology, our hypothesis is that a model’s use of the target feature is modulated by two factors: The relative extractability of the target feature t (compared to the spurious feature s), and the evidence from s-only examples provided by the training data. In particular, we expect that higher extractability of t (relative to s), measured by MDL(s)/MDL(t), will yield models that achieve better performance despite less training evidence.
3 EXPERIMENTS WITH SYNTHETIC DATA
Since it is often difficult to fully decouple the target feature from competing spurious features in practice, we first use synthetic data in order to test our hypothesis in a clean setting. We use a simple classifier with an embedding layer, a 1-layer LSTM, and an MLP with 1 hidden layer with tanh activation. We use a synthetic sentence classification task with k-length sequences of numbers as input and binary labels as output. We use a symbolic vocabulary V with the integers 0 . . . |V | − 1. We fix k = 10 and |V | = 50K. We begin with an initial training set of 200K, evenly split between examples from Sboth and Sneither. Then, varied across runs, we manipulate the evidence against the spurious feature (i.e., the s-only rate) by replacing a percentage p of the initial data with examples from Ss-only for p ∈ {0%, 0.1%, 1%, 5%, 10%, 20%, 50%}. Test and validation sets consist of 1,000 examples each from Sboth, Sneither, St-only, Ss-only. In all experiments, we set the spurious feature s to be the presence of the symbol 2. We consider several different target features t (Table 1), intended to vary in their extractability. Table 1 contains MDL metrics for each feature (computed on training sets of 200K, averaged over 3 random seeds). We see some gradation of feature extractability, but having more features with wider variation would help solidify our results.4
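A minimal sketch of this synthetic setup is shown below, assuming that contains-1 denotes the presence of the symbol 1 and using placeholder embedding and hidden sizes; the exact target features and hyperparameters are those in Table 1 and Table 6.

```python
# Minimal sketch of the synthetic classifier: an embedding layer, a 1-layer LSTM, and
# an MLP head with one tanh hidden layer, applied to length-10 symbol sequences.
import torch
import torch.nn as nn

VOCAB, K = 50_000, 10

def contains(seq, symbol):           # toy feature: does the sequence contain `symbol`?
    return int(symbol in seq)

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=VOCAB, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=1, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, x):                   # x: (batch, K) integer symbols
        _, (h, _) = self.lstm(self.emb(x))  # h: (1, batch, hidden)
        return self.mlp(h[-1])              # logits over {0, 1}

x = torch.randint(0, VOCAB, (4, K))
y = torch.tensor([contains(seq.tolist(), 1) for seq in x])  # labels under contains-1
print(LSTMClassifier()(x).shape, y)
```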
Figure 2 shows model performance as a function of s-only rate for each of the four features described above. Here, performance is reported using error rate (lower is better) on each partition (Ss-only, St-only, Sboth, Sneither) separately. We are primarily interested in whether the relative extractability of the target feature (compared to the spurious feature) predicts model performance. We indeed see a fairly clear relationship between the relative extractability (MDL(s) / MDL(t)) and model performance, at every level of training evidence (s-only rate). For example, when t is no less extractable than s (i.e., contains-1), the model achieves zero error at an s-only rate of 0.001, meaning it learns that t alone predicts the label despite having only a handful of examples that support this inference. In contrast, when t is harder to extract than s (e.g., first-last), the model fails to make this inference, even when a large portion of training examples provide evidence supporting it.
4 EXPERIMENTS WITH NATURALISTIC DATA
We investigate whether the same trend holds for language models fine-tuned with naturalistic data, e.g., grammar-generated English sentences. To do this, we fine-tune models for the linguistic acceptability task, a simple sequence classification task as defined in Warstadt & Bowman (2019), in which the goal is to differentiate grammatical sentences from ungrammatical ones. We focus on acceptability judgments since formal linguistic theory guides how we define the target features, and recent work in computational linguistics shows that neural language models can be sensitive to spurious features in this task (Marvin & Linzen, 2018; Warstadt et al., 2020a).
4Note, all models are ultimately able to learn to detect t (achieve high test accuracy) on the both partition, but not on the t-only partition.
4.1 DATA
We design a series of simple natural language grammars that generate a variety of feature pairs (s, t), which we expect will exhibit different levels of relative extractability (MDL(s) / MDL(t)). We focus on three syntactic phenomena (described below). In each case, we consider the target feature t to be whether a given instance of the phenomenon obeys the expected syntactic rules. We then introduce several spurious features s which we deliberately correlate with the positive label during fine-tuning. The Subject-Verb Agreement (SVA) construction requires detecting whether the verb agrees in number with its subject, e.g., “the girls are playing” is acceptable while “the girls is playing” is not. In general, recognizing agreement requires some representation of hierarchical syntax, since subjects may be separated from their verbs by arbitrarily long clauses. We introduce four spurious features: 1) lexical, grammatical sentences begin with specific lexical items (e.g., “often”); 2) length, grammatical sentences are longer; 3) recent-noun, verbs in grammatical sentences agree with the immediately preceding noun (in addition to their subject); and 4) plural, verbs in grammatical sentences are preceded by singular nouns as opposed to plural ones.
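The sketch below is a toy illustration of an SVA template with the lexical spurious feature, using a placeholder vocabulary; the actual grammars and templates are those listed in Appendix D and the released code.

```python
# Toy SVA template (not the actual grammar from Table 8): grammatical sentences have
# subject-verb agreement; the lexical spurious feature makes sentences start with "often".
import random

NOUNS = {"sg": ["girl", "boy"], "pl": ["girls", "boys"]}
VERBS = {"sg": "is playing", "pl": "are playing"}

def sva_example(grammatical, with_lexical_cue):
    num = random.choice(["sg", "pl"])
    subj = random.choice(NOUNS[num])
    verb = VERBS[num] if grammatical else VERBS["pl" if num == "sg" else "sg"]
    prefix = "often " if with_lexical_cue else ""
    return f"{prefix}the {subj} {verb}", int(grammatical)

random.seed(0)
print(sva_example(grammatical=True, with_lexical_cue=True))    # a "both" example
print(sva_example(grammatical=False, with_lexical_cue=True))   # an "s-only" example
print(sva_example(grammatical=False, with_lexical_cue=False))  # a "neither" example
```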
The Negative Polarity Items (NPI) construction requires detecting whether a negative polarity item (e.g., “any”, “ever”) is grammatical in a given context, e.g., “no girl ever played” is acceptable while “a girl ever played” is not. In general, NPIs are only licensed in contexts that fall within the scope of a downward entailing operator (such as negation). We again consider four types of spurious features: 1) lexical, in which grammatical sentences always include one of a set of lexical items (“no” and “not”); 2) length (as above); 3) plural, in which each noun in a grammatical sentence is singular, as opposed to plural; and 4) tense, in which grammatical sentences are in present tense.
Some verbs (e.g. “recognize”) require a direct object. However, in the right syntactic contexts (i.e., when in the correct syntactic relation with a wh-word), the object position can be empty, creating what is known as a “gap”. E.g., “I know what you recognized ” is acceptable while “I know that you recognized ” is not. The Filler-Gap Dependencies (GAP) construction requires detecting
whether a sentence containing a gap is grammatical. For our GAP tasks, we again consider four spurious features (lexical, length, plural, and tense), defined similarly to above.
The templates above (and slight variants) result in 20 distinct fine-tuning datasets, over which we perform our analyses (see Appendix for details). Table 2 shows several examples. For the purposes of this paper, we are interested only in the relative extractability of t vs. s given the pre-trained representation; we don’t intend to make general claims about the linguistic phenomena per se. Thus, we do not focus on the details of the features themselves, but rather consider each template as generating one data point, i.e., an (s, t) pair representing a particular level of relative extractability.
4.2 SETUP
We evaluate T5, BERT, RoBERTa, GPT-2 and an LSTM with GloVe embeddings (Raffel et al., 2020; Devlin et al., 2019; Liu et al., 2019b; Radford et al., 2019; Pennington et al., 2014).5 Both T5 and BERT learn to perform well over the whole test set, whereas the GloVe model struggles with many of the tasks. We expect that this is because contextualized pre-training encodes certain syntactic features which let the models better leverage small training sets (Warstadt & Bowman, 2020). Again, we begin with an initial training set of 2000 examples, evenly split between both and neither, and then introduce s-only examples at rates of 0%, 0.1%, 1%, 5%, 10%, 20%, and 50%, using three random seeds each. Test and validation sets consist of 1000 examples each from Sboth, Sneither, Ss-only. In the natural language setting, it is often difficult to generate t-only examples, and thus we cannot compute extractability of the target feature t by training a classifier to distinguish St-only from a random subset of Sneither, as we did in Section 3. Therefore, we estimate MDL by training a classifier to distinguish between examples from Ss-only and examples from Sboth. Using the simulated data from Section 3, we confirm that both methods (Ss-only vs. Sboth and St-only vs. Sneither) produce similar estimates of MDL(t) (see Appendix). Per model, we filter out feature pairs for which the model could not achieve at least 90% accuracy on each probing task in isolation.6
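A minimal sketch of how a fine-tuning set with a given s-only example rate can be assembled from pre-generated partition pools is shown below; the pool construction and sizes are assumptions, and the released code contains the actual pipeline.

```python
# Minimal sketch: start from an even both/neither split of the requested size and
# replace a fraction p of it with s-only examples.
import random

def make_training_set(both, neither, s_only, size, p, seed=0):
    rng = random.Random(seed)
    n_s_only = int(p * size)
    n_half = (size - n_s_only) // 2
    data = (rng.sample(both, n_half)
            + rng.sample(neither, n_half)
            + rng.sample(s_only, n_s_only))
    rng.shuffle(data)
    return data

# toy pools of (sentence, label) pairs; p ranges over {0, 0.001, 0.01, 0.05, 0.1, 0.2, 0.5}
both = [(f"both-{i}", 1) for i in range(2000)]
neither = [(f"neither-{i}", 0) for i in range(2000)]
s_only = [(f"s-only-{i}", 0) for i in range(2000)]
train = make_training_set(both, neither, s_only, size=2000, p=0.1)
print(len(train), sum(label for _, label in train))  # 2000 examples, 900 positive
```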
4.3 RESULTS
For each (s, t) feature pair, we plot the use of the spurious feature (s-only error) as a function of the evidence against the spurious feature seen in training (s-only example rate).7 We expect to see the same trend we observed in our synthetic data, i.e., the more extractable the target feature t is relative to the spurious feature s, the less evidence the model will require before preferring t over s. To quantify this trend, we compute correlations between 1) the relative extractability of t compared to s and 2) the test F-score averaged across all rates and partitions of the data8, capturing how readily the model uses (i.e., makes predictions consistent with the use of) the target feature.
5In pilot studies, we found that standard BOW and CNN-based models were unable to solve the tasks. 6This control does not impact results: Appendix A.1. 7See Appendix for both error and neither error; both are stable and low in general. 8Initially, we used a more complicated metric based on the s-only example rate required for the model to solve
the test set. Both report similar trends and correlations. For posterity, we include details in the Appendix A.2.
Figure 3 shows these correlations and associated scatter plots. We can see that relative extractability is strongly correlated with average test F-score (Figure 3a), showing high correlations for both BERT (ρ = 0.79) and T5 (ρ = 0.57). That is, the more extractable t is relative to s, the less evidence the model requires before preferring t, performing better across all partitions. This relationship holds regardless of whether relative extractability is computed using a ratio of MDL scores or an absolute difference. We also see that, in most cases, the relative extractability explains the model’s behavior better than does the extractability of s or t alone. For GloVe there is little variation in model behavior: For most of the 11/20 pairs on which the model is able to learn the task, it requires an s-only example rate of 0.5. Thus, the correlations are weak, but qualitative results appear steady (Figure 8 in Appendix A), following the pattern that when s is easier to extract than t, more evidence is required to stop using s.
Figure 4 shows the performance curves for BERT and T5 (with the others in Appendix A), i.e., use of the spurious feature (s-only error) as a function of the evidence from s-only examples seen in training (s-only example rate). Each line corresponds to a different s, t feature pair, and each data point is the test performance on a dataset with a given s-only example rate (which varies along the x-axis). For pairs with high MDL ratios (i.e., when t is actually easier to extract than s), the model learns to solve the task “the right way” even when the training data provides no incentive to do so: That is, in such cases, the models’ decisions do not appear to depend on the spurious feature s even when s and the target feature t perfectly co-occur in the fine-tuning data.
Figure 4 shows that T5 (compared to BERT) requires more data to perform well. This may be because we fine-tuned T5 with a linear classification head, rather than the text-only output on which it was pre-trained. We made this decision 1) because we had trouble training T5 in the original manner, and 2) using a linear classification head was consistent with the other model architectures.
5 DISCUSSION
Our experimental results provide support for our hypothesis: The relative extractability of features given an input representation (as measured by information-theoretic probing techniques) is predictive of the decisions a trained model will make in practice. In particular, we see evidence that models will tend to use imperfect features that are more readily extractable over perfectly predictive features that are harder to extract. This insight is highly related to prior work which has shown, e.g., that neural networks learn “easy” examples before they learn “hard” examples (Mangalam & Prabhu, 2019). Our findings additionally connect to new probing techniques which have received significant attention in NLP but have yet to be connected to explanations of or predictions about state-of-the-art models’ decisions in practice.
Fine-tuning may not uncover new features. The models are capable of learning both the s and t features in isolation, so our experiments show that if the relative extractibility is highly skewed, one feature may hide the other – a fine-tuned model may not use the harder-to-extract feature. This suggests a pattern that seems intuitive but is in fact non-trivial: If one classifier does not pick up on a feature readily enough, another classifier (or, rather, the same classifier trained with different data) may not be sensitive to that feature at all. This has ramifications for how we view fine-tuning, which is generally considered to be beneficial because it allows models to learn new, task-relevant features. Our findings suggest that if the needed feature is not already extractable-enough after pretraining, fine-tuning may not have the desired effect.
Probing classifiers can be viewed as measures of a pre-trained representation’s inductive biases. Analysis with probing classifiers has primarily focused on whether important linguistic features can be decoded from representations at better-than-baseline rates, but there has been little insight about what it would mean for a representations’ encoding of a feature to be “sufficient”. Based on these experiments, we argue that a feature is “sufficiently” encoded if it is as available to the model as are surface features of the text. For example, if a fine-tuned model can access features about a word’s semantic role as easily as it can access features about that word’s lexical identity, the model may need little (or no) explicit training signal to prefer a decision rule based on the former structural feature. The desire for models with such behavior motivates the development of architectures with explicit inductive biases (e.g., TreeRNNs). Evidence that similar generalization behavior
can result from pre-trained representations has exciting implications for those interested in sample efficiency and cognitively-plausible language learning (Warstadt & Bowman, 2020; Linzen, 2020). We note that this work has not established that the relationship between extractability and feature use is causal. This could be explored using intermediate task training (Pruksachatkun et al., 2020) in order to influence the extractability of features prior to fine-tuning for the target task; e.g., Merchant et al. (2020) suggests fine-tuning on parsing might improve the extractability of syntactic features.
6 RELATED WORK
Significant prior work analyzes the representations and behavior of pre-trained LMs. Work using probing classifiers (Veldhoen et al., 2016; Adi et al., 2017; Conneau et al., 2018; Hupkes et al., 2018) suggests that such models capture a wide range of relevant linguistic phenomena (Hewitt & Manning, 2019; Bau et al., 2019; Dalvi et al., 2019; Tenney et al., 2019a;b). Similar techniques include attention maps/visualizations (Voita et al., 2019; Serrano & Smith, 2019), and relational similarity analyses (Chrupała & Alishahi, 2019). A parallel line of work uses challenge sets to understand model behavior in practice. Some works construct evaluation sets to analyze weaknesses in the decision procedures of neural NLP models (Jia & Liang, 2017b; Glockner et al., 2018; Dasgupta et al., 2018; Gururangan et al., 2018; Poliak et al., 2018b; Elkahky et al., 2018; Ettinger et al., 2016; Linzen et al., 2016; Isabelle et al., 2017; Naik et al., 2018; Jia & Liang, 2017a; Linzen et al., 2016; Goldberg, 2019, and others). Others use such datasets to improve models’ handling of linguistic features (Min et al., 2020; Poliak et al., 2018a; Liu et al., 2019a), or to mitigate biases (Zmigrod et al., 2019; Zhao et al., 2018; 2019; Hall Maudslay et al., 2019; Lu et al., 2020). Nie et al. (2020) and Kaushik et al. (2020) explore augmenting training sets with human-in-the-loop methods.
Our work is related to work on generalization of neural NLP models. Geiger et al. (2019) discusses ways in which evaluation tasks should be sensitive to models’ inductive biases and Warstadt & Bowman (2020) discusses the ability of language model pre-training to encode such inductive biases. Work on data augmentation (Elkahky et al., 2018; Min et al., 2020; Zmigrod et al., 2019) is relevant, as the approach relies on the assumption that altering the training data distribution (analogous to what we call s-only rate in our work) will improve model behavior in practice. Kodner & Gupta (2020); Jha et al. (2020) discuss concerns about ways in which such approaches can be counterproductive, by introducing new artifacts. Work on adversarial robustness (Ribeiro et al., 2018; Iyyer et al., 2018; Hsieh et al., 2019; Jia et al., 2019; Alzantot et al., 2018; Hsieh et al., 2019; Ilyas et al., 2019; Madry et al., 2017; Athalye et al., 2018) is also relevant, as it relates to the influence of dataset artifacts on models’ decisions. A still larger body of work studies feature representation and generalization in neural networks outside of NLP. Mangalam & Prabhu (2019) show that neural networks learn “easy” examples (as defined by shallow machine learning model performance) before they learn “hard” examples. Zhang et al. (2016) and Arpit et al. (2017) show that neural networks which are capable of memorizing noise nonetheless achieve good generalization performance, suggesting that such models might have an inherent preference to learn more general features. Finally, ongoing theoretical work characterizes the ability of over-parameterized networks to generalize in terms of complexity (Neyshabur et al., 2019) and implicit regularization (Blanc et al., 2020).
Concurrent work (Warstadt et al., 2020b) also investigates the inductive biases of large pre-trained models (RoBERTa), in particular, they ask when (at what amount of pre-training data) such models shift from a surface feature (what we call spurious features) to a linguistic feature (what we call a target feature). In our work, we focus on how to predict which of these two biases characterize the model (via relative MDL).
7 CONCLUSION
This work bears on an open question in NLP, namely, the question of how models’ internal representations (as measured by probing classifiers) influence model behavior (as measured by challenge sets). We find that the feature extractability can be viewed as an inductive bias: the more extractable a feature is after pre-training, the less statistical evidence is required in order for the model to adopt the feature during fine-tuning. Understanding the connection between these two measurement techniques can enable more principled evaluation of and control over neural NLP models.
ACKNOWLEDGEMENTS
We would like to thank Michael Littman for helpful suggestions on how to better present our findings and Ian Tenney for insightful comments on a previous draft of this work. We also want to thank our reviewers for their detailed and helpful comments. This work is supported by DARPA under grant number HR00111990064. This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University.
A ADDITIONAL RESULTS
Figures 6, 7, 8, 9, and 10 show additional results for all models over all partitions (both accuracy, neither accuracy, and F-score). These charts appear at the end of the Appendix.
Details on the MDL statistics are available in Table 3.
A.1 BEYOND ACCURACY?
For the transformer models, for 18/20 feature pairs, the models are able to solve the probing tasks for all the spurious and target features in isolation. (They do solve the test set in all cases; it is just that two of the spurious features ended up being very difficult for the models.) During the reviews, we did not control for the cases where the model did not solve the probing task. These two extraneous points accentuate the line-plot curves, but do not change the character of the results (nor do they much adjust the correlations). In the paper, we now control for accuracy by filtering out these cases. With or without this control, the accuracy provides no predictive power about the inductive biases. We present the correlations without filtering for these cases for consistency with the reviews (Table 4 above); we believe it is important to control for these cases because they could have acted as giveaways, where even accuracy might have worked.
A.2 ALTERNATE METRIC: s-RATE★
We initially used a different metric when computing the correlations, to compact the lineplots. Rather than using the average test performance, we looked at the evidence required for the model to solve the test set. Both of these metrics conceptually capture what we are interested in, but the new one (simply averaging test performance) is much easier to understand and captures the performance across all partitions. Here we report the correlations with this evidence-required metric instead, which we call s-rate★. Specifically, s-rate★ is the lowest s-only example rate at which the fine-tuned model achieves essentially perfect performance (F-score > 0.99) (see Figure 5a). Intuitively, s-rate★ is the (observed) minimum amount of evidence from which the model can infer that t alone is predictive of the label. See Table 5b for the results.
(a) Evidence Required: s-rate★ (threshold: F-score > 0.99). [Figure omitted.]

              Absolute             Relative (t to s)
           Target   Spurious       Ratio      Diff
BERT        0.68*    -0.63*       -0.79*     -0.81*
RoBERTa     0.04     -0.69*       -0.71*     -0.69*
T5          0.81*    -0.03        -0.55*     -0.65*
GPT2       -0.11     -0.24        -0.29      -0.32
GloVe       0.29     -0.38        -0.48      -0.48
(b) Correlations using Evidence Required (s-rate★) instead of average F-score. The correlations are negative instead of positive: as extractability increases, less evidence is required for the model to perform well.
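A minimal sketch of the s-rate★ computation defined above, with illustrative numbers:

```python
# s-rate*: the lowest s-only example rate at which the fine-tuned model reaches
# F-score > 0.99 on the test set (None if it never does at the observed rates).
def s_rate_star(rates, f_scores, threshold=0.99):
    for rate, f in sorted(zip(rates, f_scores)):
        if f > threshold:
            return rate
    return None

rates = [0.0, 0.001, 0.01, 0.05, 0.1, 0.2, 0.5]
f_scores = [0.62, 0.64, 0.71, 0.93, 0.995, 1.0, 1.0]  # illustrative values
print(s_rate_star(rates, f_scores))  # 0.1
```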
A.3 RESULTS USING AUC INSTEAD OF MDL
See Table 5. A metric similar to MDL for capturing the same intuition is the area under the validation loss curve (AUC). This metric is highly related to online MDL in computation.
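A minimal sketch of the AUC alternative, computing the area under an illustrative validation-loss curve with the trapezoidal rule:

```python
# AUC of the validation-loss curve (trapezoidal rule); the losses are illustrative.
import numpy as np

val_loss = np.array([0.69, 0.41, 0.22, 0.13, 0.09, 0.08])  # one value per evaluation step
auc = float(((val_loss[1:] + val_loss[:-1]) / 2).sum())
print(auc)
```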
B IMPLEMENTATION DETAILS & REPRODUCIBILITY
Our code is available at: https://github.com/cjlovering/predicting-inductive-biases.
There are two major parts to this project in terms of reproducibility: (1) the data and (2) the model implementations. We describe the templates for the data below in Appendix D – the full details are in the project source. For the transformer models, we use Hugging Face for the implementations and access to the pre-trained embeddings (Wolf et al., 2020). We use PyTorch Lightning to organize the training code (Falcon, 2019). We fix all hyperparameters, which are reported in Table 6.
We want to call attention to the fact that BERT appears to require much less data than T5 to capture our target features. At face value, this suggests BERT is more data-efficient here, but we are wary of making such a strong claim. Something to consider (noted in Appendix B) is that for T5 we used a linear classification head rather than formatting the task as text (which is how T5 is trained). We made this decision (1) because we had trouble training T5 in the purely textual manner, and (2) because using a linear classification head over two classes is consistent with the other model architectures. GPT2 and RoBERTa performed on par with BERT, so the difference between the performance of BERT and T5 may be due to how we trained T5.
C MEASURING EXTRACTABILITY INDIRECTLY
We measure the MDL for t indirectly, by training a classifier to distinguish both from s-only examples. In the simulated setting we can compare this approach with measuring the MDL directly (t-only vs. neither). See Table 7 for MDL results. The ordering of the features' difficulty holds across the two methods.
D TEMPLATES FOR NATURALISTIC DATA
Each template corresponds to a combination of target features, grammars, and spurious features (the target and spurious features are discussed in Section 4.1). See Table 8 for a complete list of templates. See Table 10 for further details about the templates that are used for each of the target features. Complete details about the implementation of these templates (and all data) will be released upon acceptance.
E WHY EXACTLY IS IT HARD TO GENERATE T-ONLY EXAMPLES?
Target features may be unavoidably linked to spurious ones. For example, for a Negative Polarity Item to be licensed (perhaps smoothing over some intricacies), the NPI (“any”, “all”, etc.) must occur in a downward entailing context. These downward entailing contexts are created by triggers, e.g., a negative word like “no” or “not”, or a quantifier like “some”. Linguists who study the problem have assembled a list of such triggers (see Hoeksema (2008)). Arguably, one cannot write down a correct example of NPI licensing that doesn’t contain one of these memorizable triggers. Thus, we cannot train or test models on correct examples of NPI usage while simultaneously preventing them from having access to trigger-specific features.
Similar to the NPI example, it is not possible (to our knowledge) to construct target-only examples for filler-gap, since the construction requires a wh-word and a syntactic gap; thus, we cannot create a positively labeled, grammatical sentence that exhibits a filler-gap dependency without these elements.
In summary, target-only examples may add new spurious features (as with NPI), or be impossible to construct because the presence of the target feature implies the presence of the spurious feature (as with filler gaps). Still, our setup permits the MDL to be computed directly with target-only examples, and so, in cases where it is feasible to create target-only examples (e.g. the Subject-Verb Agreement templates), it would have bolstered our argument to do so.
F MDL ISSUES: OVERFITTING IN THE SYNTHETIC EXPERIMENTS
We found that the MDL exceeds the uniform code length in some of the synthetic experiments. We found that this occurs because the model overfits on the small early-block sizes. See Figure 11. | 1. What is the main contribution of the paper regarding feature extraction and fine-tuned models?
2. What are the strengths of the proposed approach, particularly in its experimental design and connection to previous works?
3. Do you have any concerns or reservations about the paper's assumptions or claims?
4. How does the reviewer assess the paper's relevance to NLP interpretability and its potential applications?
5. Are there any suggestions or recommendations for improving the paper's methodology or discussion? | Review | Review
Summary
This paper studies the relationship between extractability of features from pre-trained representations and how much a fine-tuned model uses that feature. The extractability of features is measured by the minimum description length of a probing classifier trained to detect the feature from the pre-trained representations (using the online code version of Voita and Titov). The degree to which a fine-tuned model uses the feature is measured by the amount of evidence required for a model to tease apart spurious from non-spurious features (called "target" features). Evidence here means examples where a spurious feature occurs but a non-spurious feature does not occur. When there are many such examples (high spurious-only rate), it is easier for a model to reject the spurious feature and learn to rely on the target feature. The "degree to which a fine-tuned model uses a feature" is defined as the minimal spurious-only rate at which the model can accomplish the task.
The paper has two kinds of experiments, on synthetic and more natural data. The synthetic data are sequences of symbols where the task is to identify simple properties like occurrence or repetition of symbols. The experiments are set up such that varying rates of spurious-only examples are presented during training, providing increasing amounts of evidence against the spurious feature (presence of the symbol 2) and in favor of the target feature. The target feature is identical to the label, that is, it is 1 when the example corresponds to the label and 0 otherwise. The paper reports extractability of the spurious and target features via the MDL of a probing classifier. The metric of interest is the relative MDL, where higher means the feature is more extractable. When the features are more extractable, less evidence is required for the model to reject spurious features. With less extractable features, more evidence is required.
The natural language examples are made with acceptability judgements of examples generated by grammars for three linguistic phenomena (subject-verb agreement, negative polarity items, and filler gap dependencies). Here again the setting is similar, modulo a tweak on how to calculate extractability. The main result here is a high (negative) correlation between extractability and the evidence required for rejecting the spurious feature.
Main comments
This paper fills an important gap in the NLP interpretability literature that has recently been a cause of concern in the community. On the one hand, probing classifiers tell us something about the existence (and more recently the extractability) of properties in pre-trained models' representations. But they do not tell us whether a model uses those properties. On the other hand, many challenge sets and test suites tell us whether a model can successfully perform a task requiring some linguistic property. The paper aims to connect these two aspects, and it does so quite convincingly, although I have some reservations below.
The experimental setup is well designed. The use of synthetic data allows a fairly clean setup where spurious and non-spurious features are distinct and simple. The experiments of training with increasing amount of spurious-only examples are instructive.
The natural language examples are important as they go beyond synthetic data and closer to a naturalistic scenario. However, these are still templatic sentences and synthetic in a sense. I wouldn't call these naturalistic examples. Ideally, experiments on naturally occurring data would be more convincing. Or, at the very least a discussion of this issue should be made.
The paper makes use of recent advances in interpretability work, including information-theoretic probing, and draws connections to a broad range of related work.
The assumption that the extent to which a model uses a feature can be measured by the spurious-only error rate (at some spurious-only occurrence rate) is questionable in my opinion. In a very clean setting like the synthetic data, I could maybe accept it. But, "using" is in fact a causal concept, while a causal mechanism has not been demonstrated. The paper alludes to this point in the discussion, but I think the discussion around this point should be expanded, and the strong claims should be rephrased or modulated.
Questions and other comments
The paper makes the assumption that the target feature t and the label are the same. I am not convinced about the "without loss of generality" claim. In practice, it is not easy to isolate a feature t that is identical with the label. How would this assumption affect the generalization of the approach to more realistic scenarios?
The task is a binary classification task. The features holding is also binary, that is either a feature holds (1) or not (0). But, suppose the label is 0, then the t feature is also 0, meaning it does not hold. This seems contrary to what is meant. This could be a confusion on my part.
Synthetic data
Why is MDL computed by training a classifier to distinguish s-only from neither, and not from some other part of S?
Footnote 3 is concerning - Aren't MDLs higher than a uniform code meaningless?
The classifier is not so simple (LSTM + 1-layer MLP). Why is that? How does the identity of the classifier affects the results?
The both-error subplot in figure 2 shows a slight increase in error rate with large s-only rate. Does this mean that the model has (falsely) learned to reject the example when s is in it? That is, it has learned another spurious feature, just in the other direction, instead of learning to rely on t.
A similar pattern is found in the t-only error subplot. There, even with high s-only rate, the models don't classify t-only examples correctly. I wonder why this plot is different from the s-only error plot, as this shows a directional behavior. Some discussion of this would be useful.
There seems to be a stark contrast between the contains-1 feature and the other three, both in terms of MDL and in figure 2. Is it possible to show a more gradual behavior between the two extremes?
Natural language examples
Why are the training sets so small? How does this affect MDL numbers and their validity? Apropos footnote 1.
Why exactly is it hard to generate t-only examples? The appendix is indeed helpful in making sure the MDL(t) calculation method is legit, but more clarification around this issue would be good.
Here, s-only error is used as "the use of the spurious feature", but this is only one aspect in which a model may make use of s. It may be that a model makes more complicated use of s, when s is found in combination with t. The discussion touches upon this point by acknowledging that the work does not establish a causal relationship between extractability and feature use. I'd go even further and say that "feature use" should be defined in causal terms.
What is the performance (F-score) for determining s-rate*? is that the performance on s-only examples? on other examples? Why the shift to F-score now?
Discuss y-axis differences in figure 3c. BERT needs much less evidence than (some cases of) GloVe and T5. How does that impact the analysis?
The term "learning curves" for figure 4 is confusing: those aren't results during training, right? They are results after training, each time with a different rate of s-only examples. |
ICLR | Title
Covariance Matrix Adaptation MAP-Annealing
Abstract
Single-objective optimization algorithms search for the single highest-quality solution with respect to an objective. Quality diversity (QD) algorithms, such as Covariance Matrix Adaptation MAP-Elites (CMA-ME), search for a collection of solutions that are both high-quality with respect to an objective and diverse with respect to specified measure functions. However, CMA-ME suffers from three major limitations highlighted by the QD community: prematurely abandoning the objective in favor of exploration, struggling to explore flat objectives, and having poor performance for low-resolution archives. We propose a new quality diversity algorithm, Covariance Matrix Adaptation MAP-Annealing (CMA-MAE), that addresses all three limitations. We provide theoretical justifications for the new algorithm with respect to each limitation. Our theory informs our experiments, which support the theory and show that CMA-MAE achieves state-of-the-art performance.
1 INTRODUCTION
Consider an example problem of searching for celebrity faces in the latent space of a generative model. As a single-objective optimization problem, we specify an objective f that targets a celebrity such as Tom Cruise. A single-objective optimizer, such as CMA-ES (Hansen, 2016), will converge to a single solution of high objective value, an image that looks like Tom Cruise as much as possible.
However, this objective has ambiguity. How old was Tom Cruise in the photo? Did we want the person in the image to have short or long hair? By instead framing the problem as a quality diversity optimization problem, we additionally specify a measure function m1 that quantifies age and a measure function m2 that quantifies hair length. A quality diversity algorithm (Pugh et al., 2015; Chatzilygeroudis et al., 2021), such as CMA-ME (Fontaine et al., 2020), can then optimize for a collection of images that are diverse with respect to age and hair length, but all look like Tom Cruise.
While previous work (Fontaine et al., 2020; 2021a;b; Earle et al., 2021) has shown that CMA-ME solves such QD problems efficiently, three important limitations of the algorithm have been discovered. First, on difficult-to-optimize objectives, variants of CMA-ME will abandon the objective too soon (Tjanaka et al., 2022), and instead favor exploring the measure space, the vector space defined by the measure function outputs. Second, the CMA-ME algorithm struggles to explore flat objective functions (Paolo et al., 2021). Third, CMA-ME works well on high-resolution archives, but struggles to explore low-resolution archives (Cully, 2021; Fontaine & Nikolaidis, 2021a). We note that the chosen archive resolution affects the performance of all current QD algorithms.
We propose a new algorithm, CMA-MAE, that addresses these three limitations.
To address the first limitation, we derive an algorithm that smoothly blends between CMA-ES and CMA-ME. First, consider how CMA-ES and CMA-ME differ. At each step CMA-ES’s objective ranking maximizes the objective function f by approximating the natural gradient of f at the current solution point (Akimoto et al., 2010). In contrast, CMA-ME’s improvement ranking moves in the direction of the natural gradient of f − fA at the current solution point, where fA is a discount function equal to the objective of the best solution so far that has the same measure values as the current solution point. The function f − fA quantifies the gap between a candidate solution and the best solution so far at the candidate solution’s position in measure space.
Our key insight is to anneal the function fA by a learning rate α. We observe that when α = 0, then our discount function fA never increases and our algorithm behaves like CMA-ES. However, when
α = 1, then our discount function always maintains the best solution for each region in measure space and our algorithm behaves like CMA-ME. For 0 < α < 1, CMA-MAE smoothly blends between the two algorithms’ behavior, allowing for an algorithm that spends more time on the optimization of f before transitioning to exploration. Figure 1 is an illustrative example of varying the learning rate α.
Our proposed annealing method naturally addresses the flat objective limitation. Observe that both CMA-ES and CMA-ME struggle on flat objectives f as the natural gradient becomes 0 in this case and each algorithm will restart. However, we show that, when CMA-MAE optimizes f − fA for 0 < α < 1, the algorithm becomes a descent method on the density histogram defined by the archive.
Finally, CMA-ME’s poor performance on low resolution archives is likely caused by the nonstationary objective f − fA changing too quickly for the adaptation mechanism to keep up. Our archive learning rate α controls how quickly f − fA changes. We derive a conversion formula for α that allows us to derive equivalent α for different archive resolutions. Our conversion formula guarantees that CMA-MAE is the first QD algorithm invariant to archive resolution.
Overall, our work shows how a simple algorithmic change to CMA-ME addresses all three major limitations affecting CMA-ME’s performance and robustness. Our theoretical findings justify the aforementioned properties and inform our experiments, which show that CMA-MAE outperforms state-of-the-art QD algorithms and maintains robust performance across different archive resolutions.
2 PROBLEM DEFINITION
Quality Diversity. We adopt the quality diversity (QD) problem definition from Fontaine & Nikolaidis (2021a). A QD problem consists of an objective f : Rn → R that maps n-dimensional solution parameters to a scalar value denoting the quality of the solution and k measures mi : Rn → R or, as a vector function, m : Rn → Rk that quantify behavior or attributes of each solution1. The range of m forms a measure space S = m(Rn). The QD objective is to find a set of solutions θ ∈ Rn, such that m(θ) = s for each s in S and f(θ) is maximized.
The measure space S is continuous, but solving algorithms need to produce a finite collection of solutions. Therefore, QD algorithms in the MAP-Elites (Mouret & Clune, 2015; Cully et al., 2015) family relax the QD objective by discretizing the space S. Given T as the tessellation of S into M cells, the QD objective becomes to find a solution θi for each of the i ∈ {1, . . . ,M} cells, such that each θi maps to the cell corresponding to m(θi) in the tessellation T. The QD objective then becomes maximizing the objective value f(θi) of all cells:
max ∑_{i=1}^{M} f(θi)   (1)
The differentiable quality diversity (DQD) problem (Fontaine & Nikolaidis, 2021a) is a special case of the QD problem where both the objective f and measures mi are first-order differentiable.
1In agent-based settings, such as reinforcement learning, the measure functions are sometimes called behavior functions and the outputs of each measure function are called behavioral characteristics or behavior descriptors.
[Figure 1: Illustrative comparison of archives generated by CMA-ES, CMA-MAE, and CMA-ME.]
3 PRELIMINARIES
We present several QD algorithms that solve derivative-free QD problems to provide context for our proposed CMA-MAE algorithm. Appendix D contains information about the DQD algorithm CMA-MEGA, which solves problems where exact gradient information is available.
MAP-Elites and MAP-Elites (line). The MAP-Elites QD algorithm produces an archive of solutions, where each cell in the archive corresponds to the provided tessellation T in the QD problem definition. The algorithm initializes the archive by sampling solutions from the solution space Rn from a fixed distribution. After initialization, MAP-Elites produces new solutions by selecting occupied cells uniformly at random and perturbing them with isotropic Gaussian noise: θ′ = θi + σN(0, I). For each new candidate solution θ′, the algorithm computes an objective f(θ′) and measures m(θ′). MAP-Elites places θ′ into the archive if the cell corresponding to m(θ′) is empty or θ′ obtains a better objective value f(θ′) than the current occupant. The MAP-Elites algorithm results in an archive of solutions that are diverse with respect to the measure function m, but also high quality with respect to the objective f. Vassiliades & Mouret (2018) proposed the MAP-Elites (line) algorithm by augmenting the isotropic Gaussian perturbation with a linear interpolation between two solutions θi and θj: θ′ = θi + σ1N(0, I) + σ2N(0, 1)(θi − θj).

CMA-ME. Covariance Matrix Adaptation MAP-Elites (CMA-ME) (Fontaine et al., 2020) combines the archiving mechanisms of MAP-Elites with the adaptation mechanisms of CMA-ES (Hansen, 2016). Instead of perturbing archive solutions with Gaussian noise, CMA-ME maintains a multivariate Gaussian of search directions N(0, Σ) and a search point θ ∈ Rn. The algorithm updates the archive by sampling λ solutions around the current search point θi ∼ N(θ, Σ). After updating the archive, CMA-ME ranks solutions via a two-stage ranking. Solutions that discover a new cell are ranked by the objective ∆i = f(θi), and solutions that map to an occupied cell e are ranked by the improvement over the incumbent solution θe in that cell: ∆i = f(θi) − f(θe). CMA-ME prioritizes exploration by ranking all solutions that discover a new cell before all solutions that improve upon an existing cell. Finally, CMA-ME moves θ towards the largest improvement in the archive, according to the CMA-ES update rules. Fontaine & Nikolaidis (2021a) showed that the improvement ranking of CMA-ME approximates a natural gradient of a modified QD objective (see Eq. 1).
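To make the archive update concrete, below is a minimal Python sketch of the MAP-Elites loop under illustrative assumptions (a 2D measure space tessellated into a uniform res × res grid over [lo, hi]²); the function and variable names are ours and not from any QD library.

```python
import numpy as np

def map_elites(f, m, n_dims, iters=10_000, sigma=0.5, res=100, lo=-5.12, hi=5.12):
    """Minimal MAP-Elites sketch: archive maps a grid cell index -> (objective, solution)."""
    archive = {}

    def insert(theta):
        # Map measures to a cell of the res x res tessellation and keep the best occupant.
        measures = np.clip(m(theta), lo, hi - 1e-9)
        cell = tuple(((measures - lo) / (hi - lo) * res).astype(int))
        if cell not in archive or f(theta) > archive[cell][0]:
            archive[cell] = (f(theta), theta)

    # Initialize the archive with random solutions.
    for _ in range(100):
        insert(np.random.normal(0.0, 1.0, n_dims))

    # Select occupied cells uniformly at random and perturb with isotropic Gaussian noise.
    for _ in range(iters):
        keys = list(archive.keys())
        parent = archive[keys[np.random.randint(len(keys))]][1]
        insert(parent + sigma * np.random.normal(0.0, 1.0, n_dims))

    return archive
```

MAP-Elites (line) would only change the perturbation step, adding the term σ2N(0, 1)(θi − θj) for a second randomly selected elite θj.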
4 PROPOSED ALGORITHMS
We present the CMA-MAE algorithm. While we focus on CMA-MAE, the same augmentations apply to CMA-MEGA to form the novel CMA-MAEGA algorithm (see Appendix D).
CMA-MAE. CMA-MAE is an algorithm that adjusts the rate at which the objective f − fA changes. First, consider at a high level how CMA-ME explores the measure space and discovers high-quality solutions. The CMA-ME algorithm maintains a solution point θ and an archive A with previously discovered solutions. When CMA-ME samples a new solution θ′, the algorithm computes the solution’s objective value f(θ′) and maps the solution to a cell e in the archive based on the measure m(θ′). CMA-ME then computes the improvement of the objective value f(θ′) of the new solution, over a discount function fA : Rn → R. In CMA-ME, we define fA(θ′) by computing the cell e in
the archive corresponding to m(θ′) and letting fA(θ′) = f(θe), where θe is the incumbent solution of cell e. The algorithm ranks candidate solutions by improvement f(θ′)− fA(θ′) = f(θ′)− f(θe) and moves the search in the direction of higher ranked solutions.
Assume that CMA-ME samples a new solution θ′ with a high objective value of f(θ′) = 99. If the current occupant θe of the corresponding cell has a low objective value of f(θe) = 0.3, then the improvement in the archive ∆ = f(θ′) − f(θe) = 98.7 is high and the algorithm will move the search point θ towards θ′. Now, assume that in the next iteration the algorithm discovers a new solution θ′′ with objective value f(θ′′) = 100 that maps to the same cell as θ′. The improvement then is ∆ = f(θ′′)− f(θ′) = 1 as θ′ replaced θe in the archive in the previous iteration. CMA-ME would likely move θ away from θ′′ as the solution resulted in low improvement. In contrast, CMA-ES would move towards θ′′ as it ranks only by the objective f , ignoring previously discovered solutions with similar measure values.
In the above example, CMA-ME moves away from high performing solutions in order to maximize how the archive changes. However, in domains with hard-to-optimize objective functions, it is beneficial to perform more optimization steps in high-performing regions (Tjanaka et al., 2022).
Like CMA-ME, CMA-MAE maintains a discount function fA(θ′) and ranks solutions by improvement f(θ′) − fA(θ′). However, instead of setting fA(θ′) equal to f(θe), we set fA(θ′) = te, where te is an acceptance threshold maintained for each cell in the archive A. When adding a candidate solution to the archive, we control the rate that te changes by the archive learning rate α as follows: te ← (1 − α)te + αf(θ′).

The archive learning rate α in CMA-MAE allows us to control how quickly we leave a high-performing region of measure space. For example, consider discovering solutions in the same cell with objective value 100 in 5 consecutive iterations. The improvement values computed by CMA-ME would be 100, 0, 0, 0, 0, thus CMA-ME would move rapidly away from this cell. The improvement values computed by CMA-MAE with α = 0.5 would diminish smoothly as follows: 100, 50, 25, 12.5, 6.25, enabling further exploitation of the high-performing region.
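As a sanity check of the annealing rule, the following short Python sketch (illustrative names, single archive cell) reproduces the improvement sequences from the example above.

```python
def improvements(objective_values, min_f=0.0, alpha=0.5):
    """Improvement values f - t_e for repeated additions to a single archive cell."""
    t_e = min_f  # acceptance threshold of the cell, initialized to min_f
    deltas = []
    for f in objective_values:
        deltas.append(f - t_e)                   # improvement used for ranking
        if f > t_e:                              # solution crosses the threshold
            t_e = (1 - alpha) * t_e + alpha * f  # anneal the threshold
    return deltas

print(improvements([100.0] * 5))              # [100.0, 50.0, 25.0, 12.5, 6.25]
print(improvements([100.0] * 5, alpha=1.0))   # CMA-ME-like behavior: [100.0, 0.0, 0.0, 0.0, 0.0]
```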
Next, we walk through the CMA-MAE algorithm step-by-step. Algorithm 1 shows the pseudo-code for CMA-MAE with the differences from CMA-ME highlighted in yellow. First, on line 2 we initialize the acceptance threshold to minf . In each iteration we sample λ solutions around the current search point θ (line 5). For each candidate solution θi, we evaluate the solution and compute the objective value f(θi) and measure values m(θi) (line 6). Next, we compute the cell e in the archive that corresponds to the measure values and the improvement ∆i over the current threshold te (lines 7-8). If the objective crosses the acceptance threshold te, we replace the incumbent θe in the archive and increase the acceptance threshold te (lines 9-11). Next, we rank all candidate solutions θi by their improvement ∆i. Finally, we step our search point θ and adapt our covariance matrix Σ towards the direction of largest improvement (lines 14-15) according to CMA-ES’s update rules (Hansen, 2016).
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affect how we replace solutions in the archive and how we calculate ∆i. CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA, in Appendix D.
5 THEORETICAL PROPERTIES OF CMA-MAE
We provide insights about the behavior of CMA-MAE for different α values. We include all proofs in Appendix E. CMA-MAEGA has similar theoretical properties discussed in Appendix F.
Theorem 5.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
The next theorem states that CMA-ME is equivalent to CMA-MAE when α = 1 with the following caveats: First, we assume that CMA-ME restarts only by the CMA-ES restart rules, rather than the additional “no improvement” restart rule in prior work (Fontaine et al., 2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection (Hansen, 2016) rather than filtering selection (Fontaine et al., 2020).
Algorithm 1 Covariance Matrix Adaptation MAP-Annealing (CMA-MAE) CMA-MAE (evaluate,θ0, N, λ, σ, minf , α)
input : An evaluation function evaluate that computes the objective and measures, an initial solution θ0, a desired number of iterations N , a branching population size λ, an initial step size σ, a minimal acceptable solution quality minf , and an archive learning rate α.
result: Generate Nλ solutions, storing elites in an archive A.
1   Initialize solution parameters θ to θ0, CMA-ES parameters Σ = σI and p, where we let p be the CMA-ES internal parameters.
2   Initialize the archive A and the acceptance threshold te with minf for each cell e.
3   for iter ← 1 to N do
4       for i ← 1 to λ do
5           θi ∼ N(θ, Σ)
6           f, m ← evaluate(θi)
7           e ← calculate_cell(A, m)
8           ∆i ← f − te
9           if f > te then
10              Replace the current occupant in cell e of the archive A with θi
11              te ← (1 − α)te + αf
12          end
13      end
14      rank θi by ∆i
15      Adapt CMA-ES parameters θ, Σ, p based on improvement ranking ∆i
16      if CMA-ES converges then
17          Restart CMA-ES with Σ = σI.
18          Set θ to a randomly selected existing cell θi from the archive
19      end
20  end
Theorem 5.2. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to the discount function used by CMA-ME, as α increases from 0 to 1. We focus on the special case of a fixed sequence of candidate solutions. Theorem 5.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Finally, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains a density histogram of the occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher. Theorem 5.4. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
While Theorem 5.4 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
6 EXPERIMENTS
We compare the performance of CMA-MAE with the state-of-the-art QD algorithms MAP-Elites, MAP-Elites (line), and CMA-ME, using existing Pyribs (Tjanaka et al., 2021) QD library implementations. We set α = 0.01 for CMA-MAE and include additional experiments for varying α
in section 7. Because annealing methods replace solutions based on the threshold, we retain the best solution in each cell for comparison purposes. We include additional comparisons between CMA-MEGA and CMA-MAEGA – the gradient-based counterpart of CMA-MAE – in Appendix K.
We select the benchmark domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function that is constant everywhere for solutions within a fixed range and has a quadratic penalty for solutions outside the range. We describe the domains in detail in Appendix B.
6.1 EXPERIMENT DESIGN
Independent Variables. We follow a between-groups design with two independent variables: the algorithm and the domain.
Dependent Variables. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score metric by the archive size (the total number of cells from the tessellation of measure space) to make the metric invariant to archive resolution. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
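For concreteness, a minimal sketch of both metrics, assuming the archive is represented as a mapping from cell index to an (objective, solution) pair (an illustrative representation, not the pyribs API):

```python
def qd_metrics(archive, total_cells):
    """Normalized QD-score and coverage for an archive {cell: (objective, solution)}."""
    qd_score = sum(obj for obj, _ in archive.values()) / total_cells
    coverage = len(archive) / total_cells
    return qd_score, coverage
```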
6.2 ANALYSIS
Table 1 shows the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domain. Fig. 3 shows the QD-score values for increasing number of iterations and example archives for CMA-MAE and CMA-ME, with 95% confidence intervals.
We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, and LSI) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (12, 320) = 1958.34, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAE outperformed all baselines in all benchmark domains.
For the arm repertoire domain, we can compute the optimal archive coverage by testing whether each cell overlaps with a circle of radius equal to the maximum arm length (see Appendix B). We observe that CMA-MAE approaches the computed optimal coverage of 80.24% for a resolution of 100× 100 and outperforms CMA-MEGA (Fontaine & Nikolaidis, 2021a) (see Appendix K).
These results show that the archive learning rate α is particularly beneficial for CMA-MAE. We observe that CMA-MAE initially explores regions of the measure space that have high-objective values. Once the archive becomes saturated, CMA-MAE reduces to approximate density descent, as we prove in Theorem 5.4 for flat objectives. On the other hand, CMA-ME does not receive any exploration signal when the objective landscape becomes flat, resulting in poor performance.
While our results show improved quantitative results on the LSI domain, Appendix I discusses how to improve the visual quality by leveraging techniques from the generative art community. Fig. 4 shows an example collage generated by adopting improvements for guiding StyleGAN with CLIP.
7 ON THE ROBUSTNESS OF CMA-MAE
Next, we present two studies that evaluate the robustness of CMA-MAE across two hyperparameters that may affect algorithm performance: the archive learning rate α and the archive resolution.
Archive Learning Rate. We examine the effect of different archive learning rates on the performance of CMA-MAE in the linear projection and arm repertoire domains. We vary the learning rate from 0 to 1 on an exponential scale, while keeping the resolution constant in each domain.
Table 2 shows that running CMA-MAE with different values of 0 < α < 1 results in relatively similar performance, showing that CMA-MAE is fairly robust to the choice of α. On the other hand, if α = 0 or α = 1, performance drops drastically. Setting α = 1 results in very similar performance to CMA-ME, which supports our insight from Theorem 5.2.
Archive Resolution. As noted by Cully (2021) and Fontaine & Nikolaidis (2021a), quality diversity algorithms in the MAP-Elites family sometimes perform differently when run with different archive resolutions. For example, in the linear projection domain presented in Fontaine et al. (2020), CMA-ME outperformed MAP-Elites and MAP-Elites (line) for archives of resolution 500 × 500, while in this paper we observe that it performs worse for resolution 100 × 100. In this study, we investigate how CMA-MAE performs at different archive resolutions.
First, we note that the optimal archive learning rate α is dependent on the resolution of the archive. Consider as an example a sequence of solution additions to two archives A1 and A2 of resolution 100× 100 and 200× 200, respectively. A2 subdivides each cell in A1 into four cells, thus archive A2’s thresholds te should increase at a four times faster rate than A1’s. To account for this difference, we compute α2 for A2 via the conversion formula α2 = 1 − (1 − α1)^r (see derivation in Appendix G), where r is the ratio of cell counts between archives A2 and A1 (here r = 4). We initialize α1 = 0.01 for A1. In the above example, α2 = 1 − (1 − 0.01)^4 ≈ 0.0394.
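The conversion itself is a one-line computation; the sketch below is illustrative and reproduces the example above.

```python
def convert_alpha(alpha_1, cells_1, cells_2):
    """Convert an archive learning rate between archive resolutions: alpha_2 = 1 - (1 - alpha_1)^r."""
    r = cells_2 / cells_1  # ratio of cell counts between the two archives
    return 1.0 - (1.0 - alpha_1) ** r

print(convert_alpha(0.01, 100 * 100, 200 * 200))  # ~0.0394
```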
Fig. 5 shows the QD-score of CMA-MAE with the resolution-dependent archive learning rate and the baselines for each benchmark domain. CMA-ME performs worse as the resolution decreases because the archive changes quickly at small resolutions, affecting CMA-ME’s adaptation mechanism. On the contrary, MAP-Elites and MAP-Elites (line) perform worse as the resolution increases due to having more elites to perturb. CMA-MAE’s performance is invariant to the resolution of the archive.
8 RELATED WORK
Quality Diversity Optimization. The predecessor to quality diversity optimization, simply called diversity optimization, originated with the Novelty Search algorithm (Lehman & Stanley, 2011a), which searches for a collection of solutions that are diverse in measure space. Later work introduced the Novelty Search with Local Competition (NSLC) (Lehman & Stanley, 2011b) and MAP-Elites (Cully et al., 2015; Mouret & Clune, 2015) algorithms, which combined single-objective optimization with diversity optimization and were the first QD algorithms. Since then, several QD algorithms have been proposed, based on a variety of single-objective optimization methods, such as Bayesian optimization (Kent & Branke, 2020), evolution strategies (Conti et al., 2018; Colas et al., 2020; Fontaine et al., 2020), differential evolution (Choi & Togelius, 2021), and gradient ascent (Fontaine & Nikolaidis, 2021a). Several works have improved selection mechanisms (Sfikas et al., 2021; Cully & Demiris, 2017), archives (Fontaine et al., 2019; Vassiliades et al., 2018; Smith et al., 2016), and perturbation operators (Vassiliades & Mouret, 2018; Nordmoen et al., 2018).
QD with Gradient Information. Several works combine gradient information with quality diversity optimization in ways that do not leverage the objective and measure gradients directly. For example, in model-based quality diversity optimization (Gaier et al., 2018; Hagg et al., 2020; Cazenille et al., 2019; Keller et al., 2020; Lim et al., 2021; Zhang et al., 2021; Gaier et al., 2020), Rakicevic et al. (2021) trains an autoencoder on the archive of solutions and leverages the Jacobian of the decoder network to compute the covariance of the Gaussian perturbation. In quality diversity reinforcement learning (QD-RL), several works (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022) approximate a reward gradient or diversity gradient via a critic network, action space noise, or evolution strategies and incorporate those gradients into a QD-RL algorithm.
Acceptance Thresholds. Our proposed archive learning rate α was loosely inspired by simulated annealing methods (Bertsimas & Tsitsiklis, 1993) that maintain an acceptance threshold that gradually becomes more selective as the algorithm progresses. The notion of an acceptance threshold is also closely related to minimal criterion methods in evolutionary computation (Lehman & Stanley, 2010; Brant & Stanley, 2017; 2020; Stanley et al., 2016). Our work differs by both 1) maintaining an acceptance threshold per archive cell rather than a global threshold and 2) annealing the threshold.
9 LIMITATIONS AND FUTURE WORK
Our approach introduced two hyperparameters, α and minf , to control the rate that f − fA changes. We observed that an α set strictly between 0 and 1 yields theoretical exploration improvements and that CMA-MAE is robust with respect to the exact choice of α. We additionally derived a conversion formula that converts an α1 for a specific archive resolution to an equivalent α2 for a different resolution. However, the conversion formula still requires practitioners to specify a good initial value of α1. Future work will explore ways to automatically initialize α, similar to how CMA-ES automatically assigns internal parameters (Hansen, 2016).
Quality diversity optimization is a rapidly growing branch of stochastic optimization with applications in generative design (Hagg et al., 2021; Gaier et al., 2020; 2018), automatic scenario generation in robotics (Fontaine & Nikolaidis, 2021c; Fontaine et al., 2021a; Fontaine & Nikolaidis, 2021b), reinforcement learning (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022), damage recovery in robotics (Cully et al., 2015), and procedural content generation (Gravina et al., 2019; Fontaine et al., 2021b; Zhang et al., 2021; Earle et al., 2021; Khalifa et al., 2018; Steckel & Schrum, 2021; Schrum et al., 2020; Sarkar & Cooper, 2021; Bhatt et al., 2022). Our paper introduces a new quality diversity algorithm, CMA-MAE. Our theoretical findings inform our experiments, which show that CMA-MAE addresses three major limitations affecting the CMA-ME algorithm, leading to state-of-the-art performance.
10 ETHICS STATEMENT
By controlling the trade-off between exploration and exploitation in QD algorithms, we aim to improve their performance and robustness, thus making these algorithms easier to apply in a wide range of domains and applications. One promising application is synthetically extracting datasets from generative models to train machine learning algorithms (Jahanian et al., 2021; Besnier et al., 2020). This can raise ethical considerations because generative models can reproduce and exacerbate existing biases in the datasets that they were trained on (Jain et al., 2020; Menon et al., 2020). On the other hand, quality diversity algorithms with carefully selected measure functions can target diversity with desired attributes, thus we hypothesize that they can be effective in generating balanced datasets. Furthermore, by attempting to find diverse solutions, QD algorithms are a step towards open-endedness in AI (Stanley et al., 2017) and will often result in unexpected and surprising emergent behaviors (Lehman et al., 2020). We recognize that this presents several challenges in predictability and monitoring of AI systems (Hendrycks et al., 2021), and we highlight the importance of future work on balancing the tradeoff between open-endedness and control (Ecoffet et al., 2020).
11 REPRODUCIBILITY STATEMENT
In the supplemental material we provide complete source code for all algorithms and experiments, as well as the Conda environments for installing project dependencies. The “README.md” document provides complete instructions for both setup and execution of all experiments. In Appendix A we provide all hyperparameters. In Appendix B we provide domain-specific details for replicating all experimental domains. In Appendix C we provide information about the computational resources and hardware we used to run our experiments. In Appendix D we provide the pseudocode for the CMA-MAEGA algorithm, the DQD counterpart of CMA-MAE. In Appendix E we provide the proofs of all theorems in the paper. In Appendix F we provide the theoretical properties of CMA-MAEGA. In Appendix G we provide the derivation of the conversion formula for the archive learning rate. In Appendix H we provide a batch threshold update rule that is invariant to the order in which solutions are processed within a batch update. In Appendix I we discuss the implementation details for additional experiments that improve the quality of the generated images in the latent space illumination domain. In Appendix K we present all metrics with standard errors for each algorithm and domain.
APPENDIX
A HYPERPARAMETER SELECTION
For all domains we mirror the hyperparameter selection of Fontaine & Nikolaidis (2021a). For CMA-MAE and CMA-MAEGA, we duplicate the hyperparameter selections of CMA-ME and CMA-MEGA, respectively. Following Fontaine et al. (2020), we run all algorithms with 15 emitters on the linear projection and arm repertoire domains. In the latent space illumination domain, we run experiments with only one emitter, due to the computational expense of the domain. Emitters are independent CMA-ES instances that run in parallel with a shared archive. For each algorithm, we select a batch size λ = 36 following Fontaine & Nikolaidis (2021a). For MAP-Elites and MAP-Elites (line), we initialize the archive with 100 random solutions, sampled from the distribution N(0, I). These initial solutions do not count in the evaluation budget for MAP-Elites and MAP-Elites (line). For algorithms in the CMA-ME family (CMA-ME, CMA-MAE, CMA-MEGA, and CMA-MAEGA), we initialize θ0 = 0 for every domain.
In our experiments we want to directly compare the ranking mechanisms of CMA-ME and CMA-MAE. However, CMA-ME is typically run with a “no improvement” restart rule, where the algorithm will restart if no solution changes the archive. Due to CMA-MAE’s annealed acceptance threshold te, a “no improvement” restart rule would cause CMA-ME and CMA-MAE to restart at different rates, confounding the effects of restarts and rankings. Filter selection also has a similar confounding effect as solutions are selected if they change the archive. For these reasons, in the main paper we run CMA-ME with a basic restart rule (CMA-ES style restarts only (Hansen, 2016)) and µ selection (Hansen, 2016) (selecting the top half of the ranking). In Appendix Section K, we run an extra CMA-ME with filter selection and the “no improvement” restart rule, which we denote CMA-ME*. We include, as an additional baseline, a configuration of CMA-ME that mixes emitters that optimize only for the objective with emitters that optimize for improvement, a configuration first studied by Cully (2021). We refer to this configuration as CMA-ME (imp, opt).
In the latent space illumination domain, due to the computational expense of the domain, we compare directly against the results from Fontaine & Nikolaidis (2021a), where we obtained the data (MIT license) with consent from the authors. For CMA-MAE and CMA-MAEGA we include the “no improvement” restart rule to match CMA-ME and CMA-MEGA as closely as possible. For this domain, we take gradient steps with the Adam optimizer (Kingma & Ba, 2015), following the recommendation of Fontaine & Nikolaidis (2021a). However, we run CMA-MAE with µ selection, since we found that small values of the archive learning rate α make filter selection worse.
In Appendix I, we describe a second LSI experiment on StyleGAN2 (Karras et al., 2020b) configured by insights from the generative art community that improve the quality of single-objective latent space optimization. For this domain, we configure CMA-MAEGA and CMA-MEGA to use a “basic” restart rule because the latent space L2 regularization keeps solutions in the StyleGAN2 training distribution. For this experiment, the latent space is large (n = 9216), so we exclude CMA-ME and CMA-MAE due to the size of the covariance matrix (9216× 9216) and the prohibitive cost for computing an eigendecomposition of a large covariance matrix.
Linear Projection (sphere, Rastrigin, plateau).
• MAP-Elites: σ = 0.5
• MAP-Elites (line): σ1 = 0.5, σ2 = 0.2
• CMA-ME: σ = 0.5, µ selection, basic restart rule
• CMA-ME*: σ = 0.5, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.5, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.5, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 10.0, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 10.0, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Arm Repertoire.
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.2, µ selection, basic restart rule
• CMA-ME*: σ = 0.2, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.2, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.2, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 0.05, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 0.05, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Latent Space Illumination. (StyleGAN)
• MAP-Elites: σ = 0.2
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.02, filter selection, no improvement restart rule
• CMA-MAE: σ = 0.02, α = 0.1, minf = 55, µ selection, no improvement restart rule, 50 iteration timeout
• CMA-MEGA: σg = 0.002, η = 0.002, Adam optimizer, no improvement restart rule
• CMA-MAEGA: σg = 0.002, η = 0.002, α = 0.1, minf = 55, Adam optimizer, no improvement restart rule, 50 iteration timeout
Latent Space Illumination. (StyleGAN 2)
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-MEGA: σg = 0.01, η = 0.05, Adam optimizer, basic restart rule
• CMA-MAEGA: σg = 0.01, η = 0.05, α = 0.02, minf = 0, Adam optimizer, basic restart rule
Adam Hyperparameters. We use the same hyperparameters as previous work (Perez, 2021; Fontaine & Nikolaidis, 2021a).

• β1 = 0.9
• β2 = 0.999
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following Fontaine & Nikolaidis (2021a).
B DOMAIN DETAILS
To experimentally evaluate both CMA-MAE and CMA-MAEGA, we select domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). While many quality diversity optimization domains exist, we select these because gradients of f and m are easy to compute analytically and allow us to evaluate DQD algorithms in addition to derivative-free QD algorithms. To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function.
Linear Projection. The linear projection domain (Fontaine et al., 2020) was introduced to benchmark distortions caused by mapping a high-dimensional search space to a low-dimensional measure space.
The domain forms a 2D measure space by a linear projection that bounds the contribution of each component θi of the projection to the range [−5.12, 5.12]. QD algorithms must adapt the step size of each component θi to slowly approach the extremes of the measure space, with a harsh penalty for components outside [−5.12, 5.12]. As QD domains must provide an objective, the linear projection domain included two objectives from the black-box optimization benchmarks (Hansen et al., 2016; 2010): sphere and Rastrigin. Following Fontaine et al. (2020), we run all experiments for n = 100.
Formally, the measure functions are defined as a linear projection, a weighted sum of the components θi ∈ R of a solution θ ∈ Rn. The first measure function m1 is a weighted sum of the first half of the solution θ, and the second measure function m2 is a weighted sum of the second half of the solution θ (see Eq. 3). To ensure that all solutions mapped to measure space occupy a finite volume, the contribution in measure space of each component θi is bounded to the range [−5.12, 5.12] via a clip function (see Eq. 2) that applies a harsh penalty for solution components θi stepping outside the range [−5.12, 5.12].
clip(θi) = θi if −5.12 ≤ θi ≤ 5.12, and 5.12/θi otherwise   (2)
m(θ) = ( ∑_{i=1}^{⌊n/2⌋} clip(θi), ∑_{i=⌊n/2⌋+1}^{n} clip(θi) )   (3)

Fig. 6 visualizes why the linear projection domain is challenging. First, we note that the density of solutions in search space mapped to measure space mostly occupies the region close to 0. To see why, consider sampling uniformly in the hypercube [−5.12, 5.12]^n in search space. Each of these points maps to the linear region of the measure functions, so each measure becomes a sum of random variables. Dividing by n to normalize by the dimension of the search space, the measure functions become an average of random variables. The average of n uniform random variables follows the Bates distribution (Johnson et al., 1995), a distribution whose variance narrows as n grows larger. Without the clip function, a QD algorithm could simply increase a single θi to reach any point in the measure space. However, the clip function prevents this by bounding the contribution of each component of θ to the range [−5.12, 5.12]. To reach the extremes of measure space, all components θi must converge to the extremes ±5.12.

The linear projection domain is challenging to explore due to both the clustering of solutions in a small region of measure space and the heavy measure space penalties applied by the clip function when a component θi leaves the region [−5.12, 5.12]. Next, we describe the linear projection domain’s objective functions, visualized in Fig. 7.
The objectives of the linear projection domain satisfy the requirements that a QD domain needs to have an objective and are of lesser importance than the measure function definitions, since the benchmark primarily evaluates exploration capabilities. Fontaine et al. (2020) selected two objectives from the black-box optimization benchmarks competition (Hansen et al., 2016; 2010): sphere and Rastrigin. The sphere function (Eq. 4) is a quadratic function2, while the Rastrigin function (Eq. 5) is a multi-modal function that when smoothed is quadratic. The domain shifts the global optimum to the position θi = 5.12 · 0.4 = 2.048.
fsphere(θ) = ∑_{i=1}^{n} θi²   (4)
fRastrigin(θ) = 10n + ∑_{i=1}^{n} [θi² − 10 cos(2πθi)]   (5)
We introduce an additional objective to evaluate the good exploration properties of CMA-MAE on flat objectives. Our “plateau” objective function (Eq. 7) is constant everywhere, but with a quadratic penalty for each component outside the range [−5.12, 5.12]. The penalty acts as a regularizer to encourage algorithms to search in the linear region of measure space.
fplateau(θi) = 0 if −5.12 ≤ θi ≤ 5.12, and (|θi| − 5.12)² otherwise   (6)
fplateau(θ) = (1/n) ∑_{i=1}^{n} fplateau(θi)   (7)
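Putting Eqs. 2–7 together, the following illustrative Python sketch implements the raw measures and objectives of the linear projection domain; the shift of the global optimum to 2.048 and the linear rescaling to [0, 100] described below are omitted.

```python
import numpy as np

def clip_components(theta):
    """Eq. 2: bound each component's contribution to [-5.12, 5.12]."""
    safe = np.where(theta == 0.0, 1.0, theta)  # avoid division by zero in the unused branch
    return np.where(np.abs(theta) <= 5.12, theta, 5.12 / safe)

def measures(theta):
    """Eq. 3: sums of the clipped first and second halves of theta."""
    clipped = clip_components(theta)
    half = len(theta) // 2
    return np.array([clipped[:half].sum(), clipped[half:].sum()])

def f_sphere(theta):
    return np.sum(theta ** 2)  # Eq. 4

def f_rastrigin(theta):
    return 10 * len(theta) + np.sum(theta ** 2 - 10 * np.cos(2 * np.pi * theta))  # Eq. 5

def f_plateau(theta):
    penalty = np.where(np.abs(theta) <= 5.12, 0.0, (np.abs(theta) - 5.12) ** 2)  # Eq. 6
    return np.mean(penalty)  # Eq. 7
```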
Arm Repertoire. The arm repertoire domain Cully & Demiris (2017); Vassiliades & Mouret (2018) tasks QD algorithms to find a diverse collection of arm positions for an n-dimensional planar robotic arm with revolute joints. The measures in this domain are the 2D coordinates of the robot’s end-effector and the objective is to minimize the variance of the joint angles.
In Fig. 8, we visualize example arms for n = 5 (5-DOF). The optimal solutions in this domain have 0 variance between all joint angles. The measure functions are bounded to the range [−n, n] as each arm segment has a unit length. The reachable cells form a circle of radius n. Therefore, the optimal archive coverage is approximately πn²/(4n²) = π/4 ≈ 78.5%. An archive can achieve an upper bound of this ratio that becomes tighter at higher resolutions. We select n = 100 (100-DOF) arms for the experiments.
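A minimal sketch of the arm repertoire evaluation, assuming standard planar forward kinematics with unit-length links and cumulative joint angles; the sign of the objective reflects that lower joint-angle variance is better, and the exact scaling used in our experiments (see the objective transformations below) is omitted.

```python
import numpy as np

def arm_repertoire(theta):
    """Objective: negative variance of joint angles. Measures: end-effector (x, y)."""
    objective = -np.var(theta)
    cumulative = np.cumsum(theta)  # absolute orientation of each unit-length link
    x = np.sum(np.cos(cumulative))
    y = np.sum(np.sin(cumulative))
    return objective, np.array([x, y])
```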
Latent Space Illumination. Prior work introduced the latent space illumination problem (Fontaine et al., 2021b), the problem of searching the latent space of a generative model with a quality diversity algorithm. We evaluate on the StyleGAN+CLIP version of this problem (Fontaine & Nikolaidis,
2In derivative-free optimization many of the benchmark functions are named after the shape of the contour lines. In the case of quadratic functions with an identity Hessian matrix, the contour lines form hyperspheres.
2021a), by searching the latent space of StyleGAN (Karras et al., 2019) with a QD algorithm. We form the differentiable objective and measures in this domain by specifying text prompts to the CLIP model (Radford et al., 2021) that can determine the similarity of an image and text. We specify an objective prompt of “A photo of Beyonce”. For measures, we would like to have CLIP quantify abstract concepts like the hair length or age of the person in the photo. However, CLIP can only determine similarity of an image and a text prompt. As surrogates for age and hair length, we specify the measure prompts of “A small child” and “A woman with long blonde hair”. The objective and measure functions guide the QD algorithms towards discovering a collection of photos of Beyoncé with varying age and hair length.
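As a rough illustration only, the snippet below shows how one could score an image against the objective and measure prompts with the open-source CLIP package; the exact mapping from CLIP similarities to the objective and measure values used in our experiments differs (see the objective transformations below), and the prompt handling here is a simplification.

```python
import torch
import clip  # github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompts = ["A photo of Beyonce",             # objective prompt
           "A small child",                  # measure surrogate for age
           "A woman with long blonde hair"]  # measure surrogate for hair length
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(prompts).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

def evaluate(image):
    """image: a (1, 3, 224, 224) tensor produced from a StyleGAN latent code."""
    image_features = model.encode_image(image.to(device))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    sims = (image_features @ text_features.T).squeeze(0)  # cosine similarities to each prompt
    return sims[0], sims[1:]                              # objective, measures
```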
For our additional LSI experiment on StyleGAN2 with setup improvements, see Appendix I.
Transformations of the Objective Function. We highlight two issues that must be addressed by transforming the objective in each domain. First, we note that the problem definition in each of our domains contains an objective f that must be minimized. In contrast, the QD problem definition specifies an objective f that must be maximized. Second, the QD-score metric, which measures the performance of QD algorithms, requires a non-negative objective function. Following prior work (Fontaine et al., 2020; Fontaine & Nikolaidis, 2021a), we transform the objective f via a linear transformation: f ′ = af + b. The linear transformation maps function outputs to the range [0, 100].
In the linear projection domain, we estimate the largest objective value for the sphere and Rastrigin function within the region [−5.12, 5.12] for each solution component θi. We compute f(−5.12,−5.12, ...,−5.12) for each objective as the maximum. The minimum of each function is 0. We calculate the linear transformation as:
f′(θ) = 100 · (f(θ) − fmax) / (fmin − fmax)   (8)
For our new plateau objective, all solution points within the region [−5.12, 5.12]n have objective value of 0. For this objective we set fmin = 0 and fmax = 100 and apply the transformation in Eq. 8.
For the arm domain we select fmin = 0 and fmax = 1, and in the LSI domain we select fmin = 0 and fmax = 10. We select these values to match Fontaine & Nikolaidis (2021a).
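A one-line sketch of the transformation in Eq. 8:

```python
def to_qd_objective(f_val, f_min, f_max):
    """Map a minimization objective in [f_min, f_max] to a maximization objective in [0, 100] (Eq. 8)."""
    return 100.0 * (f_val - f_max) / (f_min - f_max)
```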
C IMPLEMENTATION
We replicate the implementation details of prior work (Fontaine & Nikolaidis, 2021a).
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following previous work (Fontaine & Nikolaidis, 2021a).
Metrics. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score by the total number of cells, both occupied and unoccupied, to make QD-score invariant to the resolution of the archive. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
Computational Resources. We ran all trials of the linear projection and arm repertoire domains on an AMD Ryzen Threadripper 32-core (64 threads) processor. A run of 20 trials in parallel takes about 20 minutes for the linear projection domain and 25 minutes for the arm repertoire domain. For the latent space illumination domain, we accelerate the StyleGAN+CLIP pipeline on a GeForce RTX 3090 Nvidia GPU. One trial for latent space illumination takes approximately 2 hours and 30 minutes for StyleGAN and approximately 3 hours and 30 minutes for StyleGAN2. In all domains, runtime increases when an algorithm obtains better coverage, because we iterate over the archive when QD statistics are calculated.
Software Implementation. We use the open source Pyribs (Tjanaka et al., 2021) library for all algorithms. We implemented the CMA-MAE and CMA-MAEGA algorithms using the same library.
D COVARIANCE MATRIX ADAPTATION MAP-ELITES VIA A GRADIENT
ARBORESCENCE (CMA-MAEGA)
In this section, we provide information of the CMA-MEGA differentiable quality diversity (DQD) algorithm, and we derive CMA-MAE’s DQD counterpart: CMA-MAEGA.
CMA-MEGA. Covariance Matrix Adaptation MAP-Elites via Gradient Arborescence (CMA-MEGA) solves the DQD problem, where the objective f and measures m are first-order differentiable. Like CMA-ME, the algorithm maintains a solution point θ ∈ Rn and a MAP-Elites archive. CMA-MEGA samples new solutions by perturbing the search point θ via the objective and measure gradients. However, the contribution of each gradient is balanced by gradient coefficients c: θi = θ + c0∇f(θ) + ∑_{j=1}^{k} cj∇mj(θ). These coefficients are sampled from a multivariate Gaussian distribution N(µ, Σ) maintained by the algorithm. After sampling new candidate solutions θi, the solutions are ranked via the improvement ranking from CMA-ME. CMA-MEGA updates N(µ, Σ) via the CMA-ES update rules, and the algorithm also steps θ in the direction of largest archive improvement. The authors showed that CMA-MEGA approximates a natural gradient step of the QD objective (Eq. 1), but with respect to the gradient coefficients.
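A minimal sketch of CMA-MEGA's branching step, assuming the objective and measure gradients are available as arrays; variable names are illustrative.

```python
import numpy as np

def branch_solutions(theta, grad_f, grad_m, mu, cov, lam):
    """Sample gradient coefficients c ~ N(mu, cov) and branch lam candidate solutions.

    grad_f: (n,) objective gradient; grad_m: (k, n) measure gradients; mu: (k + 1,)."""
    candidates = []
    for _ in range(lam):
        c = np.random.multivariate_normal(mu, cov)
        step = c[0] * grad_f + c[1:] @ grad_m  # c0 * grad_f + sum_j c_j * grad_m_j
        candidates.append(theta + step)
    return candidates
```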
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affect how we replace solutions in the archive and how we calculate ∆i. Both CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations from CMA-ME to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA. Algorithm 2 shows the pseudo-code for CMA-MAEGA with the differences from CMA-MEGA highlighted in yellow.
Algorithm 2 Covariance Matrix Adaptation MAP-Annealing via a Gradient Arborescence (CMA-MAEGA) CMA-MAEGA (evaluate,θ0, N, λ, η, σg, minf , α)
input : An evaluation function evaluate that computes the objective, the measures, and the gradients of the objective and measures, an initial solution θ0, a desired number of iterations N , a branching population size λ, a learning rate η, an initial step size for CMA-ES σg , a minimal acceptable solution quality minf , and an archive learning rate α.
result: Generate N(λ + 1) solutions, storing elites in an archive A.
1   Initialize solution parameters θ to θ0, CMA-ES parameters µ = 0, Σ = σgI, and p, where we let p be the CMA-ES internal parameters.
2   Initialize the archive A and the acceptance threshold te with minf for each cell e.
3   for iter ← 1 to N do
4       f, ∇f, m, ∇m ← evaluate(θ)
5       ∇f ← normalize(∇f), ∇m ← normalize(∇m)
6       if f > te then
7           Replace the current elite in cell e of the archive A with θi
8           te ← (1 − α)te + αf
9       end
10      for i ← 1 to λ do
11          c ∼ N(µ, Σ)
12          ∇i ← c0∇f + ∑_{j=1}^{k} cj∇mj
13          θ′i ← θ + ∇i
14          f′, ∗, m′, ∗ ← evaluate(θ′i)
15          ∆i ← f′ − te
16          if f′ > te then
17              Replace the current occupant in cell e of the archive A with θi
18              te ← (1 − α)te + αf′
19          end
20      end
21      rank ∇i by ∆i
22      ∇step ← ∑_{i=1}^{λ} wi∇rank[i]
23      θ ← θ + η∇step
24      Adapt CMA-ES parameters µ, Σ, p based on improvement ranking ∆i
25      if there is no change in the archive then
26          Restart CMA-ES with µ = 0, Σ = σgI.
27          Set θ to a randomly selected existing cell θi from the archive
28      end
29  end
Experiments. We compare CMA-MEGA and CMA-MAEGA in the six benchmark domains. Table 3 and Table 4 show the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domains. We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, LSI (StyleGAN), and LSI (StyleGAN2)) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (5, 168) = 165.7, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAEGA outperformed CMA-MEGA in the LP (sphere), arm repertoire, and LSI (StyleGAN2) domains. There was no statistically significant difference between the two algorithms in the LP (Rastrigin), LP (plateau), and LSI (StyleGAN) domains.
We attribute the absence of a statistical difference in the QD-score between the two algorithms on the LP (Rastrigin) and LP (plateau) domains to the perfect coverage obtained by both algorithms. Thus, any differences in QD-score are based on the objective values of the solutions returned by each algorithm. In LP (plateau), the optimal objective for each cell is easily obtainable for both methods. The LP (Rastrigin) domain contains many local optima because of the form of the objective function (Eq. 5). CMA-MEGA will converge to these optima before restarting, behaving as a single-objective optimizer within each local optimum. Because of the large number of local optima in the domain, this behavior results in a high QD-score.
In the LSI (StyleGAN) domain, we attribute similar performance between CMA-MEGA and CMA-MAEGA to the restart rules used to keep each search within the training distribution of StyleGAN. On the other hand, in the LSI (StyleGAN2) domain, we regularize the search space by an L2 penalty in latent space, allowing for a larger learning rate and a basic restart rule for both algorithms, while still preventing drift out of the training distribution of StyleGAN2. Because of the fewer restarts, CMA-MAEGA can take advantage of the density descent property, which was shown to improve exploration in CMA-MAE, and outperform CMA-MEGA. We note that because StyleGAN2 has a better conditioning on the latent space (Karras et al., 2020b), it is better suited for gradient-based optimizers, which helps better distinguish between the two algorithms.
E THEORETICAL PROPERTIES OF CMA-MAE
Theorem E.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
Proof. CMA-ES and CMA-MAE differ only on how they rank solutions. CMA-ES ranks solutions purely based on the objective f , while CMA-MAE ranks solutions by f − te, where te is the acceptance threshold initialized by minf . Thus, to show that CMA-ES is equivalent to CMA-MAE for α = 0, we only need to show that they result in identical rankings.
In CMA-MAE, te is updated as follows: te ← (1 − α)te + αf. For α = 0, te = minf is invariant for the whole algorithm: te ← 1 · te + 0 · f = te. Therefore, CMA-MAE ranks solutions based on f − minf. However, comparison-based sorting is invariant to order-preserving transformations of the values being sorted (Hansen, 2016). Thus, CMA-ES and CMA-MAE rank solutions identically.
Next, we prove that CMA-ME is equivalent to CMA-MAE with the following caveats. First, we assume that CMA-ME restarts only with the CMA-ES restart rules, rather than the additional “no improvement” restart condition from Fontaine et al. (2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection rather than filtering selection.
Lemma E.2. During execution of the CMA-MAE algorithm with α = 1, the threshold te is equal to f(θe) for cells that are occupied by a solution θe and to minf for all empty cells.
Proof. We will prove the lemma by induction. All empty cells are initialized with te = minf , satisfying the basis step. Then, we will show that if the statement holds after k archive updates, it will hold after a subsequent update k + 1.
Assume that at step k we generate a new solution θi mapped to a cell e. We consider two cases:
Case 1: The archive cell e is empty. Then, f(θi) > minf and both CMA-ME and CMA-MAE will place θi in the archive as the new cell occupant θe. The threshold te is updated as te = (1 − α)te + αf(θe) = 0 · minf + 1 · f(θe) = f(θe).

Case 2: The archive cell e contains an incumbent solution θe. Then, either f(θi) ≤ f(θe) or f(θi) > f(θe). If f(θi) ≤ f(θe), then the archive does not change and the inductive step holds via the inductive hypothesis. If f(θi) > f(θe), then θi becomes the new cell occupant θe and te is updated as te = (1 − α)te + αf(θe) = 0 · te + 1 · f(θe) = f(θe).
Theorem E.3. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
Proof. Both CMA-ME and CMA-MAE rank candidate solutions θi based on improvement values ∆i. While CMA-ME and CMA-MAE compute ∆i differently, we will show that for α = 1, the rankings are identical for the two algorithms.
We assume a new candidate solution mapped to a cell e. We describe first the computation of ∆i for CMA-ME. CMA-ME ranks solutions that discover an empty cell based on their objective value. Thus,
if θi discovers an empty cell, ∆i = f(θi). On the other hand, if θi is mapped to a cell occupied by another solution θe, it will rank θi based on the improvement ∆i = f(θi)− f(θe). CMA-ME performs a two-stage ranking, where it ranks all solutions that discover empty cells before solutions that improve occupied cells.
We now show the computation of ∆i for CMA-MAE with α = 1. If θi discovers an empty cell, ∆i = f(θi) − te and by Lemma E.2 ∆i = f(θi) − minf. If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi) − te and by Lemma E.2 ∆i = f(θi) − f(θe). Comparing the values ∆i between the two algorithms, we observe the following: (1) If θi discovers an empty cell, ∆i = f(θi) − minf for CMA-MAE. However, minf is a constant and comparison-based sorting is invariant to order-preserving transformations (Hansen, 2016), thus ranking by ∆i = f(θi) − minf is identical to ranking by ∆i = f(θi) performed by CMA-ME. (2) If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi) − f(θe) for both algorithms. (3) Because minf is an arbitrarily large negative number, f(θi) − minf > f(θi) − f(θe). Thus, CMA-MAE will always rank solutions that discover empty cells before solutions that are mapped to occupied cells, identically to CMA-ME.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to CMA-ME’s discount function as α increases from 0 to 1. We show this for the special case of a fixed sequence of candidate solutions. Theorem E.4. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We prove the theorem via induction over the sequence of solution additions. fA is the histogram formed by the thresholds te over all archive cells e in the archive. Thus, we prove fAi ≤ fAj by showing that te(Ai) ≤ te(Aj) for all archive cells e after m archive additions. As a basis step, we note that Ai equals Aj as both archives are initialized with minf .
Our inductive hypothesis states that after k archive additions we have te(Ai) ≤ te(Aj), and we need to show that te(Ai) ≤ te(Aj) after solution θk+1 is added to each archive. Our solution θk+1 has three cases with respect to the acceptance thresholds:
Case 1: f(θk+1) ≤ te(Ai) ≤ te(Aj). The solution is not added to either archive and our property holds from the inductive hypothesis.
Case 2: te(Ai) ≤ f(θk+1) ≤ te(Aj). The solution is added to Ai, but not Aj , thus t′e(Aj) = te(Aj). We follow the threshold update: t′e(Ai) = (1− αi)te(Ai) + αif(θk+1). Next, we need to show that t′e(Ai) ≤ t′e(Aj) to complete the inductive step:
(1 − αi)te(Ai) + αif(θk+1) ≤ f(θk+1) ⇐⇒ (1 − αi)te(Ai) ≤ (1 − αi)f(θk+1) ⇐⇒ te(Ai) ≤ f(θk+1), since 1 − αi > 0 (as αi < 1).
The last inequality holds true per our initial assumption for Case 2. From the inductive hypothesis, we have f(θk+1) ≤ te(Aj) = t′e(Aj). Case 3: te(Ai) ≤ te(Aj) ≤ f(θk+1). The solution θk+1 is added to both archives. We need to show that t′e(Ai) ≤ t′e(Aj):
t′e(Ai) ≤ t′e(Aj) ⇐⇒ (1− αi)te(Ai) + αif(θk+1) ≤ (1− αj)te(Aj) + αjf(θk+1) (9)
We can rewrite Eq. 9 as:
(1− αj)te(Aj)− (1− αi)te(Ai) + αjf(θk+1)− αif(θk+1) ≥ 0 (10)
First, note that:
(1− αj)te(Aj)− (1− αi)te(Ai) ≥ (1− αj)te(Ai)− (1− αi)te(Ai) = (1− αj − 1 + αi)te(Ai) = (αi − αj)te(Ai).
Thus: (1− αj)te(Aj)− (1− αi)te(Ai) ≥ (αi − αj)te(Ai) (11)
From Eq. 10 and 11 we have:
(1− αj)te(Aj) + αjf(θk+1)− (1− αi)te(Ai)− αif(θk+1) ≥ (αi − αj)te(Ai) + (αj − αi)f(θk+1) = (αj − αi)(f(θk+1)− te(Ai))
As αj > αi and f(θk+1) ≥ te(Ai), we have (αj − αi)(f(θk+1) − te(Ai)) ≥ 0. This completes the proof that Eq. 10 holds.
As all cases in our inductive step hold, our proof by induction is complete.
Next, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, the algorithm maintains a histogram of occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher.
Lemma E.5. The threshold te after k additions to cell e forms a strictly increasing sequence for a constant objective function f(θ) = C for all θ ∈ Rn, when 0 < α < 1 and minf < C.
Proof. To show that te after k additions to cell e forms a strictly increasing sequence, we write a recurrence relation for te after k solutions have been added to cell e. Since f(θi) = C for every added solution, the recurrence is te(k) = (1 − α)te(k − 1) + αC with te(0) = minf. To show the recurrence is increasing, we need to show that te(k) > te(k − 1) for all k ≥ 1. We prove the inequality via induction over cell additions k. As a basis step, we show te(1) > te(0): (1 − α)minf + αC > minf ⇐⇒ αC − α·minf > 0 ⇐⇒ αC > α·minf. As C > minf and α > 0, the basis step holds.
For the inductive step, we assume that te(k) > te(k − 1) and need to show that te(k + 1) > te(k): te(k + 1) > te(k) ⇐⇒ (1 − α)te(k) + αC > (1 − α)te(k − 1) + αC ⇐⇒ (1 − α)te(k) > (1− α)te(k − 1) ⇐⇒ te(k) > te(k − 1).
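Solving the recurrence in closed form gives te(k) = C − (C − minf)(1 − α)^k, which increases strictly towards C. A short numerical illustration follows; the values α = 0.5, C = 100, and minf = 0 are chosen arbitrarily.

```python
def threshold_after(k, alpha=0.5, C=100.0, min_f=0.0):
    """Closed form of t_e(k) = (1 - alpha) * t_e(k - 1) + alpha * C with t_e(0) = min_f."""
    return C - (C - min_f) * (1 - alpha) ** k

print([round(threshold_after(k), 2) for k in range(6)])
# [0.0, 50.0, 75.0, 87.5, 93.75, 96.88] -- strictly increasing towards C = 100
```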
Theorem E.6. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
Proof. We will prove that for an arbitrary archive A with both the occupancy count for each cell oe and the threshold value te computed with arbitrary learning rate 0 < α < 1, CMA-MAE results in the same ranking for an arbitrary batch of solutions {θi} as the approximate density descent algorithm. We let θi and θj be two arbitrary solutions in the batch mapped to cells ei and ej. Without loss of generality, we let oei ≤ oej. The approximate density descent algorithm will thus rank θi before θj. We will show that CMA-MAE results in the same ranking.
Because the objective is constant, the threshold of each cell depends only on the cell’s occupancy count, so if oei ≤ oej, then by Lemma E.5 tei(oei) ≤ tej(oej). Equivalently, C − tei(oei) ≥ C − tej(oej). Thus, the archive improvement from adding θi is at least as large as the improvement from adding θj, and CMA-MAE ranks θi before θj, identically to density descent.
While Theorem E.6 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
Conjecture E.7. The CMA-MAE algorithm becomes equivalent to the density descent algorithm for a subset of archive cells for an arbitrary convex objective f , where the cardinality of the subset of cells increases as the number of iterations increases.
We provide intuition for our conjecture through the lens of the elite hypervolume hypothesis (Vassiliades & Mouret, 2018). The elite hypervolume hypothesis states that optimal solutions for the MAP-Elites archive form a connected region in search space. Later work (Rakicevic et al., 2021) connected the elite hypervolume hypothesis to the manifold hypothesis (Fefferman et al., 2016) in machine learning, stating that the elite hypervolume can be represented by a low-dimensional manifold in search space.
For our conjecture, we assume that the elite hypervolume hypothesis holds and that there exists a smooth manifold representing the hypervolume. Next, we assume in the conjecture that f is an arbitrary convex function. As f is convex, early in the CMA-MAE search the discount function fA will be flat and the search point θ will approach the global optimum following CMA-ES’s convergence properties (Hansen & Ostermeier, 1997; Hansen et al., 2003), where the precision of convergence is controlled by the archive learning rate α. By definition, the global optimum θ∗ is within the elite hypervolume, as no other solution of higher quality exists within its archive cell. Assuming the elite hypervolume hypothesis holds, a subset of adjacent solutions in search space will also be in the hypervolume due to the connectedness of the hypervolume. As fA increases around the global optimum, we conjecture that the function f − fA will form a plateau around the optimum, since its value f(θ∗) − fA(θ∗) will approach the value f(θi) − fA(θi) of adjacent solutions θi. By Theorem E.6, we have a density descent algorithm within the plateau, pushing CMA-MAE to discover solutions on the frontier of the known hypervolume.
Finally, we remark that our conjecture implies that f − fA tends towards a constant function in the limit, resulting in a density descent algorithm across the elite hypervolume manifold as the number of generated solutions approaches infinity. We leave a formal proof of this conjecture for future work.
F THEORETICAL PROPERTIES OF CMA-MAEGA
In this section, we investigate how the theoretical properties of CMA-MAE apply to CMA-MAEGA. Many of the properties carry over almost directly; however, while CMA-MAE is equivalent to the single-objective optimization algorithm CMA-ES for α = 0, there is no existing single-objective counterpart to CMA-MAEGA. To make the direct mapping easier, we introduce such a counterpart: the gradient arborescence ascent algorithm.
The gradient arborescence ascent algorithm is similar to CMA-MEGA, but without an archive. Like CMA-MEGA, the algorithm assumes a differentiable objective f and differentiable measures m. However, the algorithm leverages the objective and measure function gradients only to improve the optimization of the objective f , rather than to find solutions that are diverse with respect to measures m. As with CMA-MEGA, the gradient arborescence algorithm branches in objective-measure space. However, the algorithm ranks solutions purely by the objective function f and adapts the coefficient distribution N(µ,Σ) towards the natural gradient of the objective f .
Next, we prove properties of CMA-MAEGA that directly follow from the properties of CMA-MAE.
Theorem F.1. The gradient arborescence ascent algorithm is equivalent to CMA-MAEGA when α = 0, if gradient arborescence ascent restarts from an archive elite.
Proof. We note that CMA-MAEGA and the gradient arborescence ascent algorithm differ only in how they rank solutions, and we note that the differences between CMA-MAE and CMA-ES mirror the differences between CMA-MAEGA and gradient arborescence ascent algorithm. So by directly adapting the proof of Theorem E.1, we complete our proof.
Theorem F.2. The CMA-MEGA algorithm is equivalent to CMA-MAEGA when α = 1 and minf is an arbitrarily large negative number.
Proof. We note that CMA-MAEGA and the CMA-MEGA algorithm differ only in how they rank solutions and how they update the archive A, and we note that the differences between CMA-MAE and CMA-ME mirror the differences between CMA-MAEGA and CMA-MEGA. So by directly adapting the proof of Theorem E.3, we complete our proof.
Theorem F.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAEGA that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We note that CMA-MAE and CMA-MAEGA update the archive A in exactly the same way. Therefore, the proof follows directly by adapting the proof of Theorem E.4 to CMA-MAEGA.
Next, we wish to show that CMA-MAEGA results in density descent in measure space. However, we need a counterpart to the approximate density descent algorithm we defined in Theorem E.6.
Consider an approximate density descending arborescence algorithm that is identical to CMA-MEGA, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains an occupancy count oe for each cell e, which is the number of times a solution was generated in that cell. The algorithm ranks solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher. The algorithm takes steps in search space Rn that minimize the approximate density function defined by the archive and adapts the coefficient distribution N(µ,Σ) towards coefficients that minimize the density function.
Theorem F.4. The CMA-MAEGA algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descending arborescence algorithm, when 0 < α < 1 and minf < C.
Proof. The proof of Theorem E.6 relies only on how CMA-MAE updates the archive A and acceptance threshold te. The proof of this theorem follows directly by adapting the proof of Theorem E.6 to CMA-MAEGA.
G DERIVATION OF THE CONVERSION FORMULA FOR THE ARCHIVE LEARNING RATE
In this section, we derive the archive learning rate conversion formula α2 = 1 − (1 − α1)^r mentioned in Section 7 of the main paper, where r is the ratio between archive cell counts, and α1 and α2 are archive learning rates for two archives A1 and A2.
Given an archive learning rate α1 for A1, we want to derive an equivalent archive learning rate α2 for A2 that results in robust performance when CMA-MAE is run with either A1 or A2. A principled way to derive a conversion formula for α2 is to look for an invariance property that affects the performance of CMA-MAE and that holds when CMA-MAE generates solutions in archives A1 and A2.
Since CMA-MAE ranks solutions by f − fA, we wish for fA to increase at the same rate in the two archives. Since fA(θ) = te, where te is the cell that a solution θ maps to, we select the average value of the acceptance thresholds te over all cells in each archive as our invariant property.
We assume an arbitrary sequence of N solution additions θ1, θ2, ..., θN, evenly dispersed across the archive cells. We then specify te as a function that maps k cell additions to a threshold value in archive cell e.3 Equation 12 then defines the average value of te across the archive after N additions to an archive A with M cells.

(1/M) ∑_{i=1}^{M} te(N/M) (12)

Then, equation 13 defines the invariance we want to guarantee between archives A1 and A2, where te implicitly depends on each archive’s learning rate:

(1/M1) ∑_{i=1}^{M1} te(N/M1) = (1/M2) ∑_{i=1}^{M2} te(N/M2) (13)

3Here we abuse notation and view te as a function instead of a threshold, for simplicity and to highlight the connection to the threshold value te.

1. What is the focus of the paper regarding the Quality-Diversity problem in machine learning?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. Do you have any concerns about the methodology or experiments presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any comparisons with other works that could be made to validate the proposal further?

Summary Of The Paper
The paper proposes a variant of the existing CMA-ME algorithm for Quality-Diversity (QD) problems by introducing a learning-rate-based annealing function.
Strengths And Weaknesses
Strength:
The paper presents a systematic study with adequate empirical experiments.
Weakness:
The significance and relevance of QD problems in machine learning, along with some warm-up examples, should have been discussed in the introduction.
The simple algorithmic change to CMA-ME does not seem significant enough to warrant publication at ICLR. The theoretical proofs seem quite standard and straightforward.
For Figure 3, why do you plot the QD score of the different algorithms against the number of iterations? Won't the different algorithms perform different amounts of work in their inner loops, so that their "iterations" do not consume the same amount of time?
The proposal was not adequately validated through comparison against evolutionary algorithms from families other than CMA-ES, such as differential evolution (DE) for QD problems. Multi-objective optimization approaches such as the following should also have been compared:
Thomas Pierrot, Guillaume Richard, Karim Beguir, and Antoine Cully. 2022. Multi-objective quality diversity optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '22). Association for Computing Machinery, New York, NY, USA, 139–147. https://doi.org/10.1145/3512290.3528823
Why was the parametric ANOVA test used instead of non-parametric statistical tests?
Despite focusing on derivative-free QD algorithms, did you actually handle a non-differentiable function such as Weierstrass? Or am I missing something here?
Clarity, Quality, Novelty And Reproducibility
Given the earlier works on CMA-ME and QD problems, the novelty seems less than expected for ICLR. The linguistic quality of the paper needs very significant improvement.
The reproducibility aspect is appreciable; the authors released detailed source code.
ICLR
Title
Covariance Matrix Adaptation MAP-Annealing
Abstract
Single-objective optimization algorithms search for the single highest-quality solution with respect to an objective. Quality diversity (QD) algorithms, such as Covariance Matrix Adaptation MAP-Elites (CMA-ME), search for a collection of solutions that are both high-quality with respect to an objective and diverse with respect to specified measure functions. However, CMA-ME suffers from three major limitations highlighted by the QD community: prematurely abandoning the objective in favor of exploration, struggling to explore flat objectives, and having poor performance for low-resolution archives. We propose a new quality diversity algorithm, Covariance Matrix Adaptation MAP-Annealing (CMA-MAE), that addresses all three limitations. We provide theoretical justifications for the new algorithm with respect to each limitation. Our theory informs our experiments, which support the theory and show that CMA-MAE achieves state-of-the-art performance.
1 INTRODUCTION
Consider an example problem of searching for celebrity faces in the latent space of a generative model. As a single-objective optimization problem, we specify an objective f that targets a celebrity such as Tom Cruise. A single-objective optimizer, such as CMA-ES (Hansen, 2016), will converge to a single solution of high objective value, an image that looks like Tom Cruise as much as possible.
However, this objective has ambiguity. How old was Tom Cruise in the photo? Did we want the person in the image to have short or long hair? By instead framing the problem as a quality diversity optimization problem, we additionally specify a measure function m1 that quantifies age and a measure function m2 that quantifies hair length. A quality diversity algorithm (Pugh et al., 2015; Chatzilygeroudis et al., 2021), such as CMA-ME (Fontaine et al., 2020), can then optimize for a collection of images that are diverse with respect to age and hair length, but all look like Tom Cruise.
While previous work (Fontaine et al., 2020; 2021a;b; Earle et al., 2021) has shown that CMA-ME solves such QD problems efficiently, three important limitations of the algorithm have been discovered. First, on difficult-to-optimize objectives, variants of CMA-ME abandon the objective too soon (Tjanaka et al., 2022) and instead favor exploring the measure space, the vector space defined by the measure function outputs. Second, the CMA-ME algorithm struggles to explore flat objective functions (Paolo et al., 2021). Third, CMA-ME works well on high-resolution archives, but struggles to explore low-resolution archives (Cully, 2021; Fontaine & Nikolaidis, 2021a). We note that the chosen archive resolution affects the performance of all current QD algorithms.
We propose a new algorithm, CMA-MAE, that addresses these three limitations.
To address the first limitation, we derive an algorithm that smoothly blends between CMA-ES and CMA-ME. First, consider how CMA-ES and CMA-ME differ. At each step CMA-ES’s objective ranking maximizes the objective function f by approximating the natural gradient of f at the current solution point (Akimoto et al., 2010). In contrast, CMA-ME’s improvement ranking moves in the direction of the natural gradient of f − fA at the current solution point, where fA is a discount function equal to the objective of the best solution so far that has the same measure values as the current solution point. The function f − fA quantifies the gap between a candidate solution and the best solution so far at the candidate solution’s position in measure space.
Our key insight is to anneal the function fA by a learning rate α. We observe that when α = 0, our discount function fA never increases and our algorithm behaves like CMA-ES. However, when α = 1, our discount function always maintains the best solution for each region in measure space and our algorithm behaves like CMA-ME. For 0 < α < 1, CMA-MAE smoothly blends between the two algorithms’ behavior, allowing for an algorithm that spends more time on the optimization of f before transitioning to exploration. Figure 1 is an illustrative example of varying the learning rate α.
Our proposed annealing method naturally addresses the flat objective limitation. Observe that both CMA-ES and CMA-ME struggle on flat objectives f as the natural gradient becomes 0 in this case and each algorithm will restart. However, we show that, when CMA-MAE optimizes f − fA for 0 < α < 1, the algorithm becomes a descent method on the density histogram defined by the archive.
Finally, CMA-ME’s poor performance on low-resolution archives is likely caused by the non-stationary objective f − fA changing too quickly for the adaptation mechanism to keep up. Our archive learning rate α controls how quickly f − fA changes. We derive a conversion formula that yields equivalent values of α for different archive resolutions. Our conversion formula makes CMA-MAE the first QD algorithm invariant to archive resolution.
Overall, our work shows how a simple algorithmic change to CMA-ME addresses all three major limitations affecting CMA-ME’s performance and robustness. Our theoretical findings justify the aforementioned properties and inform our experiments, which show that CMA-MAE outperforms state-of-the-art QD algorithms and maintains robust performance across different archive resolutions.
2 PROBLEM DEFINITION
Quality Diversity. We adopt the quality diversity (QD) problem definition from Fontaine & Nikolaidis (2021a). A QD problem consists of an objective f : Rn → R that maps n-dimensional solution parameters to a scalar value denoting the quality of the solution and k measures mi : Rn → R or, as a vector function, m : Rn → Rk that quantify behavior or attributes of each solution1. The range of m forms a measure space S = m(Rn). The QD objective is to find a set of solutions θ ∈ Rn, such that m(θ) = s for each s in S and f(θ) is maximized.
The measure space S is continuous, but solving algorithms need to produce a finite collection of solutions. Therefore, QD algorithms in the MAP-Elites (Mouret & Clune, 2015; Cully et al., 2015) family relax the QD objective by discretizing the space S. Given T as the tessellation of S into M cells, the QD objective becomes finding a solution θi for each of the i ∈ {1, . . . ,M} cells, such that each θi maps to the cell corresponding to m(θi) in the tessellation T. The QD objective then becomes maximizing the objective value f(θi) over all cells:
max ∑_{i=1}^{M} f(θi) (1)
The differentiable quality diversity (DQD) problem (Fontaine & Nikolaidis, 2021a) is a special case of the QD problem where both the objective f and measures mi are first-order differentiable.
1In agent-based settings, such as reinforcement learning, the measure functions are sometimes called behavior functions and the outputs of each measure function are called behavioral characteristics or behavior descriptors.
(Figure 1 panels: CMA-ES, CMA-ME, CMA-MAE.)
3 PRELIMINARIES
We present several QD algorithms that solve derivative-free QD problems to provide context for our proposed CMA-MAE algorithm. Appendix D contains information about the DQD algorithm CMA-MEGA, which solves problems where exact gradient information is available.
MAP-Elites and MAP-Elites (line). The MAP-Elites QD algorithm produces an archive of solutions, where each cell in the archive corresponds to the provided tessellation T in the QD problem definition. The algorithm initializes the archive by sampling solutions from the solution space Rn from a fixed distribution. After initialization, MAP-Elites produces new solutions by selecting occupied cells uniformly at random and perturbing them with isotropic Gaussian noise: θ′ = θi + σN(0, I). For each new candidate solution θ′, the algorithm computes an objective f(θ′) and measures m(θ′). MAP-Elites places θ′ into the archive if the cell corresponding to m(θ′) is empty or if θ′ obtains a better objective value f(θ′) than the current occupant. The MAP-Elites algorithm results in an archive of solutions that are diverse with respect to the measure function m, but also high quality with respect to the objective f. Vassiliades & Mouret (2018) proposed the MAP-Elites (line) algorithm by augmenting the isotropic Gaussian perturbation with a linear interpolation between two solutions θi and θj: θ′ = θi + σ1N(0, I) + σ2N(0, 1)(θi − θj).

CMA-ME. Covariance Matrix Adaptation MAP-Elites (CMA-ME) (Fontaine et al., 2020) combines the archiving mechanisms of MAP-Elites with the adaptation mechanisms of CMA-ES (Hansen, 2016). Instead of perturbing archive solutions with Gaussian noise, CMA-ME maintains a multivariate Gaussian of search directions N(0, Σ) and a search point θ ∈ Rn. The algorithm updates the archive by sampling λ solutions around the current search point: θi ∼ N(θ, Σ). After updating the archive, CMA-ME ranks solutions via a two-stage ranking. Solutions that discover a new cell are ranked by the objective, ∆i = f(θi), and solutions that map to an occupied cell e are ranked by the improvement over the incumbent solution θe in that cell: ∆i = f(θi) − f(θe). CMA-ME prioritizes exploration by ranking all solutions that discover a new cell before all solutions that improve upon an existing cell. Finally, CMA-ME moves θ towards the largest improvement in the archive, according to the CMA-ES update rules. Fontaine & Nikolaidis (2021a) showed that the improvement ranking of CMA-ME approximates a natural gradient of a modified QD objective (see Eq. 1).
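The archive insertion rule shared by these algorithms is compact; the sketch below is a minimal NumPy illustration of a grid archive and one MAP-Elites step. The 2D grid bounds, the toy objective, and the measures (the first two solution components) are illustrative assumptions rather than the benchmark settings.

```python
import numpy as np

class MiniGridArchive:
    """Minimal MAP-Elites-style archive over a 2D measure space split into a grid."""
    def __init__(self, dims=(10, 10), low=(-1.0, -1.0), high=(1.0, 1.0)):
        self.dims, self.low, self.high = np.array(dims), np.array(low), np.array(high)
        self.objective = {}   # cell index -> best objective seen so far
        self.solution = {}    # cell index -> elite solution

    def cell_of(self, measures):
        frac = (np.asarray(measures) - self.low) / (self.high - self.low)
        idx = np.clip((frac * self.dims).astype(int), 0, self.dims - 1)
        return tuple(idx)

    def add(self, theta, obj, measures):
        e = self.cell_of(measures)
        if e not in self.objective or obj > self.objective[e]:
            self.objective[e], self.solution[e] = obj, theta  # replace weaker occupant
            return True
        return False

def map_elites_step(archive, sigma=0.1, rng=np.random.default_rng(0),
                    evaluate=lambda th: (-np.sum(th ** 2), th[:2])):
    """Select a random elite, perturb it with isotropic Gaussian noise, and insert it."""
    elites = list(archive.solution.values()) or [rng.normal(size=4)]
    theta = elites[rng.integers(len(elites))] + sigma * rng.normal(size=4)
    archive.add(theta, *evaluate(theta))
```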
4 PROPOSED ALGORITHMS
We present the CMA-MAE algorithm. While we focus on CMA-MAE, the same augmentations apply to CMA-MEGA to form the novel CMA-MAEGA algorithm (see Appendix D).
CMA-MAE. CMA-MAE is an algorithm that adjusts the rate at which the objective f − fA changes. First, consider at a high level how CMA-ME explores the measure space and discovers high-quality solutions. The CMA-ME algorithm maintains a solution point θ and an archive A of previously discovered solutions. When CMA-ME samples a new solution θ′, the algorithm computes the solution’s objective value f(θ′) and maps the solution to a cell e in the archive based on the measures m(θ′). CMA-ME then computes the improvement of the objective value f(θ′) of the new solution over a discount function fA : Rn → R. In CMA-ME, we define fA(θ′) by computing the cell e in
the archive corresponding to m(θ′) and letting fA(θ′) = f(θe), where θe is the incumbent solution of cell e. The algorithm ranks candidate solutions by improvement f(θ′)− fA(θ′) = f(θ′)− f(θe) and moves the search in the direction of higher ranked solutions.
Assume that CMA-ME samples a new solution θ′ with a high objective value of f(θ′) = 99. If the current occupant θe of the corresponding cell has a low objective value of f(θe) = 0.3, then the improvement in the archive ∆ = f(θ′) − f(θe) = 98.7 is high and the algorithm will move the search point θ towards θ′. Now, assume that in the next iteration the algorithm discovers a new solution θ′′ with objective value f(θ′′) = 100 that maps to the same cell as θ′. The improvement then is ∆ = f(θ′′)− f(θ′) = 1 as θ′ replaced θe in the archive in the previous iteration. CMA-ME would likely move θ away from θ′′ as the solution resulted in low improvement. In contrast, CMA-ES would move towards θ′′ as it ranks only by the objective f , ignoring previously discovered solutions with similar measure values.
In the above example, CMA-ME moves away from high-performing solutions in order to maximize how the archive changes. However, in domains with hard-to-optimize objective functions, it is beneficial to perform more optimization steps in high-performing regions (Tjanaka et al., 2022).
Like CMA-ME, CMA-MAE maintains a discount function fA(θ′) and ranks solutions by improvement f(θ′) − fA(θ′). However, instead of setting fA(θ′) equal to f(θe), we set fA(θ′) = te, where te is an acceptance threshold maintained for each cell in the archive A. When adding a candidate solution to the archive, we control the rate at which te changes by the archive learning rate α as follows: te ← (1 − α)te + αf(θ′). The archive learning rate α in CMA-MAE allows us to control how quickly we leave a high-performing region of measure space. For example, consider discovering solutions in the same cell with objective value 100 in 5 consecutive iterations. The improvement values computed by CMA-ME would be 100, 0, 0, 0, 0, thus CMA-ME would move rapidly away from this cell. The improvement values computed by CMA-MAE with α = 0.5 would diminish smoothly as follows: 100, 50, 25, 12.5, 6.25, enabling further exploitation of the high-performing region.
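The diminishing improvement values in this example follow directly from the threshold update; a few lines of Python reproduce the two sequences above (the starting threshold of 0 is an assumption of the example):

```python
def improvements(f_values, alpha, t=0.0):
    out = []
    for f in f_values:
        out.append(f - t)                        # Delta_i = f(theta_i) - t_e
        if f > t:
            t = (1 - alpha) * t + alpha * f      # anneal the acceptance threshold
    return out

print(improvements([100] * 5, alpha=1.0))   # CMA-ME-like: [100.0, 0.0, 0.0, 0.0, 0.0]
print(improvements([100] * 5, alpha=0.5))   # CMA-MAE:     [100.0, 50.0, 25.0, 12.5, 6.25]
```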
Next, we walk through the CMA-MAE algorithm step-by-step. Algorithm 1 shows the pseudo-code for CMA-MAE with the differences from CMA-ME highlighted in yellow. First, on line 2 we initialize the acceptance threshold to minf . In each iteration we sample λ solutions around the current search point θ (line 5). For each candidate solution θi, we evaluate the solution and compute the objective value f(θi) and measure values m(θi) (line 6). Next, we compute the cell e in the archive that corresponds to the measure values and the improvement ∆i over the current threshold te (lines 7-8). If the objective crosses the acceptance threshold te, we replace the incumbent θe in the archive and increase the acceptance threshold te (lines 9-11). Next, we rank all candidate solutions θi by their improvement ∆i. Finally, we step our search point θ and adapt our covariance matrix Σ towards the direction of largest improvement (lines 14-15) according to CMA-ES’s update rules (Hansen, 2016).
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affects how we replace solutions in the archive and how we calculate ∆i. CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA, in Appendix D.
5 THEORETICAL PROPERTIES OF CMA-MAE
We provide insights about the behavior of CMA-MAE for different α values. We include all proofs in Appendix E. CMA-MAEGA has similar theoretical properties discussed in Appendix F.
Theorem 5.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
The next theorem states that CMA-ME is equivalent to CMA-MAE when α = 1 with the following caveats: First, we assume that CMA-ME restarts only by the CMA-ES restart rules, rather than the additional “no improvement” restart rule in prior work (Fontaine et al., 2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection (Hansen, 2016) rather than filtering selection (Fontaine et al., 2020).
Algorithm 1 Covariance Matrix Adaptation MAP-Annealing (CMA-MAE)
CMA-MAE (evaluate, θ0, N, λ, σ, minf, α)
input: An evaluation function evaluate that computes the objective and measures, an initial solution θ0, a desired number of iterations N, a branching population size λ, an initial step size σ, a minimal acceptable solution quality minf, and an archive learning rate α.
result: Generate Nλ solutions, storing elites in an archive A.
1 Initialize solution parameters θ to θ0 and CMA-ES parameters Σ = σI and p, where p denotes the CMA-ES internal parameters.
2 Initialize the archive A and the acceptance threshold te with minf for each cell e.
3 for iter ← 1 to N do
4     for i ← 1 to λ do
5         θi ∼ N(θ, Σ)
6         f, m ← evaluate(θi)
7         e ← calculate_cell(A, m)
8         ∆i ← f − te
9         if f > te then
10            Replace the current occupant in cell e of the archive A with θi
11            te ← (1 − α)te + αf
12        end
13    end
14    rank θi by ∆i
15    Adapt CMA-ES parameters θ, Σ, p based on improvement ranking ∆i
16    if CMA-ES converges then
17        Restart CMA-ES with Σ = σI.
18        Set θ to a randomly selected existing elite θi from the archive
19    end
20 end
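For reference, the sketch below is a minimal Python rendering of one iteration of the inner loop of Algorithm 1 (lines 4-14): it samples a batch around θ, applies the threshold-annealed archive update, and returns the improvement values used for ranking. The CMA-ES parameter adaptation (line 15) is omitted, and the evaluate and calculate_cell callables (with a simplified calculate_cell signature) are assumed to be supplied by the domain.

```python
import numpy as np

def cma_mae_batch(theta, Sigma, evaluate, calculate_cell, archive, thresholds,
                  alpha=0.01, lam=36, min_f=0.0, rng=np.random.default_rng(0)):
    """One CMA-MAE batch: sample, evaluate, update the archive, and rank by improvement."""
    candidates = rng.multivariate_normal(theta, Sigma, size=lam)   # line 5
    deltas = []
    for theta_i in candidates:
        f, m = evaluate(theta_i)                                   # line 6
        e = calculate_cell(m)                                      # line 7
        t_e = thresholds.get(e, min_f)
        deltas.append(f - t_e)                                     # line 8
        if f > t_e:                                                # line 9
            archive[e] = (theta_i, f)                              # line 10
            thresholds[e] = (1 - alpha) * t_e + alpha * f          # line 11
    ranking = np.argsort(deltas)[::-1]                             # line 14: best improvement first
    return candidates, np.asarray(deltas), ranking
```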
Theorem 5.2. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to the discount function used by CMA-ME, as α increases from 0 to 1. We focus on the special case of a fixed sequence of candidate solutions.
Theorem 5.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1, θ2, ..., θm, it follows that fAi(θ) ≤ fAj(θ) for all θ ∈ Rn.
Finally, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains a density histogram of the occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher.
Theorem 5.4. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
While Theorem 5.4 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
6 EXPERIMENTS
We compare the performance of CMA-MAE with the state-of-the-art QD algorithms MAP-Elites, MAP-Elites (line), and CMA-ME, using existing Pyribs (Tjanaka et al., 2021) QD library implementations. We set α = 0.01 for CMA-MAE and include additional experiments for varying α
in section 7. Because annealing methods replace solutions based on the threshold, we retain the best solution in each cell for comparison purposes. We include additional comparisons between CMA-MEGA and CMA-MAEGA – the gradient-based counterpart of CMA-MAE – in Appendix K.
We select the benchmark domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function that is constant everywhere for solutions within a fixed range and has a quadratic penalty for solutions outside the range. We describe the domains in detail in Appendix B.
6.1 EXPERIMENT DESIGN
Independent Variables. We follow a between-groups design with two independent variables: the algorithm and the domain.
Dependent Variables. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score metric by the archive size (the total number of cells from the tessellation of measure space) to make the metric invariant to archive resolution. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
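For concreteness, a small helper computing both dependent measures from an archive represented as a cell-to-objective mapping (the dictionary representation and the example values are assumptions of this sketch):

```python
def qd_metrics(cell_objectives, total_cells):
    """Normalized QD-score and coverage for an archive given as {cell: objective}."""
    qd_score = sum(cell_objectives.values()) / total_cells   # sum of f over cells, normalized
    coverage = len(cell_objectives) / total_cells             # fraction of occupied cells
    return qd_score, coverage

# Example: a 100x100 archive with two occupied cells.
print(qd_metrics({(3, 7): 80.0, (50, 50): 60.0}, total_cells=100 * 100))
```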
6.2 ANALYSIS
Table 1 shows the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domain. Fig. 3 shows the QD-score values for increasing number of iterations and example archives for CMA-MAE and CMA-ME, with 95% confidence intervals.
We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, and LSI) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (12, 320) = 1958.34, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAE outperformed all baselines in all benchmark domains.
For the arm repertoire domain, we can compute the optimal archive coverage by testing whether each cell overlaps with a circle of radius equal to the maximum arm length (see Appendix B). We observe that CMA-MAE approaches the computed optimal coverage 80.24% for a resolution of 100× 100 and outperforms CMA-MEGA (Fontaine & Nikolaidis, 2021a) (see Appendix K).
These results show that the archive learning rate α is particularly beneficial for CMA-MAE. We observe that CMA-MAE initially explores regions of the measure space that have high-objective values. Once the archive becomes saturated, CMA-MAE reduces to approximate density descent, as we prove in Theorem 5.4 for flat objectives. On the other hand, CMA-ME does not receive any exploration signal when the objective landscape becomes flat, resulting in poor performance.
While our results show improved quantitative results on the LSI domain, Appendix I discusses how to improve the visual quality by leveraging techniques from the generative art community. Fig. 4 shows an example collage generated by adopting improvements for guiding StyleGAN with CLIP.
7 ON THE ROBUSTNESS OF CMA-MAE
Next, we present two studies that evaluate the robustness of CMA-MAE across two hyperparameters that may affect algorithm performance: the archive learning rate α and the archive resolution.
Archive Learning Rate. We examine the effect of different archive learning rates on the performance of CMA-MAE in the linear projection and arm repertoire domains. We vary the learning rate from 0 to 1 on an exponential scale, while keeping the resolution constant in each domain.
Table 2 shows that running CMA-MAE with the different 0 < α < 1 results in relatively similar performance, showing that CMA-MAE is fairly robust to α values. On the other hand, if α = 0 or α = 1 the performance drops drastically. Setting α = 1 results in very similar performance with CMA-ME, which supports our insight from Theorem 5.2.
Archive Resolution. As noted by Cully (2021) and Fontaine & Nikolaidis (2021a), quality diversity algorithms in the MAP-Elites family sometimes perform differently when run with different archive resolutions. For example, in the linear projection domain presented in Fontaine et al. (2020), CMA-ME outperformed MAP-Elites and MAP-Elites (line) for archives of resolution 500 × 500, while in this paper we observe that it performs worse for resolution 100 × 100. In this study, we investigate how CMA-MAE performs at different archive resolutions.
First, we note that the optimal archive learning rate α is dependent on the resolution of the archive. Consider as an example a sequence of solution additions to two archives A1 and A2 of resolution 100 × 100 and 200 × 200, respectively. A2 subdivides each cell in A1 into four cells, thus archive A2’s thresholds te should increase at a four times faster rate than A1. To account for this difference, we compute α2 for A2 via a conversion formula α2 = 1 − (1 − α1)^r (see derivation in Appendix G), where r is the ratio of cell counts between archives A1 and A2. We initialize α1 = 0.01 for A1. In the above example, α2 = 1 − (1 − 0.01)^4 = 0.0394.
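A small helper reproduces this conversion and the worked example above; it is a sketch of the formula itself, not part of any library API.

```python
def convert_alpha(alpha_1, cells_1, cells_2):
    """Convert an archive learning rate to an equivalent one at a different resolution."""
    r = cells_2 / cells_1                  # ratio of cell counts between the two archives
    return 1.0 - (1.0 - alpha_1) ** r

# 100x100 -> 200x200: r = 4, so alpha_2 = 1 - 0.99**4
print(round(convert_alpha(0.01, 100 * 100, 200 * 200), 4))   # 0.0394
```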
Fig. 5 shows the QD-score of CMA-MAE with the resolution-dependent archive learning rate and the baselines for each benchmark domain. CMA-ME performs worse as the resolution decreases because the archive changes quickly at small resolutions, affecting CMA-ME’s adaptation mechanism. On the contrary, MAP-Elites and MAP-Elites (line) perform worse as the resolution increases due to having more elites to perturb. CMA-MAE’s performance is invariant to the resolution of the archive.
8 RELATED WORK
Quality Diversity Optimization. The predecessor to quality diversity optimization, simply called diversity optimization, originated with the Novelty Search algorithm (Lehman & Stanley, 2011a), which searches for a collection of solutions that are diverse in measure space. Later work introduced the Novelty Search with Local Competition (NSLC) (Lehman & Stanley, 2011b) and MAP-Elites (Cully et al., 2015; Mouret & Clune, 2015) algorithms, which combined single-objective optimization with diversity optimization and were the first QD algorithms. Since then, several QD algorithms have been proposed, based on a variety of single-objective optimization methods, such as Bayesian optimization (Kent & Branke, 2020), evolution strategies (Conti et al., 2018; Colas et al., 2020; Fontaine et al., 2020), differential evolution (Choi & Togelius, 2021), and gradient ascent (Fontaine & Nikolaidis, 2021a). Several works have improved selection mechanisms (Sfikas et al., 2021; Cully & Demiris, 2017), archives (Fontaine et al., 2019; Vassiliades et al., 2018; Smith et al., 2016), and perturbation operators (Vassiliades & Mouret, 2018; Nordmoen et al., 2018).
QD with Gradient Information. Several works combine gradient information with quality diversity optimization in ways that do not leverage the objective and measure gradients directly. For example, in model-based quality diversity optimization (Gaier et al., 2018; Hagg et al., 2020; Cazenille et al., 2019; Keller et al., 2020; Lim et al., 2021; Zhang et al., 2021; Gaier et al., 2020), Rakicevic et al. (2021) trains an autoencoder on the archive of solutions and leverages the Jacobian of the decoder network to compute the covariance of the Gaussian perturbation. In quality diversity reinforcement learning (QD-RL), several works (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022) approximate a reward gradient or diversity gradient via a critic network, action space noise, or evolution strategies and incorporate those gradients into a QD-RL algorithm.
Acceptance Thresholds. Our proposed archive learning rate α was loosely inspired by simulated annealing methods (Bertsimas & Tsitsiklis, 1993) that maintain an acceptance threshold that gradually becomes more selective as the algorithm progresses. The notion of an acceptance threshold is also closely related to minimal criterion methods in evolutionary computation (Lehman & Stanley, 2010; Brant & Stanley, 2017; 2020; Stanley et al., 2016). Our work differs by both 1) maintaining an acceptance threshold per archive cell rather than a global threshold and 2) annealing the threshold.
9 LIMITATIONS AND FUTURE WORK
Our approach introduced two hyperparameters, α and minf , to control the rate that f − fA changes. We observed that an α set strictly between 0 and 1 yields theoretical exploration improvements and that CMA-MAE is robust with respect to the exact choice of α. We additionally derived a conversion formula that converts an α1 for a specific archive resolution to an equivalent α2 for a different resolution. However, the conversion formula still requires practitioners to specify a good initial value of α1. Future work will explore ways to automatically initialize α, similar to how CMA-ES automatically assigns internal parameters (Hansen, 2016).
Quality diversity optimization is a rapidly growing branch of stochastic optimization with applications in generative design (Hagg et al., 2021; Gaier et al., 2020; 2018), automatic scenario generation in robotics (Fontaine & Nikolaidis, 2021c; Fontaine et al., 2021a; Fontaine & Nikolaidis, 2021b), reinforcement learning (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022), damage recovery in robotics (Cully et al., 2015), and procedural content generation (Gravina et al., 2019; Fontaine et al., 2021b; Zhang et al., 2021; Earle et al., 2021; Khalifa et al., 2018; Steckel & Schrum, 2021; Schrum et al., 2020; Sarkar & Cooper, 2021; Bhatt et al., 2022). Our paper introduces a new quality diversity algorithm, CMA-MAE. Our theoretical findings inform our experiments, which show that CMA-MAE addresses three major limitations affecting the CMA-ME algorithm, leading to state-of-the-art performance.
10 ETHICS STATEMENT
By controlling the trade-off between exploration and exploitation in QD algorithms, we aim towards improving their performance and robustness, thus making these algorithms easier to apply in a wide range of domains and applications. One promising application is synthetically extracting datasets from generative models to train machine learning algorithms Jahanian et al. (2021); Besnier et al. (2020). This can raise ethical considerations because generative models can reproduce and exacerbate existing biases in the datasets that they were trained on (Jain et al., 2020; Menon et al., 2020). On the other hand, quality diversity algorithms with carefully selected measure functions can target diversity with desired attributes, thus we hypothesize that they can be effective in generating balanced datasets. Furthermore, by attempting to find diverse solutions, QD algorithms are a step towards open-endedness in AI Stanley et al. (2017) and will often result in unexpected and often surprising emergent behaviors (Lehman et al., 2020). We recognize that this presents several challenges in predictability and monitoring of AI systems (Hendrycks et al., 2021), and we highlight the importance of future work on balancing the tradeoff between open-endedness and control (Ecoffet et al., 2020).
11 REPRODUCIBILITY STATEMENT
In the supplemental material we provide complete source code for all algorithms and experiments, as well as the Conda environments for installing project dependencies. The “README.md” document provides complete instructions for both the setup and execution of all experiments. In Appendix A we provide all hyperparameters. In Appendix B we provide domain-specific details for replicating all experimental domains. In Appendix C we provide information about the computational resources and hardware we used to run our experiments. In Appendix D we provide the pseudocode for the CMA-MAEGA algorithm, the DQD counterpart of CMA-MAE. In Appendix E we provide the proofs of all theorems in the paper. In Appendix F we provide the theoretical properties of CMA-MAEGA. In Appendix G we provide the derivation of the conversion formula for the archive learning rate. In Appendix H we provide a batch threshold update rule that is invariant to the order in which the solutions are processed within a batch update. In Appendix I we discuss the implementation details for additional experiments that improve the quality of the generated images in the latent space illumination domain. In Appendix K we present all metrics with standard errors for each algorithm and domain.
APPENDIX
A HYPERPARAMETER SELECTION
For all domains we mirror the hyperparameter selection of Fontaine & Nikolaidis (2021a). For CMA-MAE and CMA-MAEGA, we duplicate the hyperparameter selections of CMA-ME and CMAMEGA, respectively. Following Fontaine et al. (2020), we run all algorithms with 15 emitters on the linear projection and arm repertoire domains. In the latent space illumination domain, we run experiments with only one emitter, due to the computational expense of the domain. Emitters are independent CMA-ES instances that run in parallel with a shared archive. For each algorithm, we select a batch size λ = 36 following Fontaine & Nikolaidis (2021a). For MAP-Elites and MAP-Elites (line), we initialize the archive with 100 random solutions, sampled from the distribution N (0, I). These initial solutions do not count in the evaluation budget for MAP-Elites and MAP-Elites (line). For algorithms in the CMA-ME family (CMA-ME, CMA-MAE, CMA-MEGA, and CMA-MAEGA), we initialize θ0 = 0 for every domain.
In our experiments we want to directly compare the ranking mechanisms of CMA-ME and CMA-MAE. However, CMA-ME is typically run with a “no improvement” restart rule, where the algorithm will restart if no solution changes the archive. Due to CMA-MAE’s annealed acceptance threshold te, a “no improvement” restart rule would cause CMA-ME and CMA-MAE to restart at different rates, confounding the effects of restarts and rankings. Filter selection also has a similar confounding effect as solutions are selected if they change the archive. For these reasons, in the main paper we run CMA-ME with a basic restart rule (CMA-ES style restarts only (Hansen, 2016)) and µ selection (Hansen, 2016) (selecting the top half of the ranking). In Appendix Section K, we run an extra CMA-ME with filter selection and the “no improvement” restart rule, which we denote CMA-ME*. We include, as an additional baseline, a configuration of CMA-ME that mixes emitters that optimize only for the objective with emitters that optimize for improvement, a configuration first studied by Cully (2021). We refer to this configuration as CMA-ME (imp, opt).
In the latent space illumination domain, due to the computational expense of the domain, we compare directly against the results from Fontaine & Nikolaidis (2021a), where we obtained the data (MIT license) with consent from the authors. For CMA-MAE and CMA-MAEGA we include the “no improvement” restart rule to match CMA-ME and CMA-MEGA as closely as possible. For this domain, we take gradient steps with the Adam optimizer (Kingma & Ba, 2015), following the recommendation of Fontaine & Nikolaidis (2021a). However, we run CMA-MAE with µ selection, since we found that small values of the archive learning rate α makes filter selection worse.
In Appendix I, we describe a second LSI experiment on StyleGAN2 (Karras et al., 2020b) configured by insights from the generative art community that improve the quality of single-objective latent space optimization. For this domain, we configure CMA-MAEGA and CMA-MEGA to use a “basic” restart rule because the latent space L2 regularization keeps solutions in the StyleGAN2 training distribution. For this experiment, the latent space is large (n = 9216), so we exclude CMA-ME and CMA-MAE due to the size of the covariance matrix (9216× 9216) and the prohibitive cost for computing an eigendecomposition of a large covariance matrix.
Linear Projection (sphere, Rastrigin, plateau).
• MAP-Elites: σ = 0.5
• MAP-Elites (line): σ1 = 0.5, σ2 = 0.2
• CMA-ME: σ = 0.5, µ selection, basic restart rule
• CMA-ME*: σ = 0.5, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.5, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.5, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 10.0, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 10.0, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Arm Repertoire.
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.2, µ selection, basic restart rule
• CMA-ME*: σ = 0.2, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.2, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.2, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 0.05, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 0.05, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Latent Space Illumination. (StyleGAN)
• MAP-Elites: σ = 0.2
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.02, filter selection, no improvement restart rule
• CMA-MAE: σ = 0.02, α = 0.1, minf = 55, µ selection, no improvement restart rule, 50 iteration timeout
• CMA-MEGA: σg = 0.002, η = 0.002, Adam optimizer, no improvement restart rule
• CMA-MAEGA: σg = 0.002, η = 0.002, α = 0.1, minf = 55, Adam optimizer, no improvement restart rule, 50 iteration timeout
Latent Space Illumination. (StyleGAN 2)
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-MEGA: σg = 0.01, η = 0.05, Adam optimizer, basic restart rule
• CMA-MAEGA: σg = 0.01, η = 0.05, α = 0.02, minf = 0, Adam optimizer, basic restart rule
Adam Hyperparameters. We use the same hyperparameters as previous work Perez (2021); Fontaine & Nikolaidis (2021a).
• β1 = 0.9
• β2 = 0.999
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following Fontaine & Nikolaidis (2021a).
B DOMAIN DETAILS
To experimentally evaluate both CMA-MAE and CMA-MAEGA, we select domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). While many quality diversity optimization domains exist, we select these because gradients of f and m are easy to compute analytically and allow us to evaluate DQD algorithms in addition to derivative-free QD algorithms. To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function.
Linear Projection. The linear projection domain (Fontaine et al., 2020) was introduced to benchmark distortions caused by mapping a high-dimensional search space to a low-dimensional measure space.
The domain forms a 2D measure space by a linear projection that bounds the contribution of each component θi of the projection to the range [−5.12, 5.12]. QD algorithms must adapt the step size of each component θi to slowly approach the extremes of the measure space, with a harsh penalty for components outside [−5.12, 5.12]. As QD domains must provide an objective, the linear projection domain included two objectives from the black-box optimization benchmarks (Hansen et al., 2016; 2010): sphere and Rastrigin. Following Fontaine et al. (2020), we run all experiments for n = 100.
Formally, the measure functions are defined as a linear projection, a weighted sum of the components θi ∈ R of a solution θ ∈ Rn. The first measure function m1 is a weighted sum of the first half of the solution θ, and the second measure function m2 is a weighted sum of the second half of the solution θ (see Eq. 3). To ensure that all solutions mapped to measure space occupy a finite volume, the contribution in measure space of each component θi is bounded to the range [−5.12, 5.12] via a clip function (see Eq. 2) that applies a harsh penalty for solution components θi stepping outside the range [−5.12, 5.12].
clip(θi) = θi if −5.12 ≤ θi ≤ 5.12, and clip(θi) = 5.12/θi otherwise (2)

m(θ) = ( ∑_{i=1}^{⌊n/2⌋} clip(θi), ∑_{i=⌊n/2⌋+1}^{n} clip(θi) ) (3)

Fig. 6 visualizes why the linear projection domain is challenging. First, we note that the density of solutions in search space mapped to measure space mostly occupies the region close to 0. To see why, consider sampling uniformly in the hypercube [−5.12, 5.12]^n in search space. Each of these points maps to the linear region of the measure functions, and each measure becomes a sum of random variables. If we divide by n, normalizing by the dimension of the search space, the measure functions become an average of random variables. The average of n uniform random variables follows the Bates distribution (Johnson et al., 1995), whose variance narrows as n grows larger. Without the clip function, a QD algorithm could simply increase a single θi to reach any point in measure space. However, the clip function prevents this by bounding the contribution of each component of θ to the range [−5.12, 5.12]. To reach the extremes of measure space, all components θi must converge to the extremes ±5.12. The linear projection domain is challenging to explore due to both the clustering of solutions in a small region of measure space and the heavy measure space penalties applied by the clip function when a component θi leaves the region [−5.12, 5.12]. Next, we describe the linear projection domain’s objective functions, visualized in Fig. 7.
The objectives of the linear projection domain satisfy the requirements that a QD domain needs to have an objective and are of lesser importance than the measure function definitions, since the benchmark primarily evaluates exploration capabilities. Fontaine et al. (2020) selected two objectives from the black-box optimization benchmarks competition (Hansen et al., 2016; 2010): sphere and Rastrigin. The sphere function (Eq. 4) is a quadratic function2, while the Rastrigin function (Eq. 5) is a multi-modal function that when smoothed is quadratic. The domain shifts the global optimum to the position θi = 5.12 · 0.4 = 2.048.
fsphere(θ) = ∑_{i=1}^{n} θi² (4)

fRastrigin(θ) = 10n + ∑_{i=1}^{n} [θi² − 10 cos(2πθi)] (5)
We introduce an additional objective to evaluate the good exploration properties of CMA-MAE on flat objectives. Our “plateau” objective function (Eq. 7) is constant everywhere, but with a quadratic penalty for each component outside the range [−5.12, 5.12]. The penalty acts as a regularizer to encourage algorithms to search in the linear region of measure space.
fplateau(θi) = 0 if −5.12 ≤ θi ≤ 5.12, and fplateau(θi) = (|θi| − 5.12)² otherwise (6)

fplateau(θ) = (1/n) ∑_{i=1}^{n} fplateau(θi) (7)
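The sketch below implements the clip function, the two measures, and the three raw objectives (Eqs. 2-7) in NumPy, before the linear transformation of Eq. 8. The optimum shift of 2.048 described in the text is exposed as a parameter; how exactly the benchmark applies that shift is an assumption of this sketch.

```python
import numpy as np

def clip(theta):
    """Eq. 2: bound each component's contribution in measure space to [-5.12, 5.12]."""
    out = np.array(theta, dtype=float)
    mask = np.abs(out) > 5.12
    out[mask] = 5.12 / out[mask]
    return out

def measures(theta):
    """Eq. 3: sum of clipped components over each half of the solution."""
    c, half = clip(theta), len(theta) // 2
    return np.array([c[:half].sum(), c[half:].sum()])

def sphere(theta, shift=2.048):        # Eq. 4, optimum shifted as described in the text
    return np.sum((np.asarray(theta) - shift) ** 2)

def rastrigin(theta, shift=2.048):     # Eq. 5, optimum shifted as described in the text
    z = np.asarray(theta) - shift
    return 10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))

def plateau(theta):                    # Eqs. 6-7: flat inside the box, quadratic penalty outside
    theta = np.asarray(theta, dtype=float)
    penalty = np.where(np.abs(theta) <= 5.12, 0.0, (np.abs(theta) - 5.12) ** 2)
    return penalty.mean()

theta = np.zeros(100)
print(sphere(theta), plateau(theta), measures(theta))
```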
Arm Repertoire. The arm repertoire domain (Cully & Demiris, 2017; Vassiliades & Mouret, 2018) tasks QD algorithms with finding a diverse collection of arm positions for an n-dimensional planar robotic arm with revolute joints. The measures in this domain are the 2D coordinates of the robot’s end-effector, and the objective is to minimize the variance of the joint angles.

In Fig. 8, we visualize example arms for n = 5 (5-DOF). The optimal solutions in this domain have zero variance across all joint angles. The measure functions are bounded to the range [−n, n], as each arm segment has unit length. The reachable cells form a circle of radius n. Therefore, the optimal archive coverage is approximately πn²/(4n²) = π/4 ≈ 78.5%. An archive can achieve an upper bound of this ratio that becomes tighter at higher resolutions. We select n = 100 (100-DOF) arms for the experiments.
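A minimal sketch of the raw arm repertoire objective and measures for an n-DOF planar arm with unit-length links follows; treating each joint angle as a relative offset accumulated along the chain is a common planar forward-kinematics convention and an assumption of this sketch.

```python
import numpy as np

def arm_domain(joint_angles):
    """Raw objective (variance of joint angles, to be minimized) and (x, y) measures."""
    angles = np.asarray(joint_angles, dtype=float)
    link_orientations = np.cumsum(angles)                    # orientation of link i
    measures = np.array([np.cos(link_orientations).sum(),    # end-effector x
                         np.sin(link_orientations).sum()])   # end-effector y
    return np.var(angles), measures

obj, m = arm_domain(np.full(100, 0.1))   # equal joint angles -> zero variance (optimal quality)
print(obj, m)
```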
Latent Space Illumination. Prior work introduced the latent space illumination problem (Fontaine et al., 2021b), the problem of searching the latent space of a generative model with a quality diversity algorithm. We evaluate on the StyleGAN+CLIP version of this problem (Fontaine & Nikolaidis, 2021a), by searching the latent space of StyleGAN (Karras et al., 2019) with a QD algorithm. We form the differentiable objective and measures in this domain by specifying text prompts to the CLIP model (Radford et al., 2021) that can determine the similarity of an image and text. We specify an objective prompt of “A photo of Beyonce”. For measures, we would like to have CLIP quantify abstract concepts like the hair length or age of the person in the photo. However, CLIP can only determine similarity of an image and a text prompt. As surrogates for age and hair length, we specify the measure prompts of “A small child” and “A woman with long blonde hair”. The objective and measure functions guide the QD algorithms towards discovering a collection of photos of Beyoncé with varying age and hair length.

2In derivative-free optimization many of the benchmark functions are named after the shape of the contour lines. In the case of quadratic functions with an identity Hessian matrix, the contour lines form hyperspheres.
For our additional LSI experiment on StyleGAN2 with setup improvements, see Appendix I.
Transformations of the Objective Function. We highlight two issues that must be addressed by transforming the objective in each domain. First, we note that the problem definition in each of our domains contains an objective f that must be minimized. In contrast, the QD problem definition specifies an objective f that must be maximized. Second, the QD-score metric, which measures the performance of QD algorithms, requires a non-negative objective function. Following prior work (Fontaine et al., 2020; Fontaine & Nikolaidis, 2021a), we transform the objective f via a linear transformation: f ′ = af + b. The linear transformation maps function outputs to the range [0, 100].
In the linear projection domain, we estimate the largest objective value for the sphere and Rastrigin function within the region [−5.12, 5.12] for each solution component θi. We compute f(−5.12,−5.12, ...,−5.12) for each objective as the maximum. The minimum of each function is 0. We calculate the linear transformation as:
f'(\theta) = 100 \cdot \frac{f(\theta) - f_{max}}{f_{min} - f_{max}} \quad (8)
For our new plateau objective, all solution points within the region [−5.12, 5.12]n have an objective value of 0. For this objective, we set fmin = 0 and fmax = 100 and apply the transformation in Eq. 8.
For the arm domain we select fmin = 0 and fmax = 1, and in the LSI domain we select fmin = 0 and fmax = 10. We select these values to match Fontaine & Nikolaidis (2021a).
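The transformation in Eq. 8 amounts to a one-line helper; the per-domain (fmin, fmax) values follow the choices stated above.

```python
def normalize_objective(f: float, f_min: float, f_max: float) -> float:
    """Map a minimization objective to a maximization objective in [0, 100] (Eq. 8)."""
    return 100.0 * (f - f_max) / (f_min - f_max)

# Example with the arm repertoire values (f_min = 0, f_max = 1): a solution with
# joint-angle variance 0 maps to a score of 100, and variance 1 maps to 0.
assert normalize_objective(0.0, f_min=0.0, f_max=1.0) == 100.0
assert normalize_objective(1.0, f_min=0.0, f_max=1.0) == 0.0
```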
C IMPLEMENTATION
We replicate the implementation details of prior work (Fontaine & Nikolaidis, 2021a).
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following previous work (Fontaine & Nikolaidis, 2021a).
Metrics. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score by the total number of cells, both occupied and unoccupied, to make QD-score invariant to the resolution of the archive. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
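A minimal sketch of the two metrics, assuming the archive is represented as a dictionary from occupied cell index to (transformed) objective value; this representation is our own and not the Pyribs data structure.

```python
def qd_score_and_coverage(archive: dict, num_cells: int):
    """archive maps occupied cell indices to objective values already mapped to [0, 100]."""
    qd_score = sum(archive.values()) / num_cells   # normalized by all cells, occupied or not
    coverage = len(archive) / num_cells            # fraction of occupied cells
    return qd_score, coverage
```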
Computational Resources. We ran all trials of the linear projection and arm repertoire domains on an AMD Ryzen Threadripper 32-core (64 threads) processor. A run of 20 trials in parallel takes about 20 minutes for the linear projection domain and 25 minutes for the arm repertoire domain. For the latent space illumination domain, we accelerate the StyleGAN+CLIP pipeline on a GeForce RTX 3090 Nvidia GPU. One trial for latent space illumination takes approximately 2 hours and 30 minutes for StyleGAN and approximately 3 hours and 30 minutes for StyleGAN2. In all domains, runtime increases when an algorithm obtains better coverage, because we iterate over the archive when QD statistics are calculated.
Software Implementation. We use the open source Pyribs (Tjanaka et al., 2021) library for all algorithms. We implemented the CMA-MAE and CMA-MAEGA algorithms using the same library.
D COVARIANCE MATRIX ADAPTATION MAP-ANNEALING VIA A GRADIENT ARBORESCENCE (CMA-MAEGA)
In this section, we provide information of the CMA-MEGA differentiable quality diversity (DQD) algorithm, and we derive CMA-MAE’s DQD counterpart: CMA-MAEGA.
CMA-MEGA. Covariance Matrix Adaptation MAP-Elites via Gradient Arborescence (CMA-MEGA) solves the DQD problem, where the objective f and measures m are first-order differentiable. Like CMA-ME, the algorithm maintains a solution point θ ∈ Rn and a MAP-Elites archive. CMA-MEGA samples new solutions by perturbing the search point θ via the objective and measure gradients. However, the contribution of each gradient is balanced by gradient coefficients c: θ_i = θ + c_0∇f(θ) + \sum_{j=1}^{k} c_j∇m_j(θ). These coefficients are sampled from a multivariate Gaussian distribution N(µ,Σ) maintained by the algorithm. After sampling new candidate solutions θi, the solutions are ranked via the improvement ranking from CMA-ME. CMA-MEGA updates N(µ,Σ) via the CMA-ES update rules, and the algorithm also steps θ in the direction of largest archive improvement. The authors showed that CMA-MEGA approximates a natural gradient step of the QD objective (Eq. 1), but with respect to the gradient coefficients.
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affect how we replace solutions in the archive and how we calculate ∆i. Both CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA. Algorithm 2 shows the pseudo-code for CMA-MAEGA with the differences from CMA-MEGA highlighted in yellow.
Algorithm 2 Covariance Matrix Adaptation MAP-Annealing via a Gradient Arborescence (CMA-MAEGA)
CMA-MAEGA(evaluate, θ0, N, λ, η, σg, minf, α)
input : An evaluation function evaluate that computes the objective, the measures, and the gradients of the objective and measures, an initial solution θ0, a desired number of iterations N , a branching population size λ, a learning rate η, an initial step size for CMA-ES σg , a minimal acceptable solution quality minf , and an archive learning rate α.
result: Generate N(λ + 1) solutions, storing elites in an archive A.
1  Initialize solution parameters θ to θ0 and CMA-ES parameters µ = 0, Σ = σgI, and p, where we let p be the CMA-ES internal parameters.
2  Initialize the archive A and the acceptance threshold te with minf for each cell e.
3  for iter ← 1 to N do
4      f, ∇f, m, ∇m ← evaluate(θ)
5      ∇f ← normalize(∇f), ∇m ← normalize(∇m)
6      if f > te then   (for the cell e corresponding to m)
7          Replace the current elite in cell e of the archive A with θ
8          te ← (1 − α)te + αf
9      end
10     for i ← 1 to λ do
11         c ∼ N(µ, Σ)
12         ∇i ← c0∇f + Σj=1..k cj∇mj
13         θ′i ← θ + ∇i
14         f′, ∗, m′, ∗ ← evaluate(θ′i)
15         ∆i ← f′ − te   (for the cell e corresponding to m′)
16         if f′ > te then
17             Replace the current occupant in cell e of the archive A with θ′i
18             te ← (1 − α)te + αf′
19         end
20     end
21     rank ∇i by ∆i
22     ∇step ← Σi=1..λ wi∇rank[i]
23     θ ← θ + η∇step
24     Adapt CMA-ES parameters µ, Σ, p based on the improvement ranking ∆i
25     if there is no change in the archive then
26         Restart CMA-ES with µ = 0, Σ = σgI.
27         Set θ to a randomly selected existing elite θi from the archive
28     end
29 end
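A minimal NumPy sketch of the gradient-branching portion of Algorithm 2 (roughly lines 10–23) is given below; the archive interface (cell_of, thresholds), the uniform recombination weights, and the omission of elite replacement and CMA-ES parameter adaptation are simplifications of our own.

```python
import numpy as np

def cma_maega_branch_step(theta, grad_f, grad_m, mu, cov, evaluate, archive,
                          lam=36, alpha=0.01, eta=1.0):
    """One gradient-arborescence step. grad_f has shape (n,), grad_m has shape (k, n)."""
    grads, deltas = [], []
    for _ in range(lam):
        c = np.random.multivariate_normal(mu, cov)      # gradient coefficients (line 11)
        step = c[0] * grad_f + c[1:] @ grad_m           # blended gradient direction (line 12)
        f_prime, m_prime = evaluate(theta + step)       # candidate solution (lines 13-14)
        e = archive.cell_of(m_prime)                    # placeholder tessellation lookup
        deltas.append(f_prime - archive.thresholds[e])  # improvement over threshold (line 15)
        if f_prime > archive.thresholds[e]:             # annealed acceptance (lines 16-18)
            archive.thresholds[e] = (1 - alpha) * archive.thresholds[e] + alpha * f_prime
        grads.append(step)
    order = np.argsort(deltas)[::-1]                    # rank by improvement (line 21)
    weights = np.full(lam, 1.0 / lam)                   # placeholder recombination weights
    step_dir = sum(w * grads[i] for w, i in zip(weights, order))
    return theta + eta * step_dir                       # ascend the arborescence (lines 22-23)
```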
Experiments. We compare CMA-MEGA and CMA-MAEGA in the six benchmark domains. Table 3 and Table 4 show the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domains. We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, LSI (StyleGAN), and LSI (StyleGAN2)) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (5, 168) = 165.7, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAEGA outperformed CMA-MEGA in the LP (sphere), arm repertoire, and LSI (StyleGAN2) domains. There was no statistically significant difference between the two algorithms in the LP (Rastrigin), LP (plateau), and LSI (StyleGAN) domains.
We attribute the absence of a statistical difference in the QD-score between the two algorithms on the LP (Rastrigin) and LP (plateau) domains to the perfect coverage obtained by both algorithms. Thus, any differences in QD-score are based on the objective values of the solutions returned by each algorithm. In LP (plateau), the optimal objective for each cell is easily obtainable by both methods. The LP (Rastrigin) domain contains many local optima, because of the form of the objective function (Eq. 5). CMA-MEGA will converge to these optima before restarting, behaving as a single-objective optimizer within each local optimum. Because of the large number of local optima in the domain, this behavior still results in a high QD-score.
In the LSI (StyleGAN) domain, we attribute the similar performance between CMA-MEGA and CMA-MAEGA to the restart rules used to keep each search within the training distribution of StyleGAN. On the other hand, in the LSI (StyleGAN2) domain, we regularize the search space by an L2 penalty in latent space, allowing for a larger learning rate and a basic restart rule for both algorithms, while still preventing drift out of the training distribution of StyleGAN2. Because of the fewer restarts, CMA-MAEGA can take advantage of the density descent property, which was shown to improve exploration in CMA-MAE, and outperform CMA-MEGA. We note that because StyleGAN2 has a better-conditioned latent space (Karras et al., 2020b), it is better suited for gradient-based optimizers, which helps better distinguish between the two algorithms.
E THEORETICAL PROPERTIES OF CMA-MAE
Theorem E.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
Proof. CMA-ES and CMA-MAE differ only in how they rank solutions. CMA-ES ranks solutions purely based on the objective f, while CMA-MAE ranks solutions by f − te, where te is the acceptance threshold initialized to minf. Thus, to show that CMA-ES is equivalent to CMA-MAE for α = 0, we only need to show that they result in identical rankings.
In CMA-MAE, te is updated as follows: te ← (1 − α)te + αf. For α = 0, te = minf is invariant for the whole algorithm: te ← 1 · te + 0 · f = te. Therefore, CMA-MAE ranks solutions based on f − minf. However, comparison-based sorting is invariant to order-preserving transformations of the values being sorted (Hansen, 2016). Thus, CMA-ES and CMA-MAE rank solutions identically.
Next, we prove that CMA-ME is equivalent to CMA-MAE with the following caveats. First, we assume that CMA-ME restarts only with the CMA-ES restart rules, rather than the additional “no improvement” restart condition from Fontaine et al. (2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection rather than filtering selection.
Lemma E.2. During execution of the CMA-MAE algorithm with α = 1, the threshold te is equal to f(θe) for cells that are occupied by a solution θe and to minf for all empty cells.
Proof. We will prove the lemma by induction. All empty cells are initialized with te = minf , satisfying the basis step. Then, we will show that if the statement holds after k archive updates, it will hold after a subsequent update k + 1.
Assume that at step k we generate a new solution θi mapped to a cell e. We consider two cases:
Case 1: The archive cell e is empty. Then, f(θi) > minf and both CMA-ME and CMA-MAE will place θi in the archive as the new cell occupant θe. The threshold te is updated as te = (1 − α)te + αf(θe) = 0 · minf + 1 · f(θe) = f(θe).

Case 2: The archive cell e contains an incumbent solution θe. Then, either f(θi) ≤ f(θe) or f(θi) > f(θe). If f(θi) ≤ f(θe), then the archive does not change and the inductive step holds via the inductive hypothesis. If f(θi) > f(θe), then θi becomes the new cell occupant θe and te is updated as te = (1 − α)te + αf(θe) = 0 · te + 1 · f(θe) = f(θe).
Theorem E.3. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
Proof. Both CMA-ME and CMA-MAE rank candidate solutions θi based on improvement values ∆i. While CMA-ME and CMA-MAE compute ∆i differently, we will show that for α = 1, the rankings are identical for the two algorithms.
We assume a new candidate solution mapped to a cell e. We describe first the computation of ∆i for CMA-ME. CMA-ME ranks solutions that discover an empty cell based on their objective value. Thus,
if θi discovers an empty cell, ∆i = f(θi). On the other hand, if θi is mapped to a cell occupied by another solution θe, it will rank θi based on the improvement ∆i = f(θi)− f(θe). CMA-ME performs a two-stage ranking, where it ranks all solutions that discover empty cells before solutions that improve occupied cells.
We now show the computation of ∆i for CMA-MAE with α = 1. If θi discovers an empty cell ∆i = f(θi) − te and by Lemma E.2 ∆i = f(θi) −minf . If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi)− te and by Lemma E.2 ∆i = f(θi)− f(θe). Comparing the values ∆i between the two algorithms we observe the following: (1) If θi discovers an empty cell, ∆i = f(θi)−minf for CMA-MAE. However, minf is a constant and comparisonbased sorting is invariant to order preserving transformations (Hansen, 2016), thus ranking by ∆i = f(θi) − minf is identical to ranking by ∆i = f(θi) performed by CMA-ME. (2) If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi) − f(θe) for both algorithms. (3) Because minf is an arbitrarily large negative number f(θi) −minf > f(θi) − f(θe). Thus, CMA-MAE will always rank solutions that discover empty cells before solutions that are mapped to occupied cells, identically to CMA-ME.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to CMA-ME’s discount function as α increases from 0 to 1. We show this for the special case of a fixed sequence of candidate solutions.

Theorem E.4. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We prove the theorem via induction over the sequence of solution additions. fA is the histogram formed by the thresholds te over all archive cells e in the archive. Thus, we prove fAi ≤ fAj by showing that te(Ai) ≤ te(Aj) for all archive cells e after m archive additions. As a basis step, we note that Ai equals Aj as both archives are initialized with minf .
Our inductive hypothesis states that after k archive additions we have te(Ai) ≤ te(Aj), and we need to show that te(Ai) ≤ te(Aj) after solution θk+1 is added to each archive. Our solution θk+1 has three cases with respect to the acceptance thresholds:
Case 1: f(θk+1) ≤ te(Ai) ≤ te(Aj). The solution is not added to either archive and our property holds from the inductive hypothesis.
Case 2: te(Ai) ≤ f(θk+1) ≤ te(Aj). The solution is added to Ai, but not Aj , thus t′e(Aj) = te(Aj). We follow the threshold update: t′e(Ai) = (1− αi)te(Ai) + αif(θk+1). Next, we need to show that t′e(Ai) ≤ t′e(Aj) to complete the inductive step:
(1 − αi)te(Ai) + αif(θk+1) ≤ f(θk+1) ⇐⇒ (1 − αi)te(Ai) ≤ (1 − αi)f(θk+1) ⇐⇒ te(Ai) ≤ f(θk+1), since 1 − αi > 0 (as αi < αj ≤ 1 implies αi < 1).
The last inequality holds true per our initial assumption for Case 2. Also from the Case 2 assumption, f(θk+1) ≤ te(Aj) = t′e(Aj), which completes the inductive step for this case.

Case 3: te(Ai) ≤ te(Aj) ≤ f(θk+1). The solution θk+1 is added to both archives. We need to show that t′e(Ai) ≤ t′e(Aj):
t′e(Ai) ≤ t′e(Aj) ⇐⇒ (1− αi)te(Ai) + αif(θk+1) ≤ (1− αj)te(Aj) + αjf(θk+1) (9)
We can rewrite Eq. 9 as:
(1− αj)te(Aj)− (1− αi)te(Ai) + αjf(θk+1)− αif(θk+1) ≥ 0 (10)
First, note that:
(1− αj)te(Aj)− (1− αi)te(Ai) ≥ (1− αj)te(Ai)− (1− αi)te(Ai) = (1− αj − 1 + αi)te(Ai) = (αi − αj)te(Ai).
Thus: (1− αj)te(Aj)− (1− αi)te(Ai) ≥ (αi − αj)te(Ai) (11)
From Eq. 10 and 11 we have:
(1− αj)te(Aj) + αjf(θk+1)− (1− αi)te(Ai)− αif(θk+1) ≥ (αi − αj)te(Ai) + (αj − αi)f(θk+1) = (αj − αi)(f(θk+1)− te(Ai))
As αj > αi and f(θk+1) ≥ te(Ai), we have (αj − αi)(f(θk+1) − te(Ai)) ≥ 0. This completes the proof that Eq. 10 holds.
As all cases in our inductive step hold, our proof by induction is complete.
Next, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, the algorithm maintains a histogram of occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher.
Lemma E.5. The threshold te after k additions to cell e forms a strictly increasing sequence for a constant objective function f(θ) = C for all θ ∈ Rn, when 0 < α < 1 and minf < C.
Proof. To show that te after k additions to cell e forms a strictly increasing sequence, we write a recurrence relation for te after k solutions have been added to cell e. Let te(k) = (1 − α)te(k − 1) + αC (using f(θi) = C) and te(0) = minf be that recurrence relation. To show the recurrence is an increasing function, we need to show that te(k) > te(k − 1) for all k ≥ 1. We prove the inequality via induction over cell additions k. As a basis step, we show te(1) > te(0): (1 − α)minf + αC > minf ⇐⇒ αC − α · minf > 0 ⇐⇒ αC > α · minf. As C > minf and α > 0, the basis step holds.
For the inductive step, we assume that te(k) > te(k − 1) and need to show that te(k + 1) > te(k): te(k + 1) > te(k) ⇐⇒ (1 − α)te(k) + αC > (1 − α)te(k − 1) + αC ⇐⇒ (1 − α)te(k) > (1− α)te(k − 1) ⇐⇒ te(k) > te(k − 1).
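As additional intuition, unrolling the recurrence yields the closed form te(k) = C + (minf − C)(1 − α)^k, which increases strictly monotonically from minf toward C whenever 0 < α < 1 and minf < C, consistent with the lemma.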
Theorem E.6. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
Proof. We will prove that for an arbitrary archive A with both the occupancy count for each cell oe and the threshold value te computed with arbitrary learning rate 0 < α < 1, CMA-MAE results in the same ranking for an arbitrary batch of solutions {θi} as the approximate density descent algorithm. We let θi and θj be two arbitrary solutions in the batch mapped to cells ei and ej . Without loss of generality, we let oei ≤ oej . The approximate density descent algorithm will thus rank θi before θj . We will show that CMA-MAE results in the same ranking.
Because f is constant, the threshold in every cell follows the same recurrence and depends only on the number of additions to that cell, so tei(k) = tej(k) for any k. Since te is a strictly increasing function of the addition count by Lemma E.5, oei ≤ oej implies tei(oei) ≤ tej(oej). We have tei(oei) ≤ tej(oej) ⇐⇒ C − tei(oei) ≥ C − tej(oej). Thus, the archive improvement from adding θi to the archive is at least as large as the improvement from adding θj, and CMA-MAE will rank θi before θj, identically to density descent.
While Theorem E.6 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
Conjecture E.7. The CMA-MAE algorithm becomes equivalent to the density descent algorithm for a subset of archive cells for an arbitrary convex objective f , where the cardinality of the subset of cells increases as the number of iterations increases.
We provide intuition for our conjecture through the lens of the elite hypervolume hypothesis (Vassiliades & Mouret, 2018). The elite hypervolume hypothesis states that optimal solutions for the MAP-Elites archive form a connected region in search space. Later work (Rakicevic et al., 2021) connected the elite hypervolume hypothesis to the manifold hypothesis (Fefferman et al., 2016) in machine learning, stating that the elite hypervolume can be represented by a low-dimensional manifold in search space.
For our conjecture, we assume that the elite hypervolume hypothesis holds and there exists a smooth manifold that represents the hypervolume. Next, we assume in the conjecture that f is an arbitrary convex function. As f is convex, early in the CMA-MAE search the discount function fA will be flat and the search point θ will approach the global optimum following CMA-ES’s convergence properties (Hansen & Ostermeier, 1997; Hansen et al., 2003), where the precision of convergence is controlled by archive learning rate α. By definition, the global optimum θ∗ is within the elite hypervolume as no other solution of higher quality exists within its archive cell. Assuming the elite hypervolume hypothesis holds, a subset of adjacent solutions in search space will also be in the hypervolume due to the connectedness of the hypervolume. As fA increases around the global optimum, we conjecture that the function f(θ∗)− fA(θ∗) will form a plateau around the optimum, since it will approach the value f(θi)− fA(θi) of adjacent solutions θi. By Theorem E.6 we have a density descent algorithm within the plateau, pushing CMA-MAE to discover solutions on the frontier of the known hypervolume.
Finally, we remark that our conjecture implies that f − fA tends towards a constant function in the limit, resulting in a density descent algorithm across the elite hypervolume manifold as the number of generated solutions approaches infinity. We leave a formal proof of this conjecture for future work.
F THEORETICAL PROPERTIES OF CMA-MAEGA
In this section, we investigate how the theoretical properties of CMA-MAE apply to CMA-MAEGA. Many of the properties carry over nearly directly; however, while CMA-MAE is equivalent to the single-objective optimization algorithm CMA-ES for α = 0, there is no existing single-objective counterpart to CMA-MAEGA. To make the mapping direct, we introduce such a counterpart: the gradient arborescence ascent algorithm.
The gradient arborescence ascent algorithm is similar to CMA-MEGA, but without an archive. Like CMA-MEGA, the algorithm assumes a differentiable objective f and differentiable measures m. However, the algorithm leverages the objective and measure function gradients only to improve the optimization of the objective f , rather than to find solutions that are diverse with respect to measures m. As with CMA-MEGA, the gradient arborescence algorithm branches in objective-measure space. However, the algorithm ranks solutions purely by the objective function f and adapts the coefficient distribution N(µ,Σ) towards the natural gradient of the objective f .
Next, we prove properties of CMA-MAEGA that directly follow from the properties of CMA-MAE.
Theorem F.1. The gradient arborescence ascent algorithm is equivalent to CMA-MAEGA when α = 0, if gradient arborescence ascent restarts from an archive elite.
Proof. We note that CMA-MAEGA and the gradient arborescence ascent algorithm differ only in how they rank solutions, and we note that the differences between CMA-MAE and CMA-ES mirror the differences between CMA-MAEGA and gradient arborescence ascent algorithm. So by directly adapting the proof of Theorem E.1, we complete our proof.
Theorem F.2. The CMA-MEGA algorithm is equivalent to CMA-MAEGA when α = 1 and minf is an arbitrarily large negative number.
Proof. We note that CMA-MAEGA and the CMA-MEGA algorithm differ only in how they rank solutions and how they update the archive A, and we note that the differences between CMA-MAE and CMA-ME mirror the differences between CMA-MAEGA and CMA-MEGA. So by directly adapting the proof of Theorem E.3, we complete our proof.
Theorem F.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAEGA that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We note that CMA-MAE and CMA-MAEGA update the archive A in exactly the same way. Therefore, the proof follows directly by adapting the proof of Theorem E.4 to CMA-MAEGA.
Next, we wish to show that CMA-MAEGA results in density descent in measure space. However, we need a counterpart to the approximate density descent algorithm we defined in Theorem E.6.
Consider an approximate density descending arborescence algorithm that is identical to CMA-MEGA, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains an occupancy count oe for each cell e, which is the number of times a solution was generated in that cell. The density descent algorithm ranks solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher. The algorithm takes steps in search space Rn that minimize the approximate density function defined by the archive and adapts the coefficient distribution N(µ,Σ) towards coefficients that minimize the density function.

Theorem F.4. The CMA-MAEGA algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descending arborescence algorithm, when 0 < α < 1 and minf < C.
Proof. The proof of Theorem E.6 relies only on how CMA-MAE updates the archive A and acceptance threshold te. The proof of this theorem follows directly by adapting the proof of Theorem E.6 to CMA-MAEGA.
G DERIVATION OF THE CONVERSION FORMULA FOR THE ARCHIVE LEARNING RATE
In this section, we derive the archive learning rate conversion formula α2 = 1− (1−α1)r mentioned in Section 7 of the main paper, where r is the ratio between archive cell counts, and α1 and α2 are archive learning rates for two archives A1 and A2.
Given an archive learning rate α1 for A1, we want to derive an equivalent archive learning rate α2 for A2 that results in robust performance when CMA-MAE is run with either A1 or A2. A principled way to derive a conversion formula for α2 is to look for an invariance property that affects the performance of CMA-MAE and that holds when CMA-MAE generates solutions in archives A1 and A2.
Since CMA-MAE ranks solutions by f − fA, we wish for fA to increase at the same rate in the two archives. Since fA(θ) = te, where te is the cell that a solution θ maps to, we select the average value of the acceptance thresholds te over all cells in each archive as our invariant property.
We assume an arbitrary sequence of N solution additions θ1,θ2, ...,θN , evenly dispersed across the archive cells. We then specify te as a function that maps k cell additions to a value te in archive cell e.3 Equation 12 then defines the average value of te across the archive after N additions to an archive A with M cells.
\frac{1}{M} \sum_{i=1}^{M} t_e\left(\frac{N}{M}\right) \quad (12)
Then, equation 13 defines the invariance we want to guarantee between archives A1 and A2.
3Here we abuse notation and view te as a function instead of a threshold, for simplicity and to highlight the connection to the threshold value te.
1 M1 M1∑ i=1 te ( | 1. What is the focus and contribution of the paper regarding CMA-ME?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity?
3. What are the weaknesses of the paper, especially regarding the choice of test problems and the significance of the improvement?
4. Do you have any concerns about the relevance and applicability of the proposed method in the QD community?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper aims at improving CMA-ME, a quality-diversity (QD) algorithm. The authors focus on the way that CMA-ME ranks candidate solutions. The authors hypothesize that the rapid change of the fitness threshold value, denoted by t_e, is the cause of three limitations highlighted in the QD community: prematurely abandoning the objective, struggling to explore flat objectives, and having poor performance for low-resolution archives. The proposed approach changes only the way the threshold is updated, introducing a damping factor.
The proposed approach, CMA-MAE, is compared with CMA-ME, MAP-Elites (line) and MAP-Elites on 5 test problems. Higher QD-score and coverage are observed for all test problems.
Strengths And Weaknesses
Strength
A minimal change to the existing approach, CMA-ME, has been proposed, but a statistically significant improvement has been observed in the benchmark problems.
Weaknesses
Technical contribution: A simple approach is nice. However, at the same time, the proposal is incremental and its difference is minimal.
Test problem choice: It has not been discussed why these 5 test problems are selected. Because the selection of test problems is a very important part of the experimental design, this must be carefully explained. In this work, why are these 5 test problems sufficient to reveal the goodness and limitations of the proposed approaches?
Discussion on the significance: I understand that the results are statistically significant. However, because I am not very familiar with the QD community, it is hard to see whether this improvement is meaningful. A discussion on how meaningful these improvements are would improve the readers' understanding, especially for those who are not familiar with QD, which I guess most of the audience at this conference is.
Significance of this topic: Because the test problems are selected from the existing work where differentiability is assumed, a possible application of the proposed approach is not clear.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow. Because the proposal is incremental and the change is minimal, its novelty is limited. The experimental details are provided.
ICLR | Title
Covariance Matrix Adaptation MAP-Annealing
Abstract
Single-objective optimization algorithms search for the single highest-quality solution with respect to an objective. Quality diversity (QD) algorithms, such as Covariance Matrix Adaptation MAP-Elites (CMA-ME), search for a collection of solutions that are both high-quality with respect to an objective and diverse with respect to specified measure functions. However, CMA-ME suffers from three major limitations highlighted by the QD community: prematurely abandoning the objective in favor of exploration, struggling to explore flat objectives, and having poor performance for low-resolution archives. We propose a new quality diversity algorithm, Covariance Matrix Adaptation MAP-Annealing (CMA-MAE), that addresses all three limitations. We provide theoretical justifications for the new algorithm with respect to each limitation. Our theory informs our experiments, which support the theory and show that CMA-MAE achieves state-of-the-art performance.
1 INTRODUCTION
Consider an example problem of searching for celebrity faces in the latent space of a generative model. As a single-objective optimization problem, we specify an objective f that targets a celebrity such as Tom Cruise. A single-objective optimizer, such as CMA-ES (Hansen, 2016), will converge to a single solution of high objective value, an image that looks like Tom Cruise as much as possible.
However, this objective has ambiguity. How old was Tom Cruise in the photo? Did we want the person in the image to have short or long hair? By instead framing the problem as a quality diversity optimization problem, we additionally specify a measure function m1 that quantifies age and a measure function m2 that quantifies hair length. A quality diversity algorithm (Pugh et al., 2015; Chatzilygeroudis et al., 2021), such as CMA-ME (Fontaine et al., 2020), can then optimize for a collection of images that are diverse with respect to age and hair length, but all look like Tom Cruise.
While previous work (Fontaine et al., 2020; 2021a;b; Earle et al., 2021) has shown that CMA-ME solves such QD problems efficiently, three important limitations of the algorithm have been discovered. First, on difficult to optimize objectives, variants of CMA-ME will abandon the objective too soon (Tjanaka et al., 2022), and instead favor exploring the measure space, the vector space defined by the measure function outputs. Second, the CMA-ME algorithm struggles to explore flat objective functions (Paolo et al., 2021). Third, CMA-ME works well on high-resolution archives, but struggles to explore low-resolution archives (Cully, 2021; Fontaine & Nikolaidis, 2021a). We note that the chosen archive resolution affects the performance of all current QD algorithms.
We propose a new algorithm, CMA-MAE, that addresses these three limitations.
To address the first limitation, we derive an algorithm that smoothly blends between CMA-ES and CMA-ME. First, consider how CMA-ES and CMA-ME differ. At each step CMA-ES’s objective ranking maximizes the objective function f by approximating the natural gradient of f at the current solution point (Akimoto et al., 2010). In contrast, CMA-ME’s improvement ranking moves in the direction of the natural gradient of f − fA at the current solution point, where fA is a discount function equal to the objective of the best solution so far that has the same measure values as the current solution point. The function f − fA quantifies the gap between a candidate solution and the best solution so far at the candidate solution’s position in measure space.
Our key insight is to anneal the function fA by a learning rate α. We observe that when α = 0, then our discount function fA never increases and our algorithm behaves like CMA-ES. However, when
α = 1, then our discount function always maintains the best solution for each region in measure space and our algorithm behaves like CMA-ME. For 0 < α < 1, CMA-MAE smoothly blends between the two algorithms’ behavior, allowing for an algorithm that spends more time on the optimization of f before transitioning to exploration. Figure 1 is an illustrative example of varying the learning rate α.
Our proposed annealing method naturally addresses the flat objective limitation. Observe that both CMA-ES and CMA-ME struggle on flat objectives f as the natural gradient becomes 0 in this case and each algorithm will restart. However, we show that, when CMA-MAE optimizes f − fA for 0 < α < 1, the algorithm becomes a descent method on the density histogram defined by the archive.
Finally, CMA-ME’s poor performance on low resolution archives is likely caused by the nonstationary objective f − fA changing too quickly for the adaptation mechanism to keep up. Our archive learning rate α controls how quickly f − fA changes. We derive a conversion formula for α that allows us to derive equivalent α for different archive resolutions. Our conversion formula guarantees that CMA-MAE is the first QD algorithm invariant to archive resolution.
Overall, our work shows how a simple algorithmic change to CMA-ME addresses all three major limitations affecting CMA-ME’s performance and robustness. Our theoretical findings justify the aforementioned properties and inform our experiments, which show that CMA-MAE outperforms state-of-the-art QD algorithms and maintains robust performance across different archive resolutions.
2 PROBLEM DEFINITION
Quality Diversity. We adopt the quality diversity (QD) problem definition from Fontaine & Nikolaidis (2021a). A QD problem consists of an objective f : Rn → R that maps n-dimensional solution parameters to a scalar value denoting the quality of the solution and k measures mi : Rn → R or, as a vector function, m : Rn → Rk that quantify behavior or attributes of each solution1. The range of m forms a measure space S = m(Rn). The QD objective is to find a set of solutions θ ∈ Rn, such that m(θ) = s for each s in S and f(θ) is maximized.
The measure space S is continuous, but solving algorithms need to produce a finite collection of solutions. Therefore, QD algorithms in the MAP-Elites (Mouret & Clune, 2015; Cully et al., 2015) family relax the QD objective by discretizing the space S. Given T as the tessellation of S into M cells, the QD objective becomes to find a solution θi for each of the i ∈ {1, . . . ,M} cells, such that each θi maps to the cell corresponding to m(θi) in the tesselation T . The QD objective then becomes maximizing the objective value f(θi) of all cells:
\max \sum_{i=1}^{M} f(\theta_i) \quad (1)
The differentiable quality diversity (DQD) problem (Fontaine & Nikolaidis, 2021a) is a special case of the QD problem where both the objective f and measures mi are first-order differentiable.
1In agent-based settings, such as reinforcement learning, the measure functions are sometimes called behavior functions and the outputs of each measure function are called behavioral characteristics or behavior descriptors.
[Figure 1 panel labels: CMA-ES, CMA-ME, CMA-MAE]
3 PRELIMINARIES
We present several QD algorithms that solve derivative-free QD problems to provide context for our proposed CMA-MAE algorithm. Appendix D contains information about the DQD algorithm CMA-MEGA, which solves problems where exact gradient information is available.
MAP-Elites and MAP-Elites (line). The MAP-Elites QD algorithm produces an archive of solutions, where each cell in the archive corresponds to the provided tesselation T in the QD problem definition. The algorithm initializes the archive by sampling solutions from the solution space Rn from a fixed distribution. After initialization, MAP-Elites produces new solutions by selecting occupied cells uniformly at random and perturbing them with isotropic Gaussian noise: θ′ = θi + σN (0, I). For each new candidate solution θ′, the algorithm computes an objective f(θ′) and measures m(θ′). MAP-Elites places θ′ into the archive if the cell corresponding to m(θ′) is empty or θ′ obtains a better objective value f(θ′) than the current occupant. The MAP-Elites algorithm results in an archive of solutions that are diverse with respect to the measure function m, but also high quality with respect to the objective f. Vassiliades & Mouret (2018) proposed the MAP-Elites (line) algorithm by augmenting the isotropic Gaussian perturbation with a linear interpolation between two solutions θi and θj : θ′ = θi + σ1N (0, I) + σ2N (0, 1)(θi − θj).

CMA-ME. Covariance Matrix Adaptation MAP-Elites (CMA-ME) (Fontaine et al., 2020) combines the archiving mechanisms of MAP-Elites with the adaptation mechanisms of CMA-ES (Hansen, 2016). Instead of perturbing archive solutions with Gaussian noise, CMA-ME maintains a multivariate Gaussian of search directions N (0,Σ) and a search point θ ∈ Rn. The algorithm updates the archive by sampling λ solutions around the current search point θi ∼ N (θ,Σ). After updating the archive, CMA-ME ranks solutions via a two-stage ranking. Solutions that discover a new cell are ranked by the objective ∆i = f(θi), and solutions that map to an occupied cell e are ranked by the improvement over the incumbent solution θe in that cell: ∆i = f(θi)− f(θe). CMA-ME prioritizes exploration by ranking all solutions that discover a new cell before all solutions that improve upon an existing cell. Finally, CMA-ME moves θ towards the largest improvement in the archive, according to the CMA-ES update rules. Fontaine & Nikolaidis (2021a) showed that the improvement ranking of CMA-ME approximates a natural gradient of a modified QD objective (see Eq. 1).
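A minimal sketch of the two perturbation operators described above; the default noise scales are placeholders of our own rather than values from the original papers.

```python
import numpy as np

def map_elites_perturb(theta_i: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Isotropic Gaussian perturbation used by MAP-Elites."""
    return theta_i + sigma * np.random.standard_normal(theta_i.shape)

def map_elites_line_perturb(theta_i: np.ndarray, theta_j: np.ndarray,
                            sigma1: float = 0.5, sigma2: float = 0.2) -> np.ndarray:
    """MAP-Elites (line): isotropic noise plus noise along the line between two elites."""
    iso = sigma1 * np.random.standard_normal(theta_i.shape)
    line = sigma2 * np.random.standard_normal() * (theta_i - theta_j)
    return theta_i + iso + line
```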
4 PROPOSED ALGORITHMS
We present the CMA-MAE algorithm. While we focus on CMA-MAE, the same augmentations apply to CMA-MEGA to form the novel CMA-MAEGA algorithm (see Appendix D).
CMA-MAE. CMA-MAE is an algorithm that adjusts the rate the objective f − fA changes. First, consider at a high level how CMA-ME explores the measure space and discovers high quality solutions. The CMA-ME algorithm maintains a solution point θ and an archive A with previously discovered solutions. When CMA-ME samples a new solution θ′, the algorithm computes the solution’s objective value f(θ′) and maps the solution to a cell e in the archive based on the measure m(θ′). CMA-ME then computes the improvement of the objective value f(θ′) of the new solution, over a discount function fA : Rn → R. In CMA-ME, we define fA(θ′) by computing the cell e in
the archive corresponding to m(θ′) and letting fA(θ′) = f(θe), where θe is the incumbent solution of cell e. The algorithm ranks candidate solutions by improvement f(θ′)− fA(θ′) = f(θ′)− f(θe) and moves the search in the direction of higher ranked solutions.
Assume that CMA-ME samples a new solution θ′ with a high objective value of f(θ′) = 99. If the current occupant θe of the corresponding cell has a low objective value of f(θe) = 0.3, then the improvement in the archive ∆ = f(θ′) − f(θe) = 98.7 is high and the algorithm will move the search point θ towards θ′. Now, assume that in the next iteration the algorithm discovers a new solution θ′′ with objective value f(θ′′) = 100 that maps to the same cell as θ′. The improvement then is ∆ = f(θ′′)− f(θ′) = 1 as θ′ replaced θe in the archive in the previous iteration. CMA-ME would likely move θ away from θ′′ as the solution resulted in low improvement. In contrast, CMA-ES would move towards θ′′ as it ranks only by the objective f , ignoring previously discovered solutions with similar measure values.
In the above example, CMA-ME moves away from high performing solutions in order to maximize how the archive changes. However, in domains with hard-to-optimize objective functions, it is beneficial to perform more optimization steps in high-performing regions (Tjanaka et al., 2022).
Like CMA-ME, CMA-MAE maintains a discount function fA(θ′) and ranks solutions by improvement f(θ′)− fA(θ′). However, instead of setting fA(θ′) equal to f(θe), we set fA(θ′) = te, where te is an acceptance threshold maintained for each cell in the archive A. When adding a candidate solution to the archive, we control the rate that te changes by the archive learning rate α as follows: te ← (1− α)te + αf(θ′). The archive learning rate α in CMA-MAE allows us to control how quickly we leave a highperforming region of measure space. For example, consider discovering solutions in the same cell with objective value 100 in 5 consecutive iterations. The improvement values computed by CMA-ME would be 100, 0, 0, 0, 0, thus CMA-ME would move rapidly away from this cell. The improvement values computed by CMA-MAE with α = 0.5 would diminish smoothly as follows: 100, 50, 25, 12.5, 6.25, enabling further exploitation of the high-performing region.
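The diminishing improvement sequence above can be reproduced with a short loop, assuming the threshold starts at minf = 0:

```python
t_e, alpha, f = 0.0, 0.5, 100.0
for _ in range(5):
    print(f - t_e)                        # improvement Delta = f - t_e
    t_e = (1 - alpha) * t_e + alpha * f   # annealed threshold update
# prints 100.0, 50.0, 25.0, 12.5, 6.25
```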
Next, we walk through the CMA-MAE algorithm step-by-step. Algorithm 1 shows the pseudo-code for CMA-MAE with the differences from CMA-ME highlighted in yellow. First, on line 2 we initialize the acceptance threshold to minf . In each iteration we sample λ solutions around the current search point θ (line 5). For each candidate solution θi, we evaluate the solution and compute the objective value f(θi) and measure values m(θi) (line 6). Next, we compute the cell e in the archive that corresponds to the measure values and the improvement ∆i over the current threshold te (lines 7-8). If the objective crosses the acceptance threshold te, we replace the incumbent θe in the archive and increase the acceptance threshold te (lines 9-11). Next, we rank all candidate solutions θi by their improvement ∆i. Finally, we step our search point θ and adapt our covariance matrix Σ towards the direction of largest improvement (lines 14-15) according to CMA-ES’s update rules (Hansen, 2016).
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affect how we replace solutions in the archive and how we calculate ∆i. CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA, in Appendix D.
5 THEORETICAL PROPERTIES OF CMA-MAE
We provide insights about the behavior of CMA-MAE for different α values. We include all proofs in Appendix E. CMA-MAEGA has similar theoretical properties discussed in Appendix F.
Theorem 5.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
The next theorem states that CMA-ME is equivalent to CMA-MAE when α = 1 with the following caveats: First, we assume that CMA-ME restarts only by the CMA-ES restart rules, rather than the additional “no improvement” restart rule in prior work (Fontaine et al., 2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection (Hansen, 2016) rather than filtering selection (Fontaine et al., 2020).
Algorithm 1 Covariance Matrix Adaptation MAP-Annealing (CMA-MAE)
CMA-MAE(evaluate, θ0, N, λ, σ, minf, α)
input : An evaluation function evaluate that computes the objective and measures, an initial solution θ0, a desired number of iterations N , a branching population size λ, an initial step size σ, a minimal acceptable solution quality minf , and an archive learning rate α.
result: Generate Nλ solutions, storing elites in an archive A.
1  Initialize solution parameters θ to θ0 and CMA-ES parameters Σ = σI and p, where we let p be the CMA-ES internal parameters.
2  Initialize the archive A and the acceptance threshold te with minf for each cell e.
3  for iter ← 1 to N do
4      for i ← 1 to λ do
5          θi ∼ N(θ, Σ)
6          f, m ← evaluate(θi)
7          e ← calculate_cell(A, m)
8          ∆i ← f − te
9          if f > te then
10             Replace the current occupant in cell e of the archive A with θi
11             te ← (1 − α)te + αf
12         end
13     end
14     rank θi by ∆i
15     Adapt CMA-ES parameters θ, Σ, p based on the improvement ranking ∆i
16     if CMA-ES converges then
17         Restart CMA-ES with Σ = σI.
18         Set θ to a randomly selected existing elite θi from the archive
19     end
20 end
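A NumPy sketch of the archive-update portion of Algorithm 1 (lines 5–12) is shown below; the plain-array archive representation and the calculate_cell placeholder are our own simplifications, and CMA-ES sampling and adaptation are omitted.

```python
import numpy as np

def cma_mae_archive_update(candidates, evaluate, calculate_cell,
                           elites, thresholds, alpha=0.01):
    """Evaluate a batch of candidates and apply the annealed acceptance rule.

    thresholds[e] plays the role of t_e; elites[e] stores the current occupant of cell e.
    Returns the improvement values used to rank candidates (line 14 of Algorithm 1).
    """
    deltas = []
    for theta_i in candidates:
        f, m = evaluate(theta_i)              # objective and measures (line 6)
        e = calculate_cell(m)                 # archive cell index (line 7)
        deltas.append(f - thresholds[e])      # improvement over the threshold (line 8)
        if f > thresholds[e]:                 # annealed acceptance (lines 9-11)
            elites[e] = theta_i
            thresholds[e] = (1 - alpha) * thresholds[e] + alpha * f
    return np.array(deltas)
```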
Theorem 5.2. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to the discount function used by CMA-ME, as α increases from 0 to 1. We focus on the special case of a fixed sequence of candidate solutions. Theorem 5.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Finally, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains a density histogram of the occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher.

Theorem 5.4. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
While Theorem 5.4 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
6 EXPERIMENTS
We compare the performance of CMA-MAE with the state-of-the-art QD algorithms MAP-Elites, MAP-Elites (line), and CMA-ME, using existing Pyribs (Tjanaka et al., 2021) QD library implementations. We set α = 0.01 for CMA-MAE and include additional experiments for varying α
in section 7. Because annealing methods replace solutions based on the threshold, we retain the best solution in each cell for comparison purposes. We include additional comparisons between CMA-MEGA and CMA-MAEGA – the gradient-based counterpart of CMA-MAE – in Appendix K.
We select the benchmark domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function that is constant everywhere for solutions within a fixed range and has a quadratic penalty for solutions outside the range. We describe the domains in detail in Appendix B.
6.1 EXPERIMENT DESIGN
Independent Variables. We follow a between-groups design with two independent variables: the algorithm and the domain.
Dependent Variables. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score metric by the archive size (the total number of cells from the tesselation of measure space) to make the metric invariant to archive resolution. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
6.2 ANALYSIS
Table 1 shows the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domain. Fig. 3 shows the QD-score values for increasing number of iterations and example archives for CMA-MAE and CMA-ME, with 95% confidence intervals.
We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, and LSI) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (12, 320) = 1958.34, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAE outperformed all baselines in all benchmark domains.
For the arm repertoire domain, we can compute the optimal archive coverage by testing whether each cell overlaps with a circle of radius equal to the maximum arm length (see Appendix B). We observe that CMA-MAE approaches the computed optimal coverage 80.24% for a resolution of 100× 100 and outperforms CMA-MEGA (Fontaine & Nikolaidis, 2021a) (see Appendix K).
These results show that the archive learning rate α is particularly beneficial for CMA-MAE. We observe that CMA-MAE initially explores regions of the measure space that have high-objective values. Once the archive becomes saturated, CMA-MAE reduces to approximate density descent, as we prove in Theorem 5.4 for flat objectives. On the other hand, CMA-ME does not receive any exploration signal when the objective landscape becomes flat, resulting in poor performance.
While our results show improved quantitative results on the LSI domain, Appendix I discusses how to improve the visual quality by leveraging techniques from the generative art community. Fig. 4 shows an example collage generated by adopting improvements for guiding StyleGAN with CLIP.
7 ON THE ROBUSTNESS OF CMA-MAE
Next, we present two studies that evaluate the robustness of CMA-MAE across two hyperparameters that may affect algorithm performance: the archive learning rate α and the archive resolution.
Archive Learning Rate. We examine the effect of different archive learning rates on the performance of CMA-MAE in the linear projection and arm repertoire domains. We vary the learning rate from 0 to 1 on an exponential scale, while keeping the resolution constant in each domain.
Table 2 shows that running CMA-MAE with different values of α strictly between 0 and 1 results in relatively similar performance, showing that CMA-MAE is fairly robust to the choice of α. On the other hand, if α = 0 or α = 1, the performance drops drastically. Setting α = 1 results in very similar performance to CMA-ME, which supports our insight from Theorem 5.2.
Archive Resolution. As noted by Cully (2021) and Fontaine & Nikolaidis (2021a), quality diversity algorithms in the MAP-Elites family sometimes perform differently when run with different archive resolutions. For example, in the linear projection domain presented in Fontaine et al. (2020), CMA-ME outperformed MAP-Elites and MAP-Elites (line) for archives of resolution 500 × 500, while in this paper we observe that it performs worse for resolution 100 × 100. In this study, we investigate how CMA-MAE performs at different archive resolutions.
First, we note that the optimal archive learning rate α is dependent on the resolution of the archive. Consider as an example a sequence of solution additions to two archives A1 and A2 of resolution 100× 100 and 200× 200, respectively. A2 subdivides each cell in A1 into four cells, thus archive A2’s thresholds te should increase at a four times faster rate than A1. To account for this difference, we compute α2 for A2 via a conversion formula α2 = 1− (1− α1)r (see derivation in Appendix G), where r is the ratio of cell counts between archives A1 and A2. We initialize α1 = 0.01 for A1. In the above example, α2 = 1− (1− 0.01)4 = 0.0394.
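The conversion can be checked numerically with a direct transcription of the formula and the example above.

```python
def convert_alpha(alpha_1: float, cells_1: int, cells_2: int) -> float:
    """Convert an archive learning rate between resolutions: alpha_2 = 1 - (1 - alpha_1)^r."""
    r = cells_2 / cells_1
    return 1 - (1 - alpha_1) ** r

# 100x100 -> 200x200 gives r = 4, so alpha_1 = 0.01 maps to alpha_2 ~= 0.0394.
print(round(convert_alpha(0.01, 100 * 100, 200 * 200), 4))  # 0.0394
```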
Fig. 5 shows the QD-score of CMA-MAE with the resolution-dependent archive learning rate and the baselines for each benchmark domain. CMA-ME performs worse as the resolution decreases because the archive changes quickly at small resolutions, affecting CMA-ME’s adaptation mechanism. On the contrary, MAP-Elites and MAP-Elites (line) perform worse as the resolution increases due to having more elites to perturb. CMA-MAE’s performance is invariant to the resolution of the archive.
8 RELATED WORK
Quality Diversity Optimization. The predecessor to quality diversity optimization, simply called diversity optimization, originated with the Novelty Search algorithm (Lehman & Stanley, 2011a), which searches for a collection of solutions that are diverse in measure space. Later work introduced the Novelty Search with Local Competition (NSLC) (Lehman & Stanley, 2011b) and MAP-Elites (Cully et al., 2015; Mouret & Clune, 2015) algorithms, which combined single-objective optimization with diversity optimization and were the first QD algorithms. Since then, several QD algorithms have been proposed, based on a variety of single-objective optimization methods, such as Bayesian optimization (Kent & Branke, 2020), evolution strategies (Conti et al., 2018; Colas et al., 2020; Fontaine et al., 2020), differential evolution (Choi & Togelius, 2021), and gradient ascent (Fontaine & Nikolaidis, 2021a). Several works have improved selection mechanisms (Sfikas et al., 2021; Cully & Demiris, 2017), archives (Fontaine et al., 2019; Vassiliades et al., 2018; Smith et al., 2016), and perturbation operators (Vassiliades & Mouret, 2018; Nordmoen et al., 2018).
QD with Gradient Information. Several works combine gradient information with quality diversity optimization in ways that do not leverage the objective and measure gradients directly. For example, in model-based quality diversity optimization (Gaier et al., 2018; Hagg et al., 2020; Cazenille et al., 2019; Keller et al., 2020; Lim et al., 2021; Zhang et al., 2021; Gaier et al., 2020), Rakicevic et al. (2021) trains an autoencoder on the archive of solutions and leverages the Jacobian of the decoder network to compute the covariance of the Gaussian perturbation. In quality diversity reinforcement learning (QD-RL), several works (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022) approximate a reward gradient or diversity gradient via a critic network, action space noise, or evolution strategies and incorporate those gradients into a QD-RL algorithm.
Acceptance Thresholds. Our proposed archive learning rate α was loosely inspired by simulated annealing methods (Bertsimas & Tsitsiklis, 1993) that maintain an acceptance threshold that gradually becomes more selective as the algorithm progresses. The notion of an acceptance threshold is also closely related to minimal criterion methods in evolutionary computation (Lehman & Stanley, 2010; Brant & Stanley, 2017; 2020; Stanley et al., 2016). Our work differs by both 1) maintaining an acceptance threshold per archive cell rather than a global threshold and 2) annealing the threshold.
9 LIMITATIONS AND FUTURE WORK
Our approach introduced two hyperparameters, α and minf , to control the rate that f − fA changes. We observed that an α set strictly between 0 and 1 yields theoretical exploration improvements and that CMA-MAE is robust with respect to the exact choice of α. We additionally derived a conversion formula that converts an α1 for a specific archive resolution to an equivalent α2 for a different resolution. However, the conversion formula still requires practitioners to specify a good initial value of α1. Future work will explore ways to automatically initialize α, similar to how CMA-ES automatically assigns internal parameters (Hansen, 2016).
Quality diversity optimization is a rapidly growing branch of stochastic optimization with applications in generative design (Hagg et al., 2021; Gaier et al., 2020; 2018), automatic scenario generation in robotics (Fontaine & Nikolaidis, 2021c; Fontaine et al., 2021a; Fontaine & Nikolaidis, 2021b), reinforcement learning (Parker-Holder et al., 2020; Pierrot et al., 2020; Nilsson & Cully, 2021; Tjanaka et al., 2022), damage recovery in robotics (Cully et al., 2015), and procedural content generation (Gravina et al., 2019; Fontaine et al., 2021b; Zhang et al., 2021; Earle et al., 2021; Khalifa et al., 2018; Steckel & Schrum, 2021; Schrum et al., 2020; Sarkar & Cooper, 2021; Bhatt et al., 2022). Our paper introduces a new quality diversity algorithm, CMA-MAE. Our theoretical findings inform our experiments, which show that CMA-MAE addresses three major limitations affecting the CMA-ME algorithm, leading to state-of-the-art performance.
10 ETHICS STATEMENT
By controlling the trade-off between exploration and exploitation in QD algorithms, we aim towards improving their performance and robustness, thus making these algorithms easier to apply in a wide range of domains and applications. One promising application is synthetically extracting datasets from generative models to train machine learning algorithms Jahanian et al. (2021); Besnier et al. (2020). This can raise ethical considerations because generative models can reproduce and exacerbate existing biases in the datasets that they were trained on (Jain et al., 2020; Menon et al., 2020). On the other hand, quality diversity algorithms with carefully selected measure functions can target diversity with desired attributes, thus we hypothesize that they can be effective in generating balanced datasets. Furthermore, by attempting to find diverse solutions, QD algorithms are a step towards open-endedness in AI Stanley et al. (2017) and will often result in unexpected and often surprising emergent behaviors (Lehman et al., 2020). We recognize that this presents several challenges in predictability and monitoring of AI systems (Hendrycks et al., 2021), and we highlight the importance of future work on balancing the tradeoff between open-endedness and control (Ecoffet et al., 2020).
11 REPRODUCIBILITY STATEMENT
In the supplemental material we provide complete source code for all algorithms and experiments, as well as the Conda environments for installing project dependencies. The “README.md” document provides complete instructions for both setup and execution of all experiments. In Appendix A we provide all hyperparameters. In Appendix B we provide domain-specific details for replicating all experimental domains. In Appendix C we provide information about the computational resources and hardware we used to run our experiments. In Appendix D we provide the pseudocode for the CMA-MAEGA algorithm, the DQD counterpart of CMA-MAE. In Appendix E we provide the proofs of all theorems in the paper. In Appendix F we provide the theoretical properties of CMA-MAEGA. In Appendix G we provide the derivation of the conversion formula for the archive learning rate. In Appendix H we provide a batch threshold update rule that is invariant to the order in which solutions are processed within a batch update. In Appendix I we discuss the implementation details for additional experiments that improve the quality of the generated images in the latent space illumination domain. In Appendix K we present all metrics with standard errors for each algorithm and domain.
APPENDIX
A HYPERPARAMETER SELECTION
For all domains we mirror the hyperparameter selection of Fontaine & Nikolaidis (2021a). For CMA-MAE and CMA-MAEGA, we duplicate the hyperparameter selections of CMA-ME and CMA-MEGA, respectively. Following Fontaine et al. (2020), we run all algorithms with 15 emitters on the linear projection and arm repertoire domains. In the latent space illumination domain, we run experiments with only one emitter, due to the computational expense of the domain. Emitters are independent CMA-ES instances that run in parallel with a shared archive. For each algorithm, we select a batch size λ = 36 following Fontaine & Nikolaidis (2021a). For MAP-Elites and MAP-Elites (line), we initialize the archive with 100 random solutions, sampled from the distribution N (0, I). These initial solutions do not count in the evaluation budget for MAP-Elites and MAP-Elites (line). For algorithms in the CMA-ME family (CMA-ME, CMA-MAE, CMA-MEGA, and CMA-MAEGA), we initialize θ0 = 0 for every domain.
In our experiments we want to directly compare the ranking mechanisms of CMA-ME and CMA-MAE. However, CMA-ME is typically run with a “no improvement” restart rule, where the algorithm will restart if no solution changes the archive. Due to CMA-MAE’s annealed acceptance threshold te, a “no improvement” restart rule would cause CMA-ME and CMA-MAE to restart at different rates, confounding the effects of restarts and rankings. Filter selection also has a similar confounding effect as solutions are selected if they change the archive. For these reasons, in the main paper we run CMA-ME with a basic restart rule (CMA-ES style restarts only (Hansen, 2016)) and µ selection (Hansen, 2016) (selecting the top half of the ranking). In Appendix Section K, we run an extra CMA-ME with filter selection and the “no improvement” restart rule, which we denote CMA-ME*. We include, as an additional baseline, a configuration of CMA-ME that mixes emitters that optimize only for the objective with emitters that optimize for improvement, a configuration first studied by Cully (2021). We refer to this configuration as CMA-ME (imp, opt).
In the latent space illumination domain, due to the computational expense of the domain, we compare directly against the results from Fontaine & Nikolaidis (2021a), where we obtained the data (MIT license) with consent from the authors. For CMA-MAE and CMA-MAEGA we include the “no improvement” restart rule to match CMA-ME and CMA-MEGA as closely as possible. For this domain, we take gradient steps with the Adam optimizer (Kingma & Ba, 2015), following the recommendation of Fontaine & Nikolaidis (2021a). However, we run CMA-MAE with µ selection, since we found that small values of the archive learning rate α make filter selection worse.
In Appendix I, we describe a second LSI experiment on StyleGAN2 (Karras et al., 2020b) configured by insights from the generative art community that improve the quality of single-objective latent space optimization. For this domain, we configure CMA-MAEGA and CMA-MEGA to use a “basic” restart rule because the latent space L2 regularization keeps solutions in the StyleGAN2 training distribution. For this experiment, the latent space is large (n = 9216), so we exclude CMA-ME and CMA-MAE due to the size of the covariance matrix (9216× 9216) and the prohibitive cost for computing an eigendecomposition of a large covariance matrix.
Linear Projection (sphere, Rastrigin, plateau).
• MAP-Elites: σ = 0.5
• MAP-Elites (line): σ1 = 0.5, σ2 = 0.2
• CMA-ME: σ = 0.5, µ selection, basic restart rule
• CMA-ME*: σ = 0.5, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.5, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.5, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 10.0, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 10.0, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Arm Repertoire.
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.2, µ selection, basic restart rule
• CMA-ME*: σ = 0.2, filter selection, no improvement restart rule
• CMA-ME (imp, opt): σ = 0.2, µ selection, basic restart rule, 7 optimizing and 8 improvement emitters
• CMA-MAE: σ = 0.2, α = 0.01, minf = 0, µ selection, basic restart rule
• CMA-MEGA: σg = 0.05, η = 1.0, basic restart rule, gradient ascent optimizer
• CMA-MAEGA: σg = 0.05, η = 1.0, α = 0.01, minf = 0, basic restart rule, gradient ascent optimizer
Latent Space Illumination. (StyleGAN)
• MAP-Elites: σ = 0.2
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-ME: σ = 0.02, filter selection, no improvement restart rule
• CMA-MAE: σ = 0.02, α = 0.1, minf = 55, µ selection, no improvement restart rule, 50 iteration timeout
• CMA-MEGA: σg = 0.002, η = 0.002, Adam optimizer, no improvement restart rule
• CMA-MAEGA: σg = 0.002, η = 0.002, α = 0.1, minf = 55, Adam optimizer, no improvement restart rule, 50 iteration timeout
Latent Space Illumination. (StyleGAN 2)
• MAP-Elites: σ = 0.1
• MAP-Elites (line): σ1 = 0.1, σ2 = 0.2
• CMA-MEGA: σg = 0.01, η = 0.05, Adam optimizer, basic restart rule
• CMA-MAEGA: σg = 0.01, η = 0.05, α = 0.02, minf = 0, Adam optimizer, basic restart rule
Adam Hyperparameters. We use the same hyperparameters as previous work Perez (2021); Fontaine & Nikolaidis (2021a).
• β1 = 0.9
• β2 = 0.999
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following Fontaine & Nikolaidis (2021a).
B DOMAIN DETAILS
To experimentally evaluate both CMA-MAE and CMA-MAEGA, we select domains from Fontaine & Nikolaidis (2021a): linear projection (Fontaine et al., 2020), arm repertoire (Cully & Demiris, 2017), and latent space illumination (Fontaine et al., 2021b). While many quality diversity optimization domains exist, we select these because gradients of f and m are easy to compute analytically and allow us to evaluate DQD algorithms in addition to derivative-free QD algorithms. To evaluate the good exploration properties of CMA-MAE on flat objectives, we introduce a variant of the linear projection domain to include a “plateau” objective function.
Linear Projection. The linear projection domain (Fontaine et al., 2020) was introduced to benchmark distortions caused by mapping a high-dimensional search space to a low-dimensional measure space.
The domain forms a 2D measure space by a linear projection that bounds the contribution of each component θi of the projection to the range [−5.12, 5.12]. QD algorithms must adapt the step size of each component θi to slowly approach the extremes of the measure space, with a harsh penalty for components outside [−5.12, 5.12]. As QD domains must provide an objective, the linear projection domain included two objectives from the black-box optimization benchmarks (Hansen et al., 2016; 2010): sphere and Rastrigin. Following Fontaine et al. (2020), we run all experiments for n = 100.
Formally, the measure functions are defined as a linear projection, a weighted sum of the components θi ∈ R of a solution θ ∈ Rn. The first measure function m1 is a weighted sum of the first half of the solution θ, and the second measure function m2 is a weighted sum of the second half of the solution θ (see Eq. 3). To ensure that all solutions mapped to measure space occupy a finite volume, the contribution in measure space of each component θi is bounded to the range [−5.12, 5.12] via a clip function (see Eq. 2) that applies a harsh penalty for solution components θi stepping outside the range [−5.12, 5.12].
clip(θi) = θi if −5.12 ≤ θi ≤ 5.12, and 5.12/θi otherwise.   (2)

m(θ) = ( ∑_{i=1}^{⌊n/2⌋} clip(θi), ∑_{i=⌊n/2⌋+1}^{n} clip(θi) )   (3)

Fig. 6 visualizes why the linear projection domain is challenging. First, we note that the density of solutions in search space mapped to measure space mostly occupies the region close to 0. To justify why, consider sampling uniformly in the hypercube [−5.12, 5.12]^n in search space. Each of these points maps to the linear region of the measure functions, and each of our measures becomes a sum of random variables. If we divide by n, normalizing by the dimension of the search space, the measure functions become an average of random variables. The average of n uniform random variables follows the Bates distribution (Johnson et al., 1995), a distribution whose variance narrows as n grows larger. Without the clip function, a QD algorithm could simply increase a single θi to reach any point in the measure space. However, the clip function prevents this by bounding the contribution of each component of θ to the range [−5.12, 5.12]. To reach the extremes of measure space, all components θi must converge to the extremes ±5.12. The linear projection domain is challenging to explore due to both the clustering of solutions in a small region of measure space and the heavy measure space penalties applied by the clip function when a component θi leaves the region [−5.12, 5.12]. Next, we describe the linear projection domain's objective functions, visualized in Fig. 7.
The objectives of the linear projection domain satisfy the requirement that a QD domain needs an objective and are of lesser importance than the measure function definitions, since the benchmark primarily evaluates exploration capabilities. Fontaine et al. (2020) selected two objectives from the black-box optimization benchmarks competition (Hansen et al., 2016; 2010): sphere and Rastrigin. The sphere function (Eq. 4) is a quadratic function (see footnote 2), while the Rastrigin function (Eq. 5) is a multi-modal function that when smoothed is quadratic. The domain shifts the global optimum to the position θi = 5.12 · 0.4 = 2.048.

fsphere(θ) = ∑_{i=1}^{n} θi²   (4)

fRastrigin(θ) = 10n + ∑_{i=1}^{n} [θi² − 10 cos(2πθi)]   (5)
We introduce an additional objective to evaluate the good exploration properties of CMA-MAE on flat objectives. Our “plateau” objective function (Eq. 7) is constant everywhere, but with a quadratic penalty for each component outside the range [−5.12, 5.12]. The penalty acts as a regularizer to encourage algorithms to search in the linear region of measure space.
fplateau(θi) = 0 if −5.12 ≤ θi ≤ 5.12, and (|θi| − 5.12)² otherwise.   (6)

fplateau(θ) = (1/n) ∑_{i=1}^{n} fplateau(θi)   (7)
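For concreteness, the following is a minimal NumPy sketch of the linear projection domain, implementing the clip function (Eq. 2), the measures (Eq. 3), and the sphere, Rastrigin, and plateau objectives (Eqs. 4–7). The way the 2.048 optimum shift is applied is an assumption made for illustration; this is not the reference implementation.

```python
import numpy as np

SHIFT = 5.12 * 0.4  # global optimum shifted to 2.048 per component (sphere/Rastrigin)

def clip(theta):
    """Eq. 2: bound each component's contribution to [-5.12, 5.12]."""
    clipped = np.array(theta, dtype=float)
    outside = np.abs(clipped) > 5.12
    clipped[outside] = 5.12 / clipped[outside]
    return clipped

def measures(theta):
    """Eq. 3: sums of clipped components over the first and second halves of theta."""
    clipped = clip(theta)
    half = len(theta) // 2
    return np.array([clipped[:half].sum(), clipped[half:].sum()])

def f_sphere(theta):
    x = np.asarray(theta) - SHIFT  # assumed form of the optimum shift
    return np.sum(x ** 2)

def f_rastrigin(theta):
    x = np.asarray(theta) - SHIFT  # assumed form of the optimum shift
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def f_plateau(theta):
    """Eqs. 6-7: zero inside [-5.12, 5.12], quadratic penalty outside, averaged over components."""
    theta = np.asarray(theta, dtype=float)
    penalty = np.where(np.abs(theta) <= 5.12, 0.0, (np.abs(theta) - 5.12) ** 2)
    return np.mean(penalty)
```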
Arm Repertoire. The arm repertoire domain (Cully & Demiris, 2017; Vassiliades & Mouret, 2018) tasks QD algorithms to find a diverse collection of arm positions for an n-dimensional planar robotic arm with revolute joints. The measures in this domain are the 2D coordinates of the robot's end-effector and the objective is to minimize the variance of the joint angles.
In Fig. 8, we visualize example arms for n = 5 (5-DOF). The optimal solutions in this domain have 0 variance between all joint angles. The measure functions are bounded to the range [−n, n] as each arm segment has unit length. The reachable cells form a circle of radius n. Therefore, the optimal archive coverage is approximately πn²/(4n²) = π/4 ≈ 78.5%. This ratio is an upper bound that the archive approaches more tightly at higher resolutions. We select n = 100 (100-DOF) arms for the experiments.
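A minimal sketch of the arm repertoire evaluation follows, assuming standard planar forward kinematics for unit-length links; the exact angle conventions are an assumption for illustration.

```python
import numpy as np

def evaluate_arm(theta):
    """theta: joint angles (radians) of an n-DOF planar arm with unit-length links."""
    objective = np.var(theta)               # to be minimized: variance of joint angles
    orientation = np.cumsum(theta)          # absolute orientation of each link
    measures = np.array([np.cos(orientation).sum(),   # end-effector x in [-n, n]
                         np.sin(orientation).sum()])  # end-effector y in [-n, n]
    return objective, measures
```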
Latent Space Illumination. Prior work introduced the latent space illumination problem (Fontaine et al., 2021b), the problem of searching the latent space of a generative model with a quality diversity algorithm. We evaluate on the StyleGAN+CLIP version of this problem (Fontaine & Nikolaidis, 2021a), by searching the latent space of StyleGAN (Karras et al., 2019) with a QD algorithm. We form the differentiable objective and measures in this domain by specifying text prompts to the CLIP model (Radford et al., 2021), which can determine the similarity of an image and text. We specify an objective prompt of “A photo of Beyonce”. For measures, we would like CLIP to quantify abstract concepts like the hair length or age of the person in the photo. However, CLIP can only determine the similarity of an image and a text prompt. As surrogates for age and hair length, we specify the measure prompts “A small child” and “A woman with long blonde hair”. The objective and measure functions guide the QD algorithms towards discovering a collection of photos of Beyoncé with varying age and hair length.

Footnote 2: In derivative-free optimization, many of the benchmark functions are named after the shape of their contour lines. In the case of quadratic functions with an identity Hessian matrix, the contour lines form hyperspheres.
For our additional LSI experiment on StyleGAN2 with setup improvements, see Appendix I.
Transformations of the Objective Function. We highlight two issues that must be addressed by transforming the objective in each domain. First, we note that the problem definition in each of our domains contains an objective f that must be minimized. In contrast, the QD problem definition specifies an objective f that must be maximized. Second, the QD-score metric, which measures the performance of QD algorithms, requires a non-negative objective function. Following prior work (Fontaine et al., 2020; Fontaine & Nikolaidis, 2021a), we transform the objective f via a linear transformation: f ′ = af + b. The linear transformation maps function outputs to the range [0, 100].
In the linear projection domain, we estimate the largest objective value for the sphere and Rastrigin functions within the region [−5.12, 5.12] for each solution component θi. We compute f(−5.12, −5.12, ..., −5.12) for each objective as the maximum. The minimum of each function is 0. We calculate the linear transformation as:

f′(θ) = 100 · (f(θ) − fmax) / (fmin − fmax)   (8)
For our new plateau objective, all solution points within the region [−5.12, 5.12]n have objective value of 0. For this objective we set fmin = 0 and fmax = 100 and apply the transformation in Eq. 8.
For the arm domain we select fmin = 0 and fmax = 1, and in the LSI domain we select fmin = 0 and fmax = 10. We select these values to match Fontaine & Nikolaidis (2021a).
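The transformation in Eq. 8 amounts to one line of code; a sketch using the per-domain fmin/fmax values listed above:

```python
def normalize_objective(f, f_min, f_max):
    """Eq. 8: map a minimization objective onto a maximization objective in [0, 100]."""
    return 100.0 * (f - f_max) / (f_min - f_max)

# Example (arm repertoire): f_min = 0, f_max = 1, so an arm with zero joint-angle
# variance is mapped to a score of 100.
```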
C IMPLEMENTATION
We replicate the implementation details of prior work (Fontaine & Nikolaidis, 2021a).
Archives. For the linear projection and arm repertoire domains, we initialize an archive of 100× 100 cells for all algorithms. For latent space illumination we initialize an archive of 200× 200 cells for all algorithms, following previous work (Fontaine & Nikolaidis, 2021a).
Metrics. We use the sum of f values of all cells in the archive, defined as the QD-score (Pugh et al., 2015), as a metric for the quality and diversity of solutions. Following Fontaine & Nikolaidis (2021a), we normalize the QD-score by the total number of cells, both occupied and unoccupied, to make QD-score invariant to the resolution of the archive. We additionally compute the coverage, defined as the number of occupied cells in the archive divided by the total number of cells.
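A sketch of both metrics, assuming the archive is stored as a mapping from occupied cell indices to normalized objective values in [0, 100]:

```python
def qd_metrics(archive, total_cells):
    """archive: dict mapping occupied cell index -> normalized objective in [0, 100]."""
    qd_score = sum(archive.values()) / total_cells  # QD-score, normalized by archive size
    coverage = len(archive) / total_cells           # fraction of occupied cells
    return qd_score, coverage
```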
Computational Resources. We ran all trials of the linear projection and arm repertoire domains on an AMD Ryzen Threadripper 32-core (64 threads) processor. A run of 20 trials in parallel takes about 20 minutes for the linear projection domain and 25 minutes for the arm repertoire domain. For the latent space illumination domain, we accelerate the StyleGAN+CLIP pipeline on a GeForce RTX 3090 Nvidia GPU. One trial for latent space illumination takes approximately 2 hours and 30 minutes for StyleGAN and approximately 3 hours and 30 minutes for StyleGAN2. In all domains, runtime increases when an algorithm obtains better coverage, because we iterate over the archive when QD statistics are calculated.
Software Implementation. We use the open source Pyribs (Tjanaka et al., 2021) library for all algorithms. We implemented the CMA-MAE and CMA-MAEGA algorithms using the same library.
D COVARIANCE MATRIX ADAPTATION MAP-ELITES VIA A GRADIENT ARBORESCENCE (CMA-MAEGA)
In this section, we provide background on the CMA-MEGA differentiable quality diversity (DQD) algorithm, and we derive CMA-MAE's DQD counterpart: CMA-MAEGA.
CMA-MEGA. Covariance Matrix Adaptation MAP-Elites via Gradient Arborescence (CMA-MEGA) solves the DQD problem, where the objective f and measures m are first-order differentiable. Like CMA-ME, the algorithm maintains a solution point θ ∈ Rn and a MAP-Elites archive. CMA-MEGA samples new solutions by perturbing the search point θ via the objective and measure gradients. However, the contribution of each gradient is balanced by gradient coefficients c: θi = θ + c0∇f(θ) + ∑_{j=1}^{k} cj∇mj(θ). These coefficients are sampled from a multivariate Gaussian distribution N(µ,Σ) maintained by the algorithm. After sampling new candidate solutions θi, the solutions are ranked via the improvement ranking from CMA-ME. CMA-MEGA updates N(µ,Σ) via the CMA-ES update rules and also steps θ in the direction of largest archive improvement. The authors showed that CMA-MEGA approximates a natural gradient step of the QD objective (Eq. 1), but with respect to the gradient coefficients.
CMA-MAEGA. We note that our augmentations to the CMA-ME algorithm only affect how we replace solutions in the archive and how we calculate ∆i. CMA-ME and CMA-MEGA replace solutions and calculate ∆i identically, so we apply the same augmentations from CMA-MAE to CMA-MEGA to form a new DQD algorithm, CMA-MAEGA. Algorithm 2 shows the pseudo-code for CMA-MAEGA with the differences from CMA-MEGA highlighted in yellow.
Algorithm 2 Covariance Matrix Adaptation MAP-Annealing via a Gradient Arborescence (CMA-MAEGA) CMA-MAEGA (evaluate,θ0, N, λ, η, σg, minf , α)
input : An evaluation function evaluate that computes the objective, the measures, and the gradients of the objective and measures, an initial solution θ0, a desired number of iterations N , a branching population size λ, a learning rate η, an initial step size for CMA-ES σg , a minimal acceptable solution quality minf , and an archive learning rate α.
result: Generate N(λ + 1) solutions, storing elites in an archive A.

Initialize solution parameters θ to θ0, CMA-ES parameters µ = 0, Σ = σg I, and p, where p denotes the CMA-ES internal parameters.
Initialize the archive A and the acceptance threshold te with minf for each cell e.
for iter ← 1 to N do
    f, ∇f, m, ∇m ← evaluate(θ)
    ∇f ← normalize(∇f), ∇m ← normalize(∇m)
    if f > te then
        Replace the current elite in cell e of the archive A with θ
        te ← (1 − α)te + αf
    end
    for i ← 1 to λ do
        c ∼ N(µ, Σ)
        ∇i ← c0∇f + ∑_{j=1}^{k} cj∇mj
        θ′i ← θ + ∇i
        f′, ∗, m′, ∗ ← evaluate(θ′i)
        ∆i ← f′ − te
        if f′ > te then
            Replace the current occupant in cell e of the archive A with θ′i
            te ← (1 − α)te + αf′
        end
    end
    rank ∇i by ∆i
    ∇step ← ∑_{i=1}^{λ} wi∇rank[i]
    θ ← θ + η∇step
    Adapt CMA-ES parameters µ, Σ, p based on the improvement ranking ∆i
    if there is no change in the archive then
        Restart CMA-ES with µ = 0, Σ = σg I
        Set θ to a randomly selected existing elite θi from the archive
    end
end
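The archive update that distinguishes CMA-MAE and CMA-MAEGA from their predecessors is compact; below is a minimal Python sketch of a dictionary-based archive with the annealed acceptance threshold. The emitter loop, gradient branching, and CMA-ES adaptation are intentionally omitted, and the data layout is an assumption made for illustration.

```python
class AnnealedArchive:
    """Minimal MAP-Annealing archive: per-cell elites and acceptance thresholds."""

    def __init__(self, alpha, min_f):
        self.alpha = alpha       # archive learning rate
        self.min_f = min_f       # initial acceptance threshold for every cell
        self.elites = {}         # cell index -> (solution, objective)
        self.thresholds = {}     # cell index -> acceptance threshold t_e

    def add(self, cell, solution, objective):
        """Attempt to add a solution; return the improvement Delta_i = f - t_e used for ranking."""
        t_e = self.thresholds.get(cell, self.min_f)
        delta = objective - t_e
        if objective > t_e:
            self.elites[cell] = (solution, objective)
            # Anneal the acceptance threshold toward the new objective value.
            self.thresholds[cell] = (1.0 - self.alpha) * t_e + self.alpha * objective
        return delta
```

With alpha = 0 the thresholds never move (matching Theorem E.1), and with alpha = 1 each threshold jumps to the elite's objective (matching Lemma E.2), recovering the two limiting behaviors discussed in Appendix E.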
Experiments. We compare CMA-MEGA and CMA-MAEGA in the six benchmark domains. Table 3 and Table 4 show the QD-score and coverage values for each algorithm and domain, averaged over 20 trials for the linear projection (LP) and arm repertoire domains and over 5 trials for the LSI domains. We conducted a two-way ANOVA to examine the effect of the algorithm and domain (LP (sphere), LP (Rastrigin), LP (plateau), arm repertoire, LSI (StyleGAN), and LSI (StyleGAN2)) on the QD-score. There was a significant interaction between the search algorithm and the domain (F (5, 168) = 165.7, p < 0.001). Simple main effects analysis with Bonferroni corrections showed that CMA-MAEGA outperformed CMA-MEGA in the LP (sphere), arm repertoire, and LSI (StyleGAN2) domains. There was no statistically significant difference between the two algorithms in the LP (Rastrigin), LP (plateau), and LSI (StyleGAN) domains.
We attribute the absence of a statistical difference in the QD-score between the two algorithms on the LP (Rastrigin) and LP (plateau) domains to the perfect coverage obtained by both algorithms. Thus, any differences in QD-score are based on the objective values of the solutions returned by each algorithm. In LP (plateau), the optimal objective for each cell is easily obtainable for both methods. The LP (Rastrigin) domain contains many local optima, because of the form of the objective function (Eq. 5). CMA-MEGA will converge to these optima before restarting, behaving as a single-objective optimizer within each local optimum. Because of the large number of local optima in the domain, this behavior still results in a high QD-score.
In the LSI (StyleGAN) domain, we attribute the similar performance between CMA-MEGA and CMA-MAEGA to the restart rules used to keep each search within the training distribution of StyleGAN. On the other hand, in the LSI (StyleGAN2) domain, we regularize the search space by an L2 penalty in latent space, allowing for a larger learning rate and a basic restart rule for both algorithms, while still preventing drift out of the training distribution of StyleGAN2. Because of the fewer restarts, CMA-MAEGA can take advantage of the density descent property, which was shown to improve exploration in CMA-MAE, and outperform CMA-MEGA. We note that because StyleGAN2 has better conditioning on the latent space (Karras et al., 2020b), it is better suited for gradient-based optimizers, which helps better distinguish between the two algorithms.
E THEORETICAL PROPERTIES OF CMA-MAE
Theorem E.1. The CMA-ES algorithm is equivalent to CMA-MAE when α = 0, if CMA-ES restarts from an archive solution.
Proof. CMA-ES and CMA-MAE differ only on how they rank solutions. CMA-ES ranks solutions purely based on the objective f , while CMA-MAE ranks solutions by f − te, where te is the acceptance threshold initialized by minf . Thus, to show that CMA-ES is equivalent to CMA-MAE for α = 0, we only need to show that they result in identical rankings.
In CMA-MAE, te is updated as follows: te ← (1 − α)te + αf. For α = 0, te = minf is invariant for the whole algorithm: te ← 1 · te + 0 · f = te. Therefore, CMA-MAE ranks solutions based on f − minf. However, comparison-based sorting is invariant to order-preserving transformations of the values being sorted (Hansen, 2016). Thus, CMA-ES and CMA-MAE rank solutions identically.
Next, we prove that CMA-ME is equivalent to CMA-MAE with the following caveats. First, we assume that CMA-ME restarts only with the CMA-ES restart rules, rather than the additional “no improvement” restart condition from Fontaine et al. (2020). Second, we assume that both CMA-ME and CMA-MAE leverage µ selection rather than filtering selection.
Lemma E.2. During execution of the CMA-MAE algorithm with α = 1, the threshold te is equal to f(θe) for cells that are occupied by a solution θe and to minf for all empty cells.
Proof. We will prove the lemma by induction. All empty cells are initialized with te = minf , satisfying the basis step. Then, we will show that if the statement holds after k archive updates, it will hold after a subsequent update k + 1.
Assume that at step k we generate a new solution θi mapped to a cell e. We consider two cases:
Case 1: The archive cell e is empty. Then, f(θi) > minf and both CMA-ME and CMA-MAE will place θi in the archive as the new cell occupant θe. The threshold te is updated as te = (1 − α)te + αf(θe) = 0 · minf + 1 · f(θe) = f(θe). Case 2: The archive cell e contains an incumbent solution θe. Then, either f(θi) ≤ f(θe) or f(θi) > f(θe). If f(θi) ≤ f(θe), then the archive does not change and the inductive step holds via the inductive hypothesis. If f(θi) > f(θe), then θi becomes the new cell occupant θe and te is updated as te = (1 − α)te + αf(θe) = 0 · te + 1 · f(θe) = f(θe).
Theorem E.3. The CMA-ME algorithm is equivalent to CMA-MAE when α = 1 and minf is an arbitrarily large negative number.
Proof. Both CMA-ME and CMA-MAE rank candidate solutions θi based on improvement values ∆i. While CMA-ME and CMA-MAE compute ∆i differently, we will show that for α = 1, the rankings are identical for the two algorithms.
We assume a new candidate solution mapped to a cell e. We describe first the computation of ∆i for CMA-ME. CMA-ME ranks solutions that discover an empty cell based on their objective value. Thus,
if θi discovers an empty cell, ∆i = f(θi). On the other hand, if θi is mapped to a cell occupied by another solution θe, it will rank θi based on the improvement ∆i = f(θi)− f(θe). CMA-ME performs a two-stage ranking, where it ranks all solutions that discover empty cells before solutions that improve occupied cells.
We now show the computation of ∆i for CMA-MAE with α = 1. If θi discovers an empty cell, ∆i = f(θi) − te and by Lemma E.2 ∆i = f(θi) − minf. If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi) − te and by Lemma E.2 ∆i = f(θi) − f(θe). Comparing the values ∆i between the two algorithms we observe the following: (1) If θi discovers an empty cell, ∆i = f(θi) − minf for CMA-MAE. However, minf is a constant and comparison-based sorting is invariant to order-preserving transformations (Hansen, 2016), thus ranking by ∆i = f(θi) − minf is identical to ranking by ∆i = f(θi) performed by CMA-ME. (2) If θi is mapped to a cell occupied by another solution θe, ∆i = f(θi) − f(θe) for both algorithms. (3) Because minf is an arbitrarily large negative number, f(θi) − minf > f(θi) − f(θe). Thus, CMA-MAE will always rank solutions that discover empty cells before solutions that are mapped to occupied cells, identically to CMA-ME.
We next provide theoretical insights on how the discount function fA smoothly increases from a constant function minf to CMA-ME’s discount function as α increases from 0 to 1. We show this for the special case of a fixed sequence of candidate solutions. Theorem E.4. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAE that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We prove the theorem via induction over the sequence of solution additions. fA is the histogram formed by the thresholds te over all archive cells e in the archive. Thus, we prove fAi ≤ fAj by showing that te(Ai) ≤ te(Aj) for all archive cells e after m archive additions. As a basis step, we note that Ai equals Aj as both archives are initialized with minf .
Our inductive hypothesis states that after k archive additions we have te(Ai) ≤ te(Aj), and we need to show that te(Ai) ≤ te(Aj) after solution θk+1 is added to each archive. Our solution θk+1 has three cases with respect to the acceptance thresholds:
Case 1: f(θk+1) ≤ te(Ai) ≤ te(Aj). The solution is not added to either archive and our property holds from the inductive hypothesis.
Case 2: te(Ai) ≤ f(θk+1) ≤ te(Aj). The solution is added to Ai, but not Aj , thus t′e(Aj) = te(Aj). We follow the threshold update: t′e(Ai) = (1− αi)te(Ai) + αif(θk+1). Next, we need to show that t′e(Ai) ≤ t′e(Aj) to complete the inductive step:
(1− αi)te(Ai) + αif(θk+1) ≤ f(θk+1) ⇐⇒ (1− αi)te(Ai) ≤ (1− αi)f(θk+1) ⇐⇒
te(Ai) ≤ f(θk+1) as 1− αi ≥ 0
The last inequality holds true per our initial assumption for Case 2. Also from the Case 2 assumption, we have f(θk+1) ≤ te(Aj) = t′e(Aj). Case 3: te(Ai) ≤ te(Aj) ≤ f(θk+1). The solution θk+1 is added to both archives. We need to show that t′e(Ai) ≤ t′e(Aj):
t′e(Ai) ≤ t′e(Aj) ⇐⇒ (1− αi)te(Ai) + αif(θk+1) ≤ (1− αj)te(Aj) + αjf(θk+1) (9)
We can rewrite Eq. 9 as:
(1− αj)te(Aj)− (1− αi)te(Ai) + αjf(θk+1)− αif(θk+1) ≥ 0 (10)
First, note that:
(1− αj)te(Aj)− (1− αi)te(Ai) ≥ (1− αj)te(Ai)− (1− αi)te(Ai) = (1− αj − 1 + αi)te(Ai) = (αi − αj)te(Ai).
Thus: (1− αj)te(Aj)− (1− αi)te(Ai) ≥ (αi − αj)te(Ai) (11)
From Eq. 10 and 11 we have:
(1− αj)te(Aj) + αjf(θk+1)− (1− αi)te(Ai)− αif(θk+1) ≥ (αi − αj)te(Ai) + (αj − αi)f(θk+1) = (αj − αi)(f(θk+1)− te(Ai))
As αj > αi and f(θk+1) ≥ te(Ai), we have (αj − αi)(f(θk+1) − te(Ai)) ≥ 0. This completes the proof that Eq. 10 holds.
As all cases in our inductive step hold, our proof by induction is complete.
Next, we wish to provide insights about the exploration properties of CMA-MAE for an archive learning rate α between 0 and 1, when the objective f is constant. Consider an approximate density descent algorithm that is identical to CMA-ME, but differs by how solutions are ranked. Specifically, the algorithm maintains a histogram of occupancy counts oe for each cell e, with oe representing the number of times a solution was generated in that cell. This algorithm descends the density histogram by ranking solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher.
Lemma E.5. The threshold te after k additions to cell e forms a strictly increasing sequence for a constant objective function f(θ) = C for all θ ∈ Rn, when 0 < α < 1 and minf < C.
Proof. To show that te after k additions to cell e forms a strictly increasing sequence, we write a recurrence relation for te after k solutions have been added to cell e. Let te(k) = (1 − α)te(k − 1) + αf(θi) and te(0) = minf be that recurrence relation; since the objective is constant, f(θi) = C. To show the recurrence is increasing, we need to show that te(k) > te(k − 1) for all k ≥ 1. We prove the inequality via induction over cell additions k. As a basis step, we show te(1) > te(0): (1 − α)minf + αC > minf ⇐⇒ αC − α · minf > 0 ⇐⇒ αC > α · minf. As C > minf and α > 0, the basis step holds.
For the inductive step, we assume that te(k) > te(k − 1) and need to show that te(k + 1) > te(k): te(k + 1) > te(k) ⇐⇒ (1 − α)te(k) + αC > (1 − α)te(k − 1) + αC ⇐⇒ (1 − α)te(k) > (1− α)te(k − 1) ⇐⇒ te(k) > te(k − 1).
Theorem E.6. The CMA-MAE algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descent algorithm, when 0 < α < 1 and minf < C.
Proof. We will prove that for an arbitrary archive A with both the occupancy count for each cell oe and the threshold value te computed with an arbitrary learning rate 0 < α < 1, CMA-MAE results in the same ranking for an arbitrary batch of solutions {θi} as the approximate density descent algorithm. We let θi and θj be two arbitrary solutions in the batch mapped to cells ei and ej. Without loss of generality, we let oei ≤ oej. The approximate density descent algorithm will thus rank θi before θj. We will show that CMA-MAE results in the same ranking.
If oei ≤ oej , and since te is a strictly increasing function from Lemma E.5: tei(oei) ≤ tej (oej ). We have tei(oei) ≤ tej (oej ) ⇐⇒ C − tei(oei) ≥ C − tej (oej ). Thus, the archive improvement by adding θi to the archive is larger than the improvement by adding θj and CMA-MAE will rank θi higher than θj , identically with density descent.
While Theorem E.6 assumes a constant objective f , we conjecture that the theorem holds true generally when threshold te in each cell e approaches the local optimum within the cell boundaries.
Conjecture E.7. The CMA-MAE algorithm becomes equivalent to the density descent algorithm for a subset of archive cells for an arbitrary convex objective f , where the cardinality of the subset of cells increases as the number of iterations increases.
We provide intuition for our conjecture through the lens of the elite hypervolume hypothesis (Vassiliades & Mouret, 2018). The elite hypervolume hypothesis states that optimal solutions for the MAP-Elites archive form a connected region in search space. Later work (Rakicevic et al., 2021) connected the elite hypervolume hypothesis to the manifold hypothesis (Fefferman et al., 2016) in machine learning, stating that the elite hypervolume can be represented by a low-dimensional manifold in search space.
For our conjecture, we assume that the elite hypervolume hypothesis holds and there exists a smooth manifold that represents the hypervolume. Next, we assume in the conjecture that f is an arbitrary convex function. As f is convex, early in the CMA-MAE search the discount function fA will be flat and the search point θ will approach the global optimum following CMA-ES’s convergence properties (Hansen & Ostermeier, 1997; Hansen et al., 2003), where the precision of convergence is controlled by archive learning rate α. By definition, the global optimum θ∗ is within the elite hypervolume as no other solution of higher quality exists within its archive cell. Assuming the elite hypervolume hypothesis holds, a subset of adjacent solutions in search space will also be in the hypervolume due to the connectedness of the hypervolume. As fA increases around the global optimum, we conjecture that the function f(θ∗)− fA(θ∗) will form a plateau around the optimum, since it will approach the value f(θi)− fA(θi) of adjacent solutions θi. By Theorem E.6 we have a density descent algorithm within the plateau, pushing CMA-MAE to discover solutions on the frontier of the known hypervolume.
Finally, we remark that our conjecture implies that f − fA tends towards a constant function in the limit, resulting in a density descent algorithm across the elite hypervolume manifold as the number of generated solutions approaches infinity. We leave a formal proof of this conjecture for future work.
F THEORETICAL PROPERTIES OF CMA-MAEGA
In this section, we investigate how the theoretical properties of CMA-MAE apply to CMA-MAEGA. While many of the properties are nearly a direct mapping, we note that, while CMA-MAE is equivalent to the single-objective optimization algorithm CMA-ES for α = 0, there is no single-objective counterpart to CMA-MAEGA. To make the direct mapping easier, we introduce a counterpart: the gradient arborescence ascent algorithm.
The gradient arborescence ascent algorithm is similar to CMA-MEGA, but without an archive. Like CMA-MEGA, the algorithm assumes a differentiable objective f and differentiable measures m. However, the algorithm leverages the objective and measure function gradients only to improve the optimization of the objective f , rather than to find solutions that are diverse with respect to measures m. As with CMA-MEGA, the gradient arborescence algorithm branches in objective-measure space. However, the algorithm ranks solutions purely by the objective function f and adapts the coefficient distribution N(µ,Σ) towards the natural gradient of the objective f .
Next, we prove properties of CMA-MAEGA that directly follow from the properties of CMA-MAE.
Theorem F.1. The gradient arborescence ascent algorithm is equivalent to CMA-MAEGA when α = 0, if gradient arborescence ascent restarts from an archive elite.
Proof. We note that CMA-MAEGA and the gradient arborescence ascent algorithm differ only in how they rank solutions, and we note that the differences between CMA-MAE and CMA-ES mirror the differences between CMA-MAEGA and gradient arborescence ascent algorithm. So by directly adapting the proof of Theorem E.1, we complete our proof.
Theorem F.2. The CMA-MEGA algorithm is equivalent to CMA-MAEGA when α = 1 and minf is an arbitrarily large negative number.
Proof. We note that CMA-MAEGA and the CMA-MEGA algorithm differ only in how they rank solutions and how they update the archive A, and we note that the differences between CMA-MAE and CMA-ME mirror the differences between CMA-MAEGA and CMA-MEGA. So by directly adapting the proof of Theorem E.3, we complete our proof.
Theorem F.3. Let αi and αj be two archive learning rates for archives Ai and Aj such that 0 ≤ αi < αj ≤ 1. For two runs of CMA-MAEGA that generate the same sequence of m candidate solutions {S} = θ1,θ2, ...,θm, it follows that fAi(θ) ≤ fAj (θ) for all θ ∈ Rn.
Proof. We note that CMA-MAE and CMA-MAEGA update the archive A in exactly the same way. Therefore, the proof follows directly by adapting the proof of Theorem E.4 to CMA-MAEGA.
Next, we wish to show that CMA-MAEGA results in density descent in measure space. However, we need a counterpart to the approximate density descent algorithm we defined in Theorem E.6.
Consider an approximate density descending arborescence algorithm that is identical to CMA-MEGA, but differs by how solutions are ranked. Specifically, we assume that this algorithm maintains an occupancy count oe for each cell e, which is the number of times a solution was generated in that cell. The density descent algorithm ranks solutions based on the occupancy count of the cell that the solution maps to, where solutions that discover less frequently visited cells are ranked higher. The algorithm takes steps in search space Rn that minimize the approximate density function defined by the archive and adapts the coefficient distribution N(µ,Σ) towards coefficients that minimize the density function.

Theorem F.4. The CMA-MAEGA algorithm optimizing a constant objective function f(θ) = C for all θ ∈ Rn is equivalent to the approximate density descending arborescence algorithm, when 0 < α < 1 and minf < C.
Proof. The proof of Theorem E.6 relies only on how CMA-MAE updates the archive A and acceptance threshold te. The proof of this theorem follows directly by adapting the proof of Theorem E.6 to CMA-MAEGA.
G DERIVATION OF THE CONVERSION FORMULA FOR THE ARCHIVE LEARNING RATE
In this section, we derive the archive learning rate conversion formula α2 = 1 − (1 − α1)^r mentioned in Section 7 of the main paper, where r is the ratio between archive cell counts, and α1 and α2 are archive learning rates for two archives A1 and A2.
Given an archive learning rate α1 for A1, we want to derive an equivalent archive learning rate α2 for A2 that results in robust performance when CMA-MAE is run with either A1 or A2. A principled way to derive a conversion formula for α2 is to look for an invariance property that affects the performance of CMA-MAE and that holds when CMA-MAE generates solutions in archives A1 and A2.
Since CMA-MAE ranks solutions by f − fA, we wish for fA to increase at the same rate in the two archives. Since fA(θ) = te, where te is the cell that a solution θ maps to, we select the average value of the acceptance thresholds te over all cells in each archive as our invariant property.
We assume an arbitrary sequence of N solution additions θ1,θ2, ...,θN , evenly dispersed across the archive cells. We then specify te as a function that maps k cell additions to a value te in archive cell e.3 Equation 12 then defines the average value of te across the archive after N additions to an archive A with M cells.
(1/M) ∑_{i=1}^{M} te(N/M)   (12)
Then, Equation 13 defines the invariance we want to guarantee between archives A1 and A2. (Footnote 3: Here we abuse notation and view te as a function instead of a threshold, for simplicity and to highlight the connection to the threshold value te.)
(1/M1) ∑_{i=1}^{M1} te(N/M1) = (1/M2) ∑_{i=1}^{M2} te(N/M2)   (13) | 1. What is the focus and contribution of the paper on quality diversity algorithms?
2. What are the strengths of the proposed approach, particularly in its performance and robustness?
3. What are the weaknesses of the paper, especially regarding the incremental nature of the proposed method and the limited convincing experiments?
4. Do you have any concerns or suggestions for improving the experimental results and ablation studies?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a new quality diversity algorithm, CMA-MAE, that tries to address some limitations of the baseline algorithm CMA-ME. The authors introduce a learning rate α into the acceptance threshold, which encourages CMA-MAE to spend more time on the promising zones before transitioning to exploration. The performance of the proposed algorithm is evaluated in three QD benchmark domains, and the results show that the proposed algorithm outperforms state-of-the-art QD algorithms.
Strengths And Weaknesses
Strength
The paper is overall well written and easy to understand.
The experimental results show that the proposed method is robust and superior to other QD algorithms.
Weaknesses
The proposed method is very incremental. Compared with CMA-ME, it just adds the smoothing factor to update the acceptance threshold.
The experiments are not very convincing. The authors claimed that the modification allows CMA-MAE to spend more time on the promising zones before transitioning to exploration, and thus can make CMA-MAE do better on domains with objectives that are hard to optimize as well as domains with flat objective functions. However, in the experiments, in addition to the common benchmarks, the authors only tested the methods on a simple function with the flat property, which makes the results not very convincing. I suggest that the authors conduct experiments on more problems with the mentioned properties.
More ablation studies are needed. The performance of CMA-MAE depends not only on the setting of α but also on that of minf. What is the impact of the setting of minf on the performance?
There are some minor problems, for example, in the section on limitations and future work, the order of the two paragraphs may be wrong.
Clarity, Quality, Novelty And Reproducibility
The clarity, quality and reproducibility are good, but the novelty is very limited. |
ICLR | Title
$\ell_1$ Adversarial Robustness Certificates: a Randomized Smoothing Approach
Abstract
Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including ℓ0 and ℓ2, while for asymmetric norms like ℓ1, the existing techniques do not apply. By converting the likelihood ratio into a one-dimensional mixed random variable, we derive the first tight ℓ1 robustness certificate under isotropic Laplace distributions in the binary case. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in ℓ1 norm on CIFAR-10 and ImageNet.
1 INTRODUCTION
Researchers have produced a series of works on adversarial robustness, from both practical and theoretical perspectives (Zheng et al., 2016; Gouk et al., 2018). Among them, certified robustness is particularly valuable, since it withstands all attacks within a norm ball and has nice theoretical and practical consequences. However, most existing work cannot handle the case of general neural networks.
Deep networks are flexible models that are widely adopted in various applications. However, it has been shown that such models are vulnerable against adversaries (Szegedy et al., 2014). Concretely, an unnoticeable small perturbation on the input can cause a typical deep model to change predictions arbitrarily. The phenomenon raises concerns about the security of deep models, and hinders their deployment in decision-critical applications. Indeed, the certification of robustness is a pre-requisite when AI-generated decisions may have important consequences.
Certifying the robustness of a machine learning model is challenging, especially for modern deep learning models that are over-parameterized and effectively black-box. Hence, the existing approaches mainly rely on empirical demonstration against specific adversarial attack algorithms (Goodfellow et al., 2015; Madry et al., 2018; Finlay et al., 2019). However, this line of works can give a false sense of security. Indeed, successful defense against the existing attack algorithms does not guarantee actual robustness against any adversaries that may appear in the future.
Recently, the adversarial robustness community has shifted the focus towards establishing certificates that prove the robustness of deep learning models. The certificate can be either exact or conservative, so long as the certified region cannot exhibit any adversarial examples. Given the over-parameterized deep models and modern high-dimensional datasets, scalability becomes a key property for the certification algorithms, as many methods are computationally intractable.
Our work is based on the novel modeling scheme that generates ensembles of a fixed black-box classifier based on input randomization (Cohen et al., 2019). Under this framework, tight robustness certificates can be obtained with only the ensemble prediction values and randomization parameters. Given appropriate choices of distributions, the robustness guarantee can be derived for ℓ2 or ℓ0 norms (Cohen et al., 2019; Lee et al., 2019). The tightness simply implies that any point outside the certified region is an adversarial example in the worst case. However, the derivations of the previous results heavily rely on the fact that the target norm (ℓ2 or ℓ0) is symmetric, so analyzing any perturbation direction for attacking the model gives the same certification guarantee.
In contrast, the ℓ1 norm is asymmetric. That is, for a given ℓ1 ball centered at the origin, if we translate another ℓ1 ball, also centered at the origin, by a vector δ with fixed ‖δ‖1, then the overlapped region between the two ℓ1 balls may have different shapes and sizes depending on the direction of δ (see Figure 1). The characterization of this overlapped region is the key step for proving tight certificates, hence the existing techniques do not apply for the ℓ1 norm.
In this work, we derive a tight ℓ1 robustness guarantee under isotropic Laplace distributions. The Laplace distribution can be interpreted as an infinite mixture of uniform distributions over ℓ1-norm balls, which is a natural “conjugate” distribution for the ℓ1 norm. Due to the asymmetry, we first identify the tight robustness certificate for attacking the model in one particular direction, δ = (‖δ‖1, 0, · · · , 0). To show that other perturbation directions cannot lead to worse results, we convert the d-dimensional likelihood function into a one-dimensional function, where we apply relaxations for various δ and show that the worst-case result is bounded by the specific direction (‖δ‖1, 0, · · · , 0). Theoretically, our certificate is tight in the binary classification setting. In the multi-class classification setting, our certificate is always tighter than the previous certificate proposed by Lecuyer et al. (2019). The theoretical improvement always leads to superior empirical results on certifying the same model, which we demonstrate on CIFAR-10 and ImageNet with ResNet models. Moreover, the proposed robustness certificate on models smoothed by Laplace distributions also outperforms the same models trained and certified using Gaussian distributions (Cohen et al., 2019) in ℓ1 certified robustness, where the Gaussian-based robustness certificate is adapted from the ℓ2 norm.
2 RELATED WORK
Robustness of a model can be defined in various aspects. For example, Feynman-Kac Formalism can be used to improve robustness (Wang et al., 2018). In this paper, we focus on the classification setting, where the goal is to provide guarantee of a constant prediction among a small region specified via some metric. The robustness certificate can be either exact or conservative, so long as a constant prediction is guaranteed in the certified region. Note that the certification of a completely black-box model requires checking the prediction values at every point around the point of interest, which is clearly infeasible. A practical certification algorithm inevitably has to specify and leverage the functional structure of the classifier in use to reduce the required computation.
Exact certificates. The exact certificate of deep networks is typically derived for the networks with a piecewise linear activation function such as ReLU. Such networks have an equivalent mixed integer linear representation (Cheng et al., 2017; Lomuscio & Maganti, 2017; Dutta et al., 2017; Bunel et al., 2018). Hence, one may apply mixed integer linear programming to find the worst case adversary within any convex polyhedron such as an `1-ball or `∞-ball. Despite the elegant solution, the complexity is, in general, NP-hard and the algorithms are not scalable to large problems(Tjeng et al., 2017).
Conservative certificates. A conservative certificate can be more scalable than the exact methods, since one can trade-off the accuracy of certification with efficiency (Gouk et al., 2018; Tsuzuku et al., 2018; Cisse et al., 2017; Anil et al., 2018; Hein & Andriushchenko, 2017). For example, one can relax the search of the worst case adversary as a simpler optimization problem that only bounds the effect of such adversary. Alternatively, people also consider the robustness problem in a modular way, where the robustness guarantee can be derived iteratively for each layer in the deep networks by considering the feasible values for each hidden layer (Gowal et al., 2018; Weng et al., 2018; Zhang et al., 2018; Mirman et al., 2018; Singh et al., 2018). However, this line of works have not yet been demonstrated to be feasible to realistic networks in high dimensional problems like ImageNet.
Randomized smoothing. Randomized smoothing has been proved to be closely related to robustness. Although similar techniques were tried by Liu et al. (2018) and Cao & Gong (2017), no corresponding proofs were given; Li et al. (2018) and Cohen et al. (2019) proved certified robustness in the ℓ2 norm under isotropic Gaussian noise, and Lee et al. (2019) proved robustness for the ℓ0 norm. Lecuyer et al. (2019) use techniques from differential privacy to prove ℓ1 robustness under Gaussian and Laplace noise respectively, but the bounds are not tight. Li et al. (2018); Pinot et al. (2019) use a Rényi divergence framework without a tightness proof. Our results synthesize the ideas in (Cohen et al., 2019; Lee et al., 2019; Lecuyer et al., 2019; Li et al., 2018; Pinot et al., 2019) and prove the tight robustness radius in the binary classification setting.
3 PRELIMINARIES
Definition 1 (Laplace distribution) Given λ ∈ R+, d ∈ Z+, we use L(λ) to denote the Laplace distribution in dimension d with parameter λ. The p.d.f. of L(λ) is denoted as L(x;λ) ≜ (1/(2λ)^d) exp(−‖x‖1/λ).
As we will see in Lemma 3.1, in the smoothing analysis we are interested in the likelihood ratio of two random variables X = ε and Y = δ + ε (here ε ∼ L(λ) and δ ∈ Rd is a fixed vector). Specifically,

µY(x) / µX(x) = exp( −(1/λ)(‖x − δ‖1 − ‖x‖1) )
Therefore, the likelihood ratio between two d-dimensional random variables is controlled by a one-dimensional random variable T(x) ≜ ‖x − δ‖1 − ‖x‖1, where x ∼ L(λ). This transformation is crucial in our analysis, and it is easy to see that T(x) is a mixed random variable, since Px(T(x) = ‖δ‖1) > 0. In our analysis, we need to calculate the inverse of the c.d.f. of T(x). However, since T(x) is a mixed random variable, sometimes the inverse may not exist. See Figure 3 for an illustration, where the inverse of the probability 0.85 does not exist. To deal with this case, we have the following modified version of the Neyman-Pearson Lemma, with the proof in Appendix A.
Lemma 3.1 (Neyman-Pearson Lemma for mixed random variables). Let X ∼ L (λ) and Y ∼ L (λ) + δ. Let h : Rd → {0, 1} be any deterministic or random function. Given any β ∈ R, and S′ ⊆ { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 = β } :
1. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 > β } ∪ S′, and P(h(X) = 1) ≥ P(X ∈ S) then P(h(Y ) = 1) ≥ P(Y ∈ S)
2. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 < β } ∪S′, and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y ) = 1) ≤ P(Y ∈ S)
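The mixed nature of T(x) discussed above is easy to verify numerically; the sketch below samples Laplace noise and shows the point mass that T places at ‖δ‖1 (the dimension, λ, and δ are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 10, 1.0
delta = np.zeros(d)
delta[0] = 0.5                        # example perturbation with ||delta||_1 = 0.5

x = rng.laplace(scale=lam, size=(100_000, d))
T = np.abs(x - delta).sum(axis=1) - np.abs(x).sum(axis=1)

# T is a mixed random variable: a point mass sits at ||delta||_1.
print(np.isclose(T, np.abs(delta).sum()).mean())   # about 0.5 here, i.e., P(x_0 <= 0)
```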
4 MAIN RESULTS
In this paper, we apply the randomized smoothing technique (Cohen et al., 2019) for obtaining robustness certificates, which works as follows. Given an input x, we perturb it with ε, s.t. ε ∼ L(λ). Then instead of evaluating the robustness of the original function f(x), we evaluate g(x) ≜ arg maxc P(f(x + ε) = c), which is effectively the smoothed version of f(x).
4.1 ROBUSTNESS CERTIFICATES FOR GENERAL CASES
Our first theorem proves that for the smoothed classifier g, and a given input x, there always exists a robust radius R, such that any perturbation δ s.t. ‖δ‖1 ≤ R, does not alter the prediction of g(x).
Theorem 1 Let f : Rd → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg maxc P(f(x + ε) = c). Suppose PA, PB ∈ [0, 1] are such that

P(f(x + ε) = cA) ≥ PA ≥ PB ≥ max_{c ≠ cA} P(f(x + ε) = c)

Then g(x + δ) = g(x), ∀‖δ‖1 ≤ R, where

R = max{ (λ/2) log(PA/PB), −λ log(1 − PA + PB) }   (1)
Some Remarks:
1. When PA → 1 or PB → 0, we can get R → ∞. It is reasonable since the Laplace distribution is supported over Rd, PA → 1 is equivalent to f = cA almost everywhere.
2. Compared with Lecuyer et al. (2019), where R = (λ/2) log(PA/PB), our bound is better if

(1 − 2PA(1 − PA) − √(1 − 4PA(1 − PA))) / (2PA) ≤ PB ≤ (1 − 2PA(1 − PA) + √(1 − 4PA(1 − PA))) / (2PA).

See Figure 4 for an illustration, where we use "baseline" to denote the bound R = (λ/2) log(PA/PB).
Proof sketch: (The full proof is in Appendix B.) For an arbitrary classifier f, we can transform it into a randomly smoothed classifier g using the randomized smoothing technique, where g returns class cA with probability no less than PA, and class cB with probability no more than PB. Below we list the three main ideas used in our proof:
1. How to deal with an arbitrary f with PA and PB?
Following Cohen et al. (2019), we use Neyman-Pearson Lemma to transform the relation between P(f(X) = cA) and P(f(Y ) = cA) into the relation between P(X ∈ A) and P(Y ∈ A). From Lemma 3.1, Neyman-Pearson Lemma still holds for mixed random variables.
2. How to deal with the relation between X = ε and Y = ε + δ?

Inspired by Lecuyer et al. (2019), we use the DP-form inequality (P(Y ∈ A) ≤ e^β P(X ∈ A)) to deal with the relation between P(X ∈ A) and P(Y ∈ A). For the Laplace distribution, β = ‖δ‖1/λ.
3. Take complements to get tighter bound.
When PA or PB < 1/2, the above DP-form inequality gets tighter. Therefore, we analyze the complement Aᶜ when PA ≥ 1/2 to get a new bound, and compare it with the baseline expression. We derive this bound by the Neyman-Pearson Lemma in this work, but an alternative approach is using Rényi divergence (Li et al., 2018).
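Given valid bounds PA and PB, the certified radius of Eq. 1 is a direct computation; the sketch below uses arbitrary example values.

```python
import numpy as np

def certified_radius_l1(p_a, p_b, lam):
    """Theorem 1 (Eq. 1): certified l1 radius under Laplace(lam) smoothing."""
    r_ratio = 0.5 * lam * np.log(p_a / p_b)
    r_complement = -lam * np.log(1.0 - p_a + p_b)
    return max(r_ratio, r_complement)

# Binary case: with p_b = 1 - p_a, the second term reduces to -lam * log(2 * (1 - p_a)),
# which is exactly the radius given by Theorem 2 below.
print(certified_radius_l1(0.9, 0.05, lam=1.0))
```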
4.2 TIGHT ROBUSTNESS CERTIFICATES FOR BINARY CASE
Although we get improved result over Lecuyer et al. (2019), the bound in Theorem 1 is not tight since it considers the general case with multiple categories. In this section, we first present our result for binary classification (Theorem 2), which further improves over Theorem 1.
Theorem 2 (binary case) Let f : Rd → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg maxc P(f(x + ε) = c). Suppose there are only two classes cA and cB, and PA ∈ [1/2, 1] s.t. P(f(x + ε) = cA) ≥ PA. Then g(x + δ) = g(x), ∀‖δ‖1 ≤ R, for
R = −λ log[2(1− PA)] (2)
Sketch of the proof: (The full proof is in Appendix C.) Theorem 2 is a special binary case of Theorem 1. We can use a method similar to Theorem 1 to get the results. However, it is worth noting that in the binary case, our new improved bound in Theorem 1 always dominates the bound by Lecuyer et al. (2019). Moreover, our bound in Eqn. (2) is tight, as shown below.
Theorem 3 (tight bound in binary case) In the same setting as Theorem 2, assume PA + PB ≤ 1 and PA ≥ 1/2. For all R′ > −λ log[2(1 − PA)], there exist a base classifier f∗ and a perturbation δ∗ with g∗(x) = arg maxc P(f∗(x + ε) = c) and ‖δ∗‖1 = R′, s.t. g∗(x) ≠ g∗(x + δ∗).
Sketch of the proof: (The full proof is in Appendix C.) For Theorem 3, we prove that the bound in Theorem 2 is tight by calculating the results in the one-dimensional case, where δ = (‖δ‖1, 0, . . . , 0). By calculation, we show that when δ = (‖δ‖1, 0, . . . , 0),

P(Y ∈ B) = ∫_{−∞}^{‖δ‖1 + λ log[2PB]} (1/(2λ)) exp(−|x|/λ) dx = exp(‖δ‖1/λ) · PB  when ‖δ‖1 ≤ −λ log[2PB],  and  1 − (1/(4PB)) exp(−‖δ‖1/λ)  otherwise.
Therefore, when ‖δ‖₁ ≤ −λ · log[2P_B], the DP-form inequality is tight, and the worst-case δ appears in the one-dimensional case.
Figure 5 illustrates why the inequality is tight. When δ is small, the set B we selected for P(X ∈ B) satisfies T(x) = −‖δ‖₁ for every x ∈ B (red part). When P(Y ∈ B) is considered, the set is shifted to the left by δ. However, because δ is small, the shifted set still satisfies T(x) = −‖δ‖₁ for all of its elements (blue part). Therefore, the inequality is tight.
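The closed-form expression above is easy to check numerically, since P(Y ∈ B) reduces to a one-dimensional Laplace c.d.f. evaluation. The following sanity-check sketch is our own addition (it assumes SciPy and an arbitrary choice of P_B and λ purely for illustration):

import numpy as np
from scipy.stats import laplace

lam, p_b = 1.0, 0.2
for delta in [0.5, 1.0, 1.5]:                  # note: -lam*log(2*p_b) is about 0.916 here
    t = delta + lam * np.log(2.0 * p_b)        # upper integration limit
    p_y_in_b = laplace.cdf(t, loc=0.0, scale=lam)
    dp_bound = np.exp(delta / lam) * p_b       # DP-form bound exp(||delta||_1/lam) * P_B
    print(delta, p_y_in_b, dp_bound)           # the two agree while delta <= -lam*log(2*p_b)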
4.3 METHOD COMPARISON
We compared our method with Cohen et al.’s and Lecuyer et al.’s in binary case, see Table 1. We plot the curves in Figure 6. As we can see, under the same variance of each noise, our method can reach better robustness radius. Below we show simple derivations of the bounds in Table 1.
Robustness radius of Lecuyer et al. (2019)
Using the basic inequality from differential privacy, we have:
P(f(X) = c_A) ≤ exp(β) · P(f(Y) = c_A),
P(f(Y) = c_B) ≤ exp(β) · P(f(X) = c_B),
where β = ‖δ‖₁/λ. The above two inequalities show that to guarantee P(f(Y) = c_A) > P(f(Y) = c_B), it suffices to have
P(f(X) = c_A) > exp(2β) · P(f(X) = c_B).
Plugging in β = ‖δ‖₁/λ, this holds whenever ‖δ‖₁ ≤ (λ/2) · log(P_A/P_B). Furthermore, in the binary case we can plug in P_B = 1 − P_A and obtain the robustness radius R = (λ/2) · log(P_A/(1 − P_A)).
Robustness radius of Cohen et al. (2019)
Denote B_{p,r}(c) = {x : ‖x − c‖_p ≤ r}. Since B_{1,r}(c) ⊂ B_{2,r}(c), the radius in (Cohen et al., 2019) can be used directly in ℓ1 form, which gives σ · Φ⁻¹(P_A).
Moreover, B_{1,r+ε}(c) ⊄ B_{2,r}(c) for any ε > 0, and (Cohen et al., 2019) is an exact robustness guarantee; hence σ · Φ⁻¹(P_A) is the best ℓ1 radius that isotropic Gaussian randomized smoothing can obtain.
Finally, we prove that −λ · log[2(1 − P_A)] ≥ (λ/2) · log(P_A/(1 − P_A)). For simplicity, set P_A = p ≥ 0.5. It suffices to show that −λ · log[2(1 − p)] ≥ (λ/2) · log(p/(1 − p)). Exponentiating both sides, it suffices to show that 1/(2(1 − p)) ≥ √(p/(1 − p)), which reduces to p(1 − p) ≤ 1/4.
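To make the comparison in Table 1 concrete, the three binary-case radii can be evaluated side by side as functions of P_A. The sketch below is our own illustration (it assumes NumPy/SciPy and sets σ = √2·λ so that the two noise distributions have the same variance):

import numpy as np
from scipy.stats import norm

def radius_ours(p_a, lam):        # Theorem 2: -lam*log(2*(1 - p_a))
    return -lam * np.log(2.0 * (1.0 - p_a))

def radius_lecuyer(p_a, lam):     # (lam/2)*log(p_a/(1 - p_a))
    return 0.5 * lam * np.log(p_a / (1.0 - p_a))

def radius_cohen(p_a, sigma):     # sigma * Phi^{-1}(p_a), adapted to the l1 ball
    return sigma * norm.ppf(p_a)

lam = 1.0
sigma = np.sqrt(2.0) * lam        # same variance as the Laplace noise
for p_a in [0.6, 0.75, 0.9, 0.99]:
    print(p_a, radius_ours(p_a, lam), radius_lecuyer(p_a, lam), radius_cohen(p_a, sigma))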
5 EXPERIMENTS
5.1 IMPLEMENTATION DETAILS
Monte Carlo. Since we cannot compute the exact value of P_A, we use a Monte Carlo method to approximate it. More specifically, we draw multiple random samples from the Laplace distribution, query the base classifier on the noisy inputs, and estimate P_A from the resulting class frequencies; one option is to group the samples and obtain a non-parametric estimate (e.g., a one-sided confidence lower bound) of P_A.
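A minimal sketch of this Monte Carlo step is given below. It is our own illustration rather than the authors' released code; it assumes a PyTorch-style base classifier and uses a one-sided Clopper-Pearson bound, which is one standard non-parametric choice for the lower confidence bound on P_A:

import torch
from scipy.stats import beta

def estimate_pa_lower_bound(base_clf, x, lam, n=100_000, alpha=0.001, batch=1_000):
    # Estimate a (1 - alpha) lower confidence bound on P_A for a single input x.
    counts = {}
    remaining = n
    with torch.no_grad():
        while remaining > 0:
            b = min(batch, remaining)
            noise = torch.distributions.Laplace(0.0, lam).sample((b, *x.shape))
            preds = base_clf(x.unsqueeze(0) + noise).argmax(dim=1)
            for c in preds.tolist():
                counts[c] = counts.get(c, 0) + 1
            remaining -= b
    c_a = max(counts, key=counts.get)
    k = counts[c_a]
    p_a_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0   # Clopper-Pearson lower bound
    return c_a, p_a_lower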
In our experiments, we applied two different types of training, as described below.
Type1-Training: The first method is intuitive, and was applied in (Cohen et al., 2019). In the training process, we add into inputs:
inputs = inputs + noise
where the noise is sampled from isotropic Laplace distribution.
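In a typical implementation this simply means drawing fresh Laplace noise for every mini-batch. A minimal sketch (our own, assuming a PyTorch training loop with lam as the Laplace scale):

import torch

def add_training_noise(inputs, lam):
    # Type1-Training: perturb the batch with isotropic Laplace noise
    noise = torch.distributions.Laplace(0.0, lam).sample(inputs.shape)
    return inputs + noise

# inside the training loop (sketch):
#   noisy_inputs = add_training_noise(inputs, lam)
#   loss = criterion(model(noisy_inputs), labels)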
Type2-Training: The second method was recently proposed by Salman et al. (2019). The idea is to use adversarial noise samples instead of the raw noise samples in a neighborhood to train the base classifier. Each training sample can be decomposed to
inputs = inputs + noise + perturbation
where the noise comes from an isotropic Laplace distribution, and the perturbation is found approximately by the gradient of loss with respect to the input. Concretely, if we denote the loss as L and the input as x, the perturbation ∆ can be calculated by ∆ = a ∗ sign(∇xL(θ, x, y)), where a is a constant.
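The perturbation step described above can be sketched as a single sign-gradient update on top of the noisy input. This is our own simplified illustration of the formula ∆ = a · sign(∇_x L); the full procedure of Salman et al. (2019) is more involved, so this should not be read as their exact algorithm:

import torch

def type2_perturbation(model, criterion, noisy_inputs, labels, a):
    # Delta = a * sign(grad_x L(theta, x, y)), computed at the already-noised input
    noisy_inputs = noisy_inputs.clone().detach().requires_grad_(True)
    loss = criterion(model(noisy_inputs), labels)
    loss.backward()
    delta = a * noisy_inputs.grad.sign()
    return (noisy_inputs + delta).detach()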
Evaluation Index. In this paper, we use certified accuracy as the evaluation metric. The certified accuracy at radius r is the proportion of samples that are classified correctly and have certified robustness radius at least r. Specifically, for a set of n samples {x_i}, i = 1, 2, . . . , n, let R_i denote the certified robustness radius of x_i, and let c_i indicate whether x_i is classified correctly (c_i = 1 if correct, and c_i = 0 otherwise). For a given r, the certified accuracy is defined as α = Σ_{i=1}^{n} c_i · 1(R_i ≥ r) / n, where 1(·) is the indicator function.
However, as noted above, we cannot compute the exact robustness radius R, so we use the estimate R̂ in its place, which leads to an "approximate certified accuracy" α̂, computed as
α̂ = Σ_{i=1}^{n} c_i · 1(R̂_i ≥ r) / n    (3)
Cohen et al. (2019) demonstrate that when the significance level used for R̂ is small, the difference between these two quantities is negligible. In practice, we plot the approximate certified accuracy α̂ as a function of the radius r. From Eqn. (3), α̂ is non-increasing in r, and α̂ → 0 as r → ∞.
Hyperparameters. We set all hyperparameters following Cohen et al. (2019). Specifically, the significance level is 0.001, n₀ = 100 samples are used in the initial Monte Carlo step, and n = 100,000 samples are used in the estimation step. We test three noise levels (σ = 0.25, 0.50, 1.00) on both CIFAR-10 and ImageNet. Since Cohen et al. (2019) use Gaussian noise and we use Laplace noise, the two should have the same standard deviation for a fair comparison, which requires σ = √2 · λ.
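Given per-sample correctness flags and certified radii, the approximate certified-accuracy curve of Eqn. (3) is a simple thresholding operation. A sketch (our own illustration, assuming NumPy arrays; an abstained sample can simply be given c_i = 0):

import numpy as np

def certified_accuracy(correct, radii, rs):
    # correct: 0/1 array of c_i; radii: array of R_hat_i; rs: grid of radii r
    correct = np.asarray(correct, dtype=float)
    radii = np.asarray(radii, dtype=float)
    return np.array([np.mean(correct * (radii >= r)) for r in rs])

# Example usage:
#   rs = np.linspace(0.0, 2.0, 101)
#   curve = certified_accuracy(correct, radii, rs)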
5.2 EXPERIMENTAL RESULTS
Results on ImageNet and CIFAR-10. We applied randomized smoothing on CIFAR-10 (Krizhevsky (2009)) and ImageNet (Deng et al. (2009)). On each dataset, we trained several smoothed models with different standard deviations σ for the Laplace noise. To stay in line with Cohen et al.'s method and allow a direct comparison, we select σ = 0.25, 0.50, 1.00 on CIFAR-10 and ImageNet, with the corresponding parameter λ = σ/√2.
Figure 6 draws the certified accuracy achieved by smoothing with each sigma. For the ImageNet dataset, we only use the most basic training method (Type1 Training). For the CIFAR-10 data set, we use two training methods (Type 1 and Type 2 Training). We can see that the smaller sigma performs better when the radius is smaller. As the noise gets bigger, the accuracy becomes lower, but the robustness guarantee becomes higher. The dashed black line shows the empirical robust accuracy of an undefended classifier from Cohen et al. (2019).
Comparison with baseline.
Based on Table 1, we test our method on CIFAR-10 with the ResNet110 architecture (Type1 and Type2 training) and on ImageNet with the ResNet50 architecture (Type1 training), and compare against (Cohen et al., 2019) and (Lecuyer et al., 2019) under the same standard deviation σ. For base classifiers, ours and Lecuyer et al.'s share the same base classifier trained with Laplace noise, while Cohen et al.'s uses a base classifier trained with Gaussian noise.
6 CONCLUSION
In this paper, we combine the inequality from differential privacy and the classic Neyman-Pearson Lemma to resolve the challenging asymmetry of `1 metric and the mixed discrete-continuous property of the likelihood ratios under isotropic Laplace distributions. In addition, by comparing the high-dimensional case with a special edge case, we prove the tight `1 robustness guarantee for binary classification problems, and obtain the state-of-the-art certified accuracy in large scale experiments.
The establishment of `1 certificate via Laplace distributions and the prior result of `2 certificate via Gaussian distributions may be extended to a generic theorem for a general `p norm robustness certificate via the associated realization of the generalized Gaussian distribution, where the aforementioned results are special cases of the general scheme. The introduction of the mixed random variable analysis and `p geometry analysis may serve as a valuable extension of existing works towards such general goal.
A PROOF OF LEMMA 3.1
In this section, we prove that the Neyman-Pearson Lemma holds for mixed random variables. WLOG let x = 0, X ∼ L(λ), and Y ∼ L(λ) + δ. We first restate the lemma, which plays an important role in our proof.
Lemma 3.1 (restated). Let X ∼ L(λ) and Y ∼ L(λ) + δ. Let h : R^d → {0, 1} be any deterministic or random function. Given any β ∈ R and S′ ⊆ { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ = β }:
1. If S = { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ > β } ∪ S′ and P(h(X) = 1) ≥ P(X ∈ S), then P(h(Y) = 1) ≥ P(Y ∈ S).
2. If S = { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ < β } ∪ S′ and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y) = 1) ≤ P(Y ∈ S).
Proof of Lemma 3.1. First, notice that the event X ∈ S is governed by the mixed random variable T(X) = ‖X − δ‖₁ − ‖X‖₁. We want to show that, as long as we can choose an S′ satisfying P(X ∈ S) ≤ P(h(X) = 1), the Neyman-Pearson Lemma still holds.
Let us first recall what happens in the proof of the Neyman-Pearson Lemma. X and Y are continuous random variables, but the events X ∈ S and Y ∈ S behave like mixed continuous-discrete events. We can therefore choose a suitable S′ for X and Y. We prove case 1; the other case follows by a similar argument.
P(h(Y) = 1) − P(Y ∈ S)
= ∫_{R^d} h(1|z) μ_Y(z) dz − ∫_S μ_Y(z) dz
= [ ∫_{S^c} h(1|z) μ_Y(z) dz + ∫_S h(1|z) μ_Y(z) dz ] − [ ∫_S h(1|z) μ_Y(z) dz + ∫_S h(0|z) μ_Y(z) dz ]
= ∫_{S^c} h(1|z) μ_Y(z) dz − ∫_S h(0|z) μ_Y(z) dz
≥ t · ( ∫_{S^c} h(1|z) μ_X(z) dz − ∫_S h(0|z) μ_X(z) dz )
= t · ( [ ∫_{S^c} h(1|z) μ_X(z) dz + ∫_S h(1|z) μ_X(z) dz ] − [ ∫_S h(0|z) μ_X(z) dz + ∫_S h(1|z) μ_X(z) dz ] )
= t · ( P(h(X) = 1) − P(X ∈ S) )
≥ 0    (4)
The first inequality holds by the construction of the (mixed) set S: if z ∈ S, then μ_Y(z)/μ_X(z) ≤ t, and if z ∈ S^c, then μ_Y(z)/μ_X(z) ≥ t. Compared with a purely continuous set, the only difference lies in how the boundary (where equality holds) is treated.
It should be noted that P(X ∈ S) and P(Y ∈ S) must be kept consistent, meaning they must be defined with the same S′; in the derivation above, both probabilities indeed use the same set S′ in Eqn. (4).
Next, we will plug in the condition that X and Y are isotropic Laplaces.
Then we just need to prove that
{ z ∈ R^d : μ_Y(z)/μ_X(z) ≤ t }  ⟺  { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ ≥ β }.
When X and Y are isotropic Laplace random variables, the likelihood ratio is
μ_Y(z)/μ_X(z) = exp(−‖z − δ‖₁/λ) / exp(−‖z‖₁/λ) = exp( −(‖z − δ‖₁ − ‖z‖₁)/λ ).
By choosing β = −λ · log(t), we derive that
‖z − δ‖₁ − ‖z‖₁ ≥ β  ⟺  μ_Y(z)/μ_X(z) ≤ t,
‖z − δ‖₁ − ‖z‖₁ ≤ β  ⟺  μ_Y(z)/μ_X(z) ≥ t.
B PROOF OF THEOREM 1
Theorem 1 (restated) Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x+ε) = c). Suppose P_A, P_B ∈ [0, 1] are such that
P(f(x+ε) = c_A) ≥ P_A ≥ P_B ≥ max_{c ≠ c_A} P(f(x+ε) = c).
Then g(x+δ) = g(x) for all ‖δ‖₁ ≤ R, where
R = max{ (λ/2) · log(P_A/P_B), −λ · log(1 − P_A + P_B) }    (5)
Proof of Theorem 1. Denote T(x) = ‖x − δ‖₁ − ‖x‖₁. Using the triangle inequality, we can bound T(x):
−‖δ‖₁ ≤ T(x) ≤ ‖δ‖₁    (6)
Pick β₁, β₂ such that there exist A′ ⊆ {z : T(z) = β₁} and B′ ⊆ {z : T(z) = β₂} with
P(X ∈ {z : T(z) > β₁} ∪ A′) = P_A ≤ P(f(X) = c_A),
P(X ∈ {z : T(z) < β₂} ∪ B′) = P_B ≥ P(f(X) = c_B).
Define
A := {z : T(z) > β₁} ∪ A′,  B := {z : T(z) < β₂} ∪ B′.
Thus, applying Lemma 3.1, we have
P(Y ∈ A) ≤ P(f(Y) = c_A),  P(Y ∈ B) ≥ P(f(Y) = c_B)    (7)
Now consider P(Y ∈ A) and P(Y ∈ B):
P(Y ∈ A) = ∫_A (2λ)^{-d} exp(−‖x − δ‖₁/λ) dx
         = ∫_A (2λ)^{-d} exp(−‖x‖₁/λ) · exp(−T(x)/λ) dx
         ≥ exp(−‖δ‖₁/λ) · ∫_A (2λ)^{-d} exp(−‖x‖₁/λ) dx = exp(−‖δ‖₁/λ) · P_A    (8)
where the inequality follows from Eqn. (6). Similarly, we can get
P(Y ∈ B) = ∫_B (2λ)^{-d} exp(−‖x − δ‖₁/λ) dx
         = ∫_B (2λ)^{-d} exp(−‖x‖₁/λ) · exp(−T(x)/λ) dx
         ≤ exp(‖δ‖₁/λ) · ∫_B (2λ)^{-d} exp(−‖x‖₁/λ) dx = exp(‖δ‖₁/λ) · P_B    (9)
First, we show that robustness is guaranteed when ‖δ‖₁ ≤ (λ/2) · log(P_A/P_B). In this case, by Eqns. (7), (8), (9), we have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ P(Y ∈ B) ≥ P(f(Y) = c_B).
Next, we show that robustness is also guaranteed when ‖δ‖₁ ≤ −λ · log(1 − P_A + P_B). From Eqn. (9), P(Y ∈ B) ≤ exp(‖δ‖₁/λ) · P_B. Moreover, applying the bound of Eqn. (9) to the set A^c gives P(Y ∈ A) ≥ 1 − exp(‖δ‖₁/λ) · (1 − P_A). Hence, if ‖δ‖₁ ≤ −λ · log(1 − P_A + P_B), we again have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ P(Y ∈ B) ≥ P(f(Y) = c_B).
Moreover, by a simple algebraic manipulation, −λ · log(1 − P_A + P_B) ≥ (λ/2) · log(P_A/P_B) holds if and only if
(1 − P_A + P_B)² ≤ P_B/P_A, i.e., P_A · P_B² + (2P_A(1 − P_A) − 1) · P_B + P_A(1 − P_A)² ≤ 0,
a quadratic in P_B whose roots give the condition
(1 − 2P_A(1−P_A) − √(1 − 4P_A(1−P_A))) / (2P_A) ≤ P_B ≤ (1 − 2P_A(1−P_A) + √(1 − 4P_A(1−P_A))) / (2P_A).
The proof for Theorem 1 is finished.
C PROOF OF THEOREM 2 AND THEOREM 3
Theorem 2 (restated, binary case) Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x+ε) = c). Suppose there are only two classes c_A and c_B, and P_A ∈ [1/2, 1] is such that P(f(x+ε) = c_A) ≥ P_A. Then g(x+δ) = g(x) for all ‖δ‖₁ ≤ R, where
R = −λ · log[2(1 − P_A)]    (10)
Proof of Theorem 2. The argument is similar to the proof of Theorem 1. Pick β₃ such that there exists B′ ⊆ {z : T(z) = β₃} with
P(X ∈ {z : T(z) < β₃} ∪ B′) = P_B = P(f(X) = c_B).
Define
S := {z : T(z) < β₃} ∪ B′,
so that we also have P(X ∉ S) = P(f(X) = c_A) ≥ P_A. Plugging into Lemma 3.1, we get
P(Y ∉ S) ≤ P(f(Y) = c_A),  P(Y ∈ S) ≥ P(f(Y) = c_B).
Using an argument similar to Eqn. (9), we obtain
P(Y ∈ S) ≤ exp(‖δ‖₁/λ) · P_B.
Since P_B = P(f(X) = c_B) = 1 − P(f(X) = c_A) ≤ 1 − P_A, it holds for ‖δ‖₁ ≤ R = −λ · log[2(1 − P_A)] that
P(Y ∈ S) ≤ exp(‖δ‖₁/λ) · P_B ≤ (1/(2(1 − P_A))) · (1 − P_A) = 1/2.
That is to say, P(f(Y) = c_A) ≥ P(Y ∉ S) ≥ 1/2 ≥ P(Y ∈ S) ≥ P(f(Y) = c_B). The proof of Theorem 2 is finished.
Theorem 3 (restated, tight bound in binary case) In the same setting as Theorem 2, assume P_A + P_B ≤ 1 and P_A ≥ 1/2. For any R′ > −λ · log[2(1 − P_A)], there exist a base classifier f* and a perturbation δ* with g*(x) = arg max_c P(f*(x+ε) = c) and ‖δ*‖₁ = R′, such that g*(x) ≠ g*(x + δ*).
Proof of Theorem 3. We first set δ = (‖δ‖₁, 0, . . . , 0) and, for simplicity, write δ for the scalar ‖δ‖₁. Define
A := { z : |z₁ − δ| − |z₁| ≥ max{ δ + 2λ · log[2(1 − P_A)], −δ } },
noting that for this choice of perturbation, ‖z − δ‖₁ − ‖z‖₁ = |z₁ − δ| − |z₁|. Then we can calculate
P(X ∈ A) = P_x( |x − δ| − |x| ≥ max{ δ + 2λ · log[2(1 − P_A)], −δ } )
         = ∫_{−∞}^{−λ log[2(1−P_A)]} (1/(2λ)) · exp(−|x|/λ) dx
         = 1 − ∫_{−λ log[2(1−P_A)]}^{∞} (1/(2λ)) · exp(−x/λ) dx
         = P_A    (11)
where x ∼ (1/(2λ)) · exp(−|x|/λ) denotes the first coordinate of X. Notice that if δ + 2λ · log[2(1 − P_A)] ≤ −δ, the same equality can still be achieved by choosing a suitable S′. With Eqn. (11), we have
P(X ∈ A) = P_A ≤ P(f(X) = c_A)    (12)
Thus, plugging Eqn. (12) into Lemma 3.1, we have
P(Y ∈ A) ≤ P(f(Y) = c_A)    (13)
Also, since Y = X + δ, it can be derived that
P(Y ∈ A) = ∫_{−∞}^{−λ log[2(1−P_A)] − δ} (1/(2λ)) · exp(−|x|/λ) dx    (14)
Here we use the consistency of the events X ∈ A and Y ∈ A: since Y is a shifted copy of X, the integration limit is translated by the same amount. Therefore, if ‖δ‖₁ = δ ≤ −λ · log[2(1 − P_A)], by Eqn. (13) and Eqn. (14) we have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ 1/2.
This means that the bound we obtain in the binary case is tight, and the worst-case δ appears when δ = (‖δ‖₁, 0, . . . , 0). Furthermore, if we enlarge δ slightly beyond this radius, there exists a base classifier whose smoothed prediction changes, i.e., the robustness is destroyed.
The proof for Theorem 3 is finished.
D WHY LAPLACE NOISE INSTEAD OF GAUSSIAN
In this section, we theoretically analyze the certification capabilities of Gaussian and Laplace noise. We will show that, given the same base classifier f, the parameter λ of the Laplace distribution is less sensitive than the parameter σ of the Gaussian distribution. Consider a base classifier f with
f(x) = c_A if |x| ≤ 1, and f(x) = c_B otherwise,
and two randomly smoothed functions
g₁(x) = arg max_c P(f(x+ε) = c),  ε ∼ L(0, λ),
g₂(x) = arg max_c P(f(x+ε) = c),  ε ∼ N(0, σ²).
We aim to show that Laplace noise better preserves the original prediction than Gaussian noise. Formally, we compare their Rectified Optional Parameter Space (ROPS), defined as Λ = { √2 · λ : g₁(x; λ) = f(x) } and Σ = { σ : g₂(x; σ) = f(x) }. The rectifying factor √2 accounts for the fact that σ = √2 · λ yields the same variance. Essentially, the ROPS is the feasible region in which the smoothing distribution does not negatively impact the base classifier, and thus measures the sensitivity of the smoothing distribution (the larger, the better).
First, we compare the predictions of the two smoothed classifiers at the point (x, f(x)) = (0, c_A). We have
g₁(0) = c_A  ⟺  P(f(0+ε) = c_A) ≥ 1/2  ⟺  P(|ε| ≤ 1) = 1 − exp(−1/λ) ≥ 1/2  ⟺  λ ≤ 1/log 2,
g₂(0) = c_A  ⟺  P(f(0+ε) = c_A) ≥ 1/2  ⟺  P(|ε| ≤ 1) = 2Φ(1/σ) − 1 ≥ 1/2  ⟺  σ ≤ 1/Φ⁻¹(3/4).
Since √2/log 2 > 1/Φ⁻¹(3/4), Laplace noise has a larger ROPS than Gaussian noise at the point x = 0.
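The two thresholds can be checked numerically. The sketch below is our own addition (assuming SciPy); it evaluates the ROPS endpoints implied by the conditions above:

import numpy as np
from scipy.stats import norm

lam_max = 1.0 / np.log(2.0)               # largest lam with 1 - exp(-1/lam) >= 1/2
sigma_max = 1.0 / norm.ppf(0.75)          # largest sigma with 2*Phi(1/sigma) - 1 >= 1/2

# rectified Laplace endpoint (sqrt(2)*lam has the same standard deviation as sigma)
print(np.sqrt(2.0) * lam_max, sigma_max)  # about 2.04 vs 1.48, so the Laplace ROPS is larger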
The analysis can be extended in two further cases. First, if x ≠ 0, what is the ROPS that still yields the desired result g(x) = f(x)? We show in Fig. 10 that Laplace noise gives a larger ROPS. Second, if we fix x and additionally fix a desired certified radius, what is the corresponding ROPS? We show in Fig. 11 that Laplace noise again gives a larger ROPS.
We empirically validate this finding with ResNet110 on CIFAR-10. The resulting smoothed model has 24.8% clean accuracy under Laplace noise and 23.7% clean accuracy under Gaussian noise (with the same variance as the Laplace noise). Here the accuracy is computed with respect to the predictions of the base classifier rather than the labels (to illustrate how the smoothing impacts the predictions).

1. What is the focus of the paper regarding random smoothing techniques?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the significance and novelty of the paper's contributions?
4. Are there any suggestions for improving the paper or expanding its scope?
5. What is the final decision on the paper's acceptance or rejection?

Review
The paper provides a random smoothing technique for L1 perturbation and proves the tightness results for binary classification case. Overall, there are some new results in this paper -- establishing a new certificate bounds for L1 perturbation model. However, I have several concerns about whether this contribution is significant enough:
Random smoothing has been studied extensively in recent years, and the proof technique in this paper is not so different from previous papers (Cohen et al., Li et al.). Also, there were L0 perturbation bounds proposed by (Lee et al.). Therefore, although I agree that a tighter certified bound compared to (Lecuyer et al.) is new, the paper seems a bit incremental. It would be more interesting to see if the proposed technique/theorem can be used for a wider range of norms.
Also, it may be more interesting to add some discussions about why L1 perturbation is important for image classification (is it more human-imperceptible?)
=======
I have checked the rebuttal and other reviewers' comments. Although there are interesting components in this paper, I do agree that the paper is incremental given that many random smoothing methods have been proposed recently for L2, L_infty norms. Therefore I think this is a borderline case and will be ok with rejection. |
ICLR | Title
$\ell_1$ Adversarial Robustness Certificates: a Randomized Smoothing Approach
Abstract
Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including `0 and `2, while for asymmetric norms like `1, the existing techniques do not apply. By converting the likelihood ratio into a one dimensional mixed random variable, we derive the first tight `1 robustness certificate under isotropic Laplace distributions in binary case. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in `1 norm on CIFAR-10 and ImageNet.
1 INTRODUCTION
Researchers have produced a series of works on adversarial robustness, from both practical and theoretical perspectives (Zheng et al., 2016; Gouk et al., 2018). Among them, certifiable robustness is particularly valuable, since it withstands all attacks within a norm ball and has appealing theoretical and practical consequences. However, most existing work cannot handle general neural networks.
Deep networks are flexible models that are widely adopted in various applications. However, it has been shown that such models are vulnerable against adversary (Szegedy et al., 2014). Concretely, an unnoticeable small perturbation on the input can cause a typical deep model to change predictions arbitrarily. The phenomenon raises the concerns of the security of deep models, and hinders its deployment in decision-critical applications. Indeed, the certification of robustness is a pre-requisite when AI-generated decisions may have important consequences.
Certifying the robustness of a machine learning model is challenging, especially for modern deep learning models that are over-parameterized and effectively black-box. Hence, the existing approaches mainly rely on empirical demonstration against specific adversarial attack algorithms (Goodfellow et al., 2015; Madry et al., 2018; Finlay et al., 2019). However, this line of works can give a false sense of security. Indeed, successful defense against the existing attack algorithms does not guarantee actual robustness against any adversaries that may appear in the future.
Recently, the adversarial robustness community has shifted the focus towards establishing certificates that prove the robustness of deep learning models. The certificate can be either exact or conservative, so long as the certified region cannot exhibit any adversarial examples. Given the over-parameterized deep models and modern high-dimensional datasets, scalability becomes a key property for the certification algorithms, as many methods are computationally intractable.
Our work is based on the novel modeling scheme that generates ensembles of a fixed black-box classifier based on input randomization (Cohen et al., 2019). Under this framework, tight robustness certificates can be obtained with only the ensemble prediction values and randomization parameters. Given appropriate choices of distributions, the robustness guarantee can be derived for `2 or `0 norms (Cohen et al., 2019; Lee et al., 2019). The tightness simply implies that any point outside the certified region is an adversarial example in the worst case. However, the derivations of the previous results heavily relies on the fact that the target norm (`2 or `0) is symmetric, therefore analyzing any perturbation direction for attacking the model gives the same certification guarantee.
In contrast, `1 norm is asymmetric. That is, for a given `1 ball centered at the origin, if we move another `1 ball also from the origin by a distance δ, where ‖δ‖1 is fixed, then the overlapped region
between the two `1 balls may have different shapes and sizes (See Figure 1). The characterization of this overlapped region is the key step for proving tight certificates, hence the existing techniques do not apply for `1 norm.
In this work, we derive a tight `1 robustness guarantee under isotropic Laplace distributions. The Laplace distribution can be interpreted as an infinite mixture of uniform distributions over `1-norm balls, which is a natural “conjugate” distribution for `1 norm. Due to asymmetry, we first identified the tight robustness certificate for attacking the model in one particular direction, δ = (‖δ‖1, 0, · · · , 0). To show that other perturbation directions cannot lead to worse results, we convert the d dimensional likelihood function into an one dimensional function, where we apply relaxation for various δ and show that the worst case result is bounded by the specific direction (‖δ‖1, 0, · · · , 0). Theoretically, our certificate is tight in the binary classification setting. In the multi-class classification setting, our certificate is always tighter than the previous certificate proposed by Lecuyer et al. (2019). The theoretical improvement always leads to superior empirical results on certifying the same model, where we demonstrate the result on CIFAR-10 and ImageNet with ResNet models. Moreover, the proposed robustness certificate on models smoothed by Laplace distributions also outperforms the same models trained and certified using Gaussian distributions (Cohen et al., 2019) in `1 certified robustness, where the Gaussian-based robustness certificate is adapted from `2 norm.
2 RELATED WORK
Robustness of a model can be defined in various aspects. For example, Feynman-Kac Formalism can be used to improve robustness (Wang et al., 2018). In this paper, we focus on the classification setting, where the goal is to provide guarantee of a constant prediction among a small region specified via some metric. The robustness certificate can be either exact or conservative, so long as a constant prediction is guaranteed in the certified region. Note that the certification of a completely black-box model requires checking the prediction values at every point around the point of interest, which is clearly infeasible. A practical certification algorithm inevitably has to specify and leverage the functional structure of the classifier in use to reduce the required computation.
Exact certificates. The exact certificate of deep networks is typically derived for the networks with a piecewise linear activation function such as ReLU. Such networks have an equivalent mixed integer linear representation (Cheng et al., 2017; Lomuscio & Maganti, 2017; Dutta et al., 2017; Bunel et al., 2018). Hence, one may apply mixed integer linear programming to find the worst case adversary within any convex polyhedron such as an `1-ball or `∞-ball. Despite the elegant solution, the complexity is, in general, NP-hard and the algorithms are not scalable to large problems(Tjeng et al., 2017).
Conservative certificates. A conservative certificate can be more scalable than the exact methods, since one can trade-off the accuracy of certification with efficiency (Gouk et al., 2018; Tsuzuku et al., 2018; Cisse et al., 2017; Anil et al., 2018; Hein & Andriushchenko, 2017). For example, one can relax the search of the worst case adversary as a simpler optimization problem that only bounds the effect of such adversary. Alternatively, people also consider the robustness problem in a modular way, where the robustness guarantee can be derived iteratively for each layer in the deep networks by considering the feasible values for each hidden layer (Gowal et al., 2018; Weng et al., 2018; Zhang et al., 2018; Mirman et al., 2018; Singh et al., 2018). However, this line of works have not yet been demonstrated to be feasible to realistic networks in high dimensional problems like ImageNet.
Randomized smoothing. Randomized smoothing has been proved to be closely related to robustness. Although similar techniques have been tried by (Liu et al., 2018; Cao & Gong, 2017), no corresponding proofs have been given; Li et al. (2018) and Cohen et al. (2019) have proved certified robustness of `2 norm under isotropic Gaussian noise, and Lee et al. (2019) proved robustness for `0 form. Lecuyer et al. (2019) use techniques from differential privacy to prove `1 robustness under Gaussian and Laplace noise respectively, but the bounds are not tight. Li et al. (2018); Pinot et al. (2019) use Rényi divergence framework without tightness proof. Our results synthesize the ideas in (Cohen et al., 2019; Lee et al., 2019; Lecuyer et al., 2019; Li et al., 2018; Pinot et al., 2019) and prove the tight robustness radius under the binary classification setting.
3 PRELIMINARIES
Definition 1 (Laplace distribution) Given λ ∈ R₊ and d ∈ Z₊, we use L(λ) to denote the Laplace distribution in dimension d with parameter λ. The p.d.f. of L(λ) is denoted as L(x; λ) ≜ (1/(2λ)^d) · exp(−‖x‖₁/λ).
As we will see in Lemma 3.1, in the smoothing analysis we are interested in the likelihood ratio of the two random variables X = ε and Y = δ + ε (here ε ∼ L(λ) and δ ∈ R^d is a fixed vector). Specifically,
μ_Y(x)/μ_X(x) = exp( −(‖x − δ‖₁ − ‖x‖₁)/λ ).
Therefore, the likelihood ratio between two d-dimensional random variables is controlled by the one-dimensional random variable T(x) ≜ ‖x − δ‖₁ − ‖x‖₁, where x ∼ L(λ). This transformation is crucial in our analysis, and it is easy to see that T(x) is a mixed random variable, since P_x(T(x) = ‖δ‖₁) > 0. In our analysis, we need to invert the c.d.f. of T(x); however, since T(x) is a mixed random variable, the inverse may not always exist. See Figure 3 for an illustration, where the inverse of the probability 0.85 does not exist. To deal with this case, we use the following modified version of the Neyman-Pearson Lemma, whose proof is in Appendix A.
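The atom of T can also be seen empirically: for δ = (‖δ‖₁, 0, . . . , 0), all coordinates except the first cancel, and T(x) equals ‖δ‖₁ whenever x₁ ≤ 0. A small sampling sketch (our own illustration, assuming NumPy):

import numpy as np

d, lam = 10, 1.0
delta = np.zeros(d)
delta[0] = 0.5                                   # ||delta||_1 = 0.5

x = np.random.laplace(scale=lam, size=(100_000, d))
t = np.abs(x - delta).sum(axis=1) - np.abs(x).sum(axis=1)
print(np.mean(np.isclose(t, delta[0])))          # roughly 0.5: positive mass at ||delta||_1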
Lemma 3.1 (Neyman-Pearson Lemma for mixed random variables). Let X ∼ L(λ) and Y ∼ L(λ) + δ. Let h : R^d → {0, 1} be any deterministic or random function. Given any β ∈ R and S′ ⊆ { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ = β }:
1. If S = { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ > β } ∪ S′ and P(h(X) = 1) ≥ P(X ∈ S), then P(h(Y) = 1) ≥ P(Y ∈ S).
2. If S = { z ∈ R^d : ‖z − δ‖₁ − ‖z‖₁ < β } ∪ S′ and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y) = 1) ≤ P(Y ∈ S).
4 MAIN RESULTS
In this paper, we apply the randomized smoothing technique (Cohen et al., 2019) to obtain robustness certificates, which works as follows. Given an input x, we perturb it with ε, where ε ∼ L(λ). Then, instead of evaluating the robustness of the original function f(x), we evaluate g(x) ≜ arg max_c P(f(x+ε) = c), which is effectively the smoothed version of f.
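In practice g(x) is evaluated by sampling. A minimal prediction sketch (our own illustration, assuming a PyTorch-style base classifier f and omitting the abstention rule used in the full certification procedure):

import torch

def smoothed_predict(f, x, lam, n=100, num_classes=10):
    # Approximate g(x) = argmax_c P(f(x + eps) = c), eps ~ Laplace(0, lam)
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        noise = torch.distributions.Laplace(0.0, lam).sample((n, *x.shape))
        preds = f(x.unsqueeze(0) + noise).argmax(dim=1)
        for c in preds.tolist():
            counts[c] += 1
    return int(counts.argmax())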
1. What are the main contributions and novelties of the paper regarding robustness certificates and smoothed deep networks?
2. What are the strengths and weaknesses of the paper's experimental results on certified robustness?
3. Do you have any concerns or suggestions regarding the paper's claims, assumptions, or theoretical analyses?
4. How does the reviewer assess the significance and impact of the paper compared to prior works, such as the work by Wang et al.?
5. Are there any potential limitations or areas for improvement in the paper's approach or methodology?

Review
In this paper, the authors derive a tight ell_1 robustness certificate (ell_1 being an asymmetric norm in the sense used here) under isotropic Laplace distributions. Experimentally, the authors show that deep networks smoothed by Laplace distributions yield state-of-the-art certified robustness in the ell_1 norm on CIFAR-10 and ImageNet. To find the ell_1 certificate, the authors first identify the tight robustness certificate for attacking the model in one particular direction, say the first coordinate direction. To show that other perturbation directions cannot lead to a worse result, the authors convert the d-dimensional likelihood function into a one-dimensional function, apply a relaxation for different perturbations, and show that the worst-case result is attained in the previously identified direction. However, I have the following concerns about this work:
1. Theoretically, the authors only show that the certificate is tight for binary classification. I would suggest the authors adjust the corresponding claim in the abstract.
2. What is M on page 3 which is used without definition after definition 1?
3. Can you give a concrete continuous probability distribution that leads to the scenario in Fig.~3?
4. Can you extend the analysis to a multi-class classification scenario?
5. Besides randomized smoothing on the input images, Wang et al. recently showed that randomizing the deep nets themselves can also improve robustness, and they gave it a nice theoretical interpretation. Here is the reference: Bao Wang, Binjie Yuan, Zuoqiang Shi, Stanley J. Osher. ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies, arXiv:1811.10745, NeurIPS, 2019.
Overall, since this work is a straightforward integration of some existing work, I think this paper lacks novelty. Please address the above questions in the rebuttal.
ICLR | Title
$\ell_1$ Adversarial Robustness Certificates: a Randomized Smoothing Approach
Abstract
Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including `0 and `2, while for asymmetric norms like `1, the existing techniques do not apply. By converting the likelihood ratio into a one dimensional mixed random variable, we derive the first tight `1 robustness certificate under isotropic Laplace distributions in binary case. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in `1 norm on CIFAR-10 and ImageNet.
1 INTRODUCTION
have done a series of nice works in practical sights or theoretical sights (Zheng et al., 2016; Gouk et al., 2018). Among them, certifiably robustness is valuable, since it can withstand all attacks within a norm ball and has a nice theoretical and practical outcome. However, most work cannot deal with the case for general neural networks.
Deep networks are flexible models that are widely adopted in various applications. However, it has been shown that such models are vulnerable against adversary (Szegedy et al., 2014). Concretely, an unnoticeable small perturbation on the input can cause a typical deep model to change predictions arbitrarily. The phenomenon raises the concerns of the security of deep models, and hinders its deployment in decision-critical applications. Indeed, the certification of robustness is a pre-requisite when AI-generated decisions may have important consequences.
Certifying the robustness of a machine learning model is challenging, especially for modern deep learning models that are over-parameterized and effectively black-box. Hence, the existing approaches mainly rely on empirical demonstration against specific adversarial attack algorithms (Goodfellow et al., 2015; Madry et al., 2018; Finlay et al., 2019). However, this line of works can give a false sense of security. Indeed, successful defense against the existing attack algorithms does not guarantee actual robustness against any adversaries that may appear in the future.
Recently, the adversarial robustness community has shifted the focus towards establishing certificates that prove the robustness of deep learning models. The certificate can be either exact or conservative, so long as the certified region cannot exhibit any adversarial examples. Given the over-parameterized deep models and modern high-dimensional datasets, scalability becomes a key property for the certification algorithms, as many methods are computationally intractable.
Our work is based on the novel modeling scheme that generates ensembles of a fixed black-box classifier based on input randomization (Cohen et al., 2019). Under this framework, tight robustness certificates can be obtained with only the ensemble prediction values and randomization parameters. Given appropriate choices of distributions, the robustness guarantee can be derived for `2 or `0 norms (Cohen et al., 2019; Lee et al., 2019). The tightness simply implies that any point outside the certified region is an adversarial example in the worst case. However, the derivations of the previous results heavily relies on the fact that the target norm (`2 or `0) is symmetric, therefore analyzing any perturbation direction for attacking the model gives the same certification guarantee.
In contrast, `1 norm is asymmetric. That is, for a given `1 ball centered at the origin, if we move another `1 ball also from the origin by a distance δ, where ‖δ‖1 is fixed, then the overlapped region
between the two `1 balls may have different shapes and sizes (See Figure 1). The characterization of this overlapped region is the key step for proving tight certificates, hence the existing techniques do not apply for `1 norm.
In this work, we derive a tight `1 robustness guarantee under isotropic Laplace distributions. The Laplace distribution can be interpreted as an infinite mixture of uniform distributions over `1-norm balls, which is a natural “conjugate” distribution for `1 norm. Due to asymmetry, we first identified the tight robustness certificate for attacking the model in one particular direction, δ = (‖δ‖1, 0, · · · , 0). To show that other perturbation directions cannot lead to worse results, we convert the d dimensional likelihood function into an one dimensional function, where we apply relaxation for various δ and show that the worst case result is bounded by the specific direction (‖δ‖1, 0, · · · , 0). Theoretically, our certificate is tight in the binary classification setting. In the multi-class classification setting, our certificate is always tighter than the previous certificate proposed by Lecuyer et al. (2019). The theoretical improvement always leads to superior empirical results on certifying the same model, where we demonstrate the result on CIFAR-10 and ImageNet with ResNet models. Moreover, the proposed robustness certificate on models smoothed by Laplace distributions also outperforms the same models trained and certified using Gaussian distributions (Cohen et al., 2019) in `1 certified robustness, where the Gaussian-based robustness certificate is adapted from `2 norm.
2 RELATED WORK
Robustness of a model can be defined in various ways. For example, a Feynman-Kac formalism can be used to improve robustness (Wang et al., 2018). In this paper, we focus on the classification setting, where the goal is to provide a guarantee of a constant prediction within a small region specified via some metric. The robustness certificate can be either exact or conservative, so long as a constant prediction is guaranteed in the certified region. Note that certifying a completely black-box model requires checking the prediction values at every point around the point of interest, which is clearly infeasible. A practical certification algorithm inevitably has to specify and leverage the functional structure of the classifier in use to reduce the required computation.
Exact certificates. The exact certificate of deep networks is typically derived for networks with a piecewise linear activation function such as ReLU. Such networks have an equivalent mixed integer linear representation (Cheng et al., 2017; Lomuscio & Maganti, 2017; Dutta et al., 2017; Bunel et al., 2018). Hence, one may apply mixed integer linear programming to find the worst-case adversary within any convex polyhedron such as an ℓ1-ball or ℓ∞-ball. Despite the elegant solution, the complexity is, in general, NP-hard and the algorithms are not scalable to large problems (Tjeng et al., 2017).
Conservative certificates. A conservative certificate can be more scalable than the exact methods, since one can trade off the accuracy of certification for efficiency (Gouk et al., 2018; Tsuzuku et al., 2018; Cisse et al., 2017; Anil et al., 2018; Hein & Andriushchenko, 2017). For example, one can relax the search for the worst-case adversary into a simpler optimization problem that only bounds the effect of such an adversary. Alternatively, one can also consider the robustness problem in a modular way, where the robustness guarantee is derived iteratively for each layer of the deep network by considering the feasible values of each hidden layer (Gowal et al., 2018; Weng et al., 2018; Zhang et al., 2018; Mirman et al., 2018; Singh et al., 2018). However, this line of work has not yet been demonstrated to be feasible for realistic networks on high-dimensional problems like ImageNet.
Randomized smoothing. Randomized smoothing has been proved to be closely related to robustness. Although similar techniques were tried by (Liu et al., 2018; Cao & Gong, 2017), no corresponding proofs were given; Li et al. (2018) and Cohen et al. (2019) proved certified robustness in the ℓ2 norm under isotropic Gaussian noise, and Lee et al. (2019) proved robustness for the ℓ0 norm. Lecuyer et al. (2019) use techniques from differential privacy to prove ℓ1 robustness under Gaussian and Laplace noise respectively, but the bounds are not tight. Li et al. (2018); Pinot et al. (2019) use the Rényi divergence framework without a tightness proof. Our results synthesize the ideas in (Cohen et al., 2019; Lee et al., 2019; Lecuyer et al., 2019; Li et al., 2018; Pinot et al., 2019) and prove the tight robustness radius in the binary classification setting.
3 PRELIMINARIES
Definition 1 (Laplace distribution) Given λ ∈ R_+ and d ∈ Z_+, we use L(λ) to denote the Laplace distribution in dimension d with parameter λ. The p.d.f. of L(λ) is denoted as
L(x; λ) ≜ (1/(2λ)^d) exp(−‖x‖_1/λ).
As we will see in Lemma 3.1, in the smoothing analysis we are interested in the likelihood ratio of two random variables X = ε and Y = δ + ε (here ε ∼ L(λ) and δ ∈ R^d is a fixed vector). Specifically,
µ_Y(x)/µ_X(x) = exp( −(1/λ)(‖x − δ‖_1 − ‖x‖_1) ).
Therefore, the likelihood ratio between two d-dimensional random variables is controlled by a one-dimensional random variable T(x) ≜ ‖x − δ‖_1 − ‖x‖_1, where x ∼ L(λ). This transformation is crucial in our analysis, and it is easy to see that T(x) is a mixed random variable, since P_x(T(x) = ‖δ‖_1) > 0. In our analysis, we need to calculate the inverse of the c.d.f. of T(x). However, since T(x) is a mixed random variable, the inverse may not always exist. See Figure 3 for an illustration, where the inverse of the probability 0.85 does not exist. To deal with this case, we have the following modified version of the Neyman-Pearson Lemma, with the proof in Appendix A.
Lemma 3.1 (Neyman-Pearson Lemma for mixed random variables). Let X ∼ L (λ) and Y ∼ L (λ) + δ. Let h : Rd → {0, 1} be any deterministic or random function. Given any β ∈ R, and S′ ⊆ { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 = β } :
1. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 > β } ∪ S′, and P(h(X) = 1) ≥ P(X ∈ S) then P(h(Y ) = 1) ≥ P(Y ∈ S)
2. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 < β } ∪S′, and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y ) = 1) ≤ P(Y ∈ S)
4 MAIN RESULTS
In this paper, we apply the randomized smoothing technique (Cohen et al., 2019) to obtain robustness certificates, which works as follows. Given an input x, we perturb it with ε, s.t. ε ∼ L(λ). Then, instead of evaluating the robustness of the original function f(x), we evaluate g(x) ≜ arg max_c P(f(x + ε) = c), which is effectively the smoothed version of f(x).
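For concreteness, the smoothed prediction g(x) can be estimated by Monte Carlo sampling, as in the minimal sketch below. The classifier handle, sampling count, and variable names are illustrative assumptions of ours, not the exact implementation used in the experiments.

import numpy as np

def smoothed_predict(base_classifier, x, lam, num_samples=1000, num_classes=10):
    # Monte Carlo estimate of g(x) = argmax_c P(f(x + eps) = c), eps ~ Laplace(0, lam).
    # `base_classifier` is assumed to map an array of shape (n, d) to integer labels of shape (n,).
    counts = np.zeros(num_classes, dtype=np.int64)
    for _ in range(num_samples):
        noise = np.random.laplace(loc=0.0, scale=lam, size=x.shape)  # isotropic Laplace noise
        label = base_classifier((x + noise)[None, :])[0]
        counts[label] += 1
    return int(np.argmax(counts)), counts / num_samples  # predicted class and empirical class frequencies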
4.1 ROBUSTNESS CERTIFICATES FOR GENERAL CASES
Our first theorem proves that for the smoothed classifier g, and a given input x, there always exists a robust radius R, such that any perturbation δ s.t. ‖δ‖1 ≤ R, does not alter the prediction of g(x).
Theorem 1 Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x + ε) = c). Suppose P_A, P_B ∈ [0, 1] are such that
P(f(x + ε) = c_A) ≥ P_A ≥ P_B ≥ max_{c ≠ c_A} P(f(x + ε) = c).
Then g(x + δ) = g(x) for all ‖δ‖_1 ≤ R, where
R = max{ (λ/2) log(P_A/P_B), −λ log(1 − P_A + P_B) }.   (1)
Some Remarks:
1. When P_A → 1 or P_B → 0, we get R → ∞. This is reasonable: since the Laplace distribution is supported on the whole R^d, P_A → 1 is equivalent to f = c_A almost everywhere.
2. Compared with (Lecuyer et al., 2019), where R = (λ/2) log(P_A/P_B), our bound is better whenever
(1 − 2P_A(1 − P_A) − √(1 − 4P_A(1 − P_A))) / (2P_A) ≤ P_B ≤ (1 − 2P_A(1 − P_A) + √(1 − 4P_A(1 − P_A))) / (2P_A).
See Figure 4 for an illustration, where baseline denotes the bound R = (λ/2) log(P_A/P_B); a small numeric comparison is also sketched below.
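The radius of Eqn. (1) can be evaluated against the baseline bound for given estimates of P_A and P_B; the following sketch is our own illustration, not code from the paper.

import numpy as np

def certified_radius_theorem1(p_a, p_b, lam):
    # Radius from Eqn. (1): max of the baseline term and the complement-based term.
    baseline = 0.5 * lam * np.log(p_a / p_b)        # Lecuyer et al. (2019)
    complement = -lam * np.log(1.0 - p_a + p_b)     # new term from Theorem 1
    return max(baseline, complement)

# Example: the complement-based term dominates when p_a is large and p_b is moderate.
print(certified_radius_theorem1(0.9, 0.08, lam=1.0))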
Proof sketch: (The full proof is in Appendix B) For an arbitrary classifier f, we can transform it into a randomly smoothed classifier g using the randomized smoothing technique, where g returns class c_A with probability no less than P_A, and class c_B with probability no more than P_B. Below we list the three main ideas used in our proof:
1. How to deal with an arbitrary f with PA and PB?
Following Cohen et al. (2019), we use Neyman-Pearson Lemma to transform the relation between P(f(X) = cA) and P(f(Y ) = cA) into the relation between P(X ∈ A) and P(Y ∈ A). From Lemma 3.1, Neyman-Pearson Lemma still holds for mixed random variables.
2. How to deal with the relation between X = ε and Y = ε + δ?
Inspired by Lecuyer et al. (2019), we use the DP-form inequality (P(Y ∈ A) ≤ e^ε P(X ∈ A)) to deal with the relation between P(X ∈ A) and P(Y ∈ A). For the Laplace distribution, ε = ‖δ‖_1/λ.
3. Take complements to get tighter bound.
When P_A or P_B is smaller than 1/2, the above DP-form inequality becomes tighter. Therefore, we analyze A^c when P_A ≥ 1/2 to get a new bound, and compare it with the baseline expression. We derive this bound with the Neyman-Pearson Lemma in this work, but an alternative approach is to use the Rényi divergence (Li et al., 2018).
4.2 TIGHT ROBUSTNESS CERTIFICATES FOR BINARY CASE
Although we get improved result over Lecuyer et al. (2019), the bound in Theorem 1 is not tight since it considers the general case with multiple categories. In this section, we first present our result for binary classification (Theorem 2), which further improves over Theorem 1.
Theorem 2 (binary case) Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x + ε) = c). Suppose there are only two classes c_A and c_B, and P_A ∈ [1/2, 1] s.t. P(f(x + ε) = c_A) ≥ P_A. Then g(x + δ) = g(x) for all ‖δ‖_1 ≤ R, for
R = −λ log[2(1 − P_A)].   (2)
Sketch of the proof: (The full proof is in Appendix C) Theorem 2 is a special binary case of Theorem 1. We can use a method similar to Theorem 1 to get the result. However, it is worth noting that in the binary case, our new improved bound in Theorem 1 always dominates the bound by Lecuyer et al. (2019). Moreover, our bound in Eqn. (2) is tight, as shown below.
Theorem 3 (tight bound in binary case) In the same setting as Theorem 2, assume P_A + P_B ≤ 1 and P_A ≥ 1/2. For any R′ > −λ log[2(1 − P_A)], there exist a base classifier f∗ and a perturbation δ∗ with g∗(x) = arg max_c P(f∗(x + ε) = c) and ‖δ∗‖_1 = R′, s.t. g∗(x) ≠ g∗(x + δ∗).
Sketch of the proof: (The full proof is in Appendix C) For Theorem 3, we prove that the bound in Theorem 2 is tight by carrying out the calculation in the one-dimensional case, where δ = (‖δ‖_1, 0, . . . , 0). A direct computation shows that for this δ,
P(Y ∈ B) = ∫_{−∞}^{‖δ‖_1 + λ log[2P_B]} (1/(2λ)) exp(−|x|/λ) dx
         = exp(‖δ‖_1/λ) P_B, when ‖δ‖_1 ≤ −λ log[2P_B],
         = 1 − (1/(4P_B)) exp(−‖δ‖_1/λ), otherwise.
Therefore, when ‖δ‖_1 ≤ −λ log[2P_B], the DP-inequality is tight, and the worst-case δ appears in the one-dimensional case.
Figure 5 shows the reason why the inequality is tight. When δ is small, for P(X ∈ B), the set B we selected satisfies ∀x ∈ B, T(x) = −‖δ‖_1 (red part). When P(Y ∈ B) is considered, the set B is shifted to the left by δ. However, because δ is small, B after the shift still satisfies the requirement ∀x ∈ B, T(x) = −‖δ‖_1 (blue part). Therefore, the inequality is tight.
4.3 METHOD COMPARISON
We compared our method with Cohen et al.'s and Lecuyer et al.'s in the binary case; see Table 1. We plot the curves in Figure 6. As we can see, under the same noise variance, our method achieves a larger robustness radius. Below we show simple derivations of the bounds in Table 1.
Robustness radius of Lecuyer et al. (2019)
Using the basic inequality from differential privacy, we have:
P(f(X) = c_A) ≤ exp(β) P(f(Y) = c_A),   P(f(Y) = c_B) ≤ exp(β) P(f(X) = c_B),
where β = ‖δ‖_1/λ. The above two inequalities show that to guarantee P(f(Y) = c_A) > P(f(Y) = c_B), it suffices to show that
P(f(X) = c_A) > exp(2β) P(f(X) = c_B).
So, plugging in β = ‖δ‖_1/λ, we get ‖δ‖_1 ≤ (λ/2) log(P_A/P_B). Furthermore, in the binary case we can plug in P_B = 1 − P_A and get the robustness radius R = (λ/2) log(P_A/(1 − P_A)).
Robustness radius of Cohen et al. (2019)
Denote B_{p,r}(c) = {x : ‖x − c‖_p ≤ r}. Since B_{1,r}(c) ⊂ B_{2,r}(c), the radius in (Cohen et al., 2019) can be used directly as an ℓ1 radius, namely σΦ^{-1}(P_A).
Besides, B_{1,r+ε}(c) ⊄ B_{2,r}(c) for any ε > 0, and (Cohen et al., 2019) is an exact robustness guarantee, so σΦ^{-1}(P_A) is the best ℓ1 radius that isotropic Gaussian randomized smoothing can achieve.
Finally, we prove that −λ log[2(1 − P_A)] ≥ (λ/2) log(P_A/(1 − P_A)). For simpler notation, set P_A = p ≥ 0.5. It suffices to show that −λ log[2(1 − p)] ≥ (λ/2) log(p/(1 − p)). Applying the exponential, it suffices to show that 1/(2(1 − p)) ≥ √(p/(1 − p)), which is simply p(1 − p) ≤ 1/4.
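The three binary-case radii of Table 1 are easy to compare numerically. The sketch below is our own illustration (with σ = √2 λ so all noises share the same variance); function and variable names are ours.

import numpy as np
from scipy.stats import norm

def radii_binary(p_a, sigma=1.0):
    # Certified l1 radii in the binary case for a common noise standard deviation sigma.
    lam = sigma / np.sqrt(2.0)                       # Laplace parameter with the same variance
    ours = -lam * np.log(2.0 * (1.0 - p_a))          # Theorem 2
    lecuyer = 0.5 * lam * np.log(p_a / (1.0 - p_a))  # Lecuyer et al. (2019)
    cohen = sigma * norm.ppf(p_a)                    # Cohen et al. (2019), reused as an l1 radius
    return ours, lecuyer, cohen

for p in (0.6, 0.8, 0.95, 0.99):
    print(p, radii_binary(p))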
5 EXPERIMENTS
5.1 IMPLEMENTATION DETAILS
Monte Carlo. Since we cannot get the exact value of P_A, we use the Monte Carlo method to approximate it. More specifically, we draw multiple random samples from the Laplace distribution to estimate P_A. One way to do this is to group the samples and estimate P_A non-parametrically.
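A common way to turn Monte Carlo counts into a one-sided lower confidence bound on P_A is the Clopper-Pearson interval; the sketch below is our own illustration, and the exact estimator used in our experiments follows Cohen et al. (2019).

from scipy.stats import beta

def lower_confidence_bound(num_top, num_samples, alpha=0.001):
    # One-sided (1 - alpha) Clopper-Pearson lower bound on P_A, given that the top class
    # was observed num_top times out of num_samples noisy draws.
    if num_top == 0:
        return 0.0
    return beta.ppf(alpha, num_top, num_samples - num_top + 1)

# Example: 99,200 hits out of 100,000 noisy samples.
print(lower_confidence_bound(99200, 100000))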
In our experiments, we applied two different types of training, as described below.
Type1-Training: The first method is intuitive, and was applied in (Cohen et al., 2019). In the training process, we add noise to the inputs:
inputs = inputs + noise
where the noise is sampled from isotropic Laplace distribution.
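A minimal PyTorch-style training step for this scheme might look as follows; this is our own sketch assuming a generic model, optimizer, and cross-entropy loss, not the exact training code.

import torch
import torch.nn.functional as F

def type1_training_step(model, optimizer, inputs, targets, lam):
    # One training step with isotropic Laplace noise added to the inputs.
    noise = torch.distributions.Laplace(0.0, lam).sample(inputs.shape).to(inputs.device)
    loss = F.cross_entropy(model(inputs + noise), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()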
Type2-Training: The second method was recently proposed by Salman et al. (2019). The idea is to use adversarial noise samples instead of the raw noise samples in a neighborhood to train the base classifier. Each training sample can be decomposed to
inputs = inputs + noise + perturbation
where the noise comes from an isotropic Laplace distribution, and the perturbation is found approximately by the gradient of loss with respect to the input. Concretely, if we denote the loss as L and the input as x, the perturbation ∆ can be calculated by ∆ = a ∗ sign(∇xL(θ, x, y)), where a is a constant.
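A corresponding sketch for Type2 training adds a one-step sign-gradient perturbation on top of the Laplace noise. This is our own illustration of the idea; the procedure of Salman et al. (2019) differs in details such as multi-step attacks.

import torch
import torch.nn.functional as F

def type2_training_step(model, optimizer, inputs, targets, lam, a=0.25):
    # One training step on noisy inputs shifted by a sign-gradient perturbation Delta = a * sign(grad_x L).
    noise = torch.distributions.Laplace(0.0, lam).sample(inputs.shape).to(inputs.device)
    noisy = (inputs + noise).detach().requires_grad_(True)
    loss = F.cross_entropy(model(noisy), targets)
    grad = torch.autograd.grad(loss, noisy)[0]
    adv_inputs = (noisy + a * grad.sign()).detach()
    loss = F.cross_entropy(model(adv_inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()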
Evaluation Index. In this paper, we choose certified accuracy as our evaluation index. The certified accuracy at radius r is the proportion of correctly classified samples whose certified robustness radius is at least r. Specifically, consider a group of n samples {x_i}, i = 1, 2, . . . , n, with corresponding certified robustness radii R_i, and let the indicator x_i = 1 if the sample is classified correctly and x_i = 0 otherwise. For a given r, the certified accuracy is defined as α = Σ_{i=1}^{n} x_i 1(R_i ≥ r)/n, where 1(·) is the indicator function.
However, from Section 5.1 we know that we cannot compute the exact robustness radius R, so we use its estimate R̂ to approximate R, which leads to an “approximate certified accuracy” α̂, calculated as
α̂ = Σ_{i=1}^{n} x_i 1(R̂_i ≥ r)/n.   (3)
Cohen et al. (2019) demonstrate that when the significance level of R̂ is small, the difference between these two quantities is negligible. In practice, we plot the approximate certified accuracy α̂ as a function of the radius r. From Eqn. (3), we know that α̂ is non-increasing w.r.t. r, and when r → ∞, α̂ → 0. Hyperparameters. In our paper, we set all our hyperparameters following Cohen et al. (2019). Specifically, we set the significance level to 0.001, use n0 = 100 samples in the Monte Carlo simulation (used to get the bound for α̂) and n = 100,000 samples in the estimation part (used to estimate α̂). Moreover, we test three noise levels on the CIFAR-10 and ImageNet datasets (σ = 0.25, 0.50, 1.00). Since (Cohen et al., 2019) use Gaussian noise and we use Laplace noise, the two should have the same standard deviation during comparison, which requires σ = √2 λ.
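The approximate certified accuracy curve of Eqn. (3) can be computed directly from the per-sample certificates; the following sketch is our own illustration, with hypothetical example values.

import numpy as np

def certified_accuracy(radii, correct, r):
    # Approximate certified accuracy at radius r.
    # radii:   estimated certified radii R_hat_i (use -inf for abstentions)
    # correct: 1 if sample i is classified correctly, 0 otherwise
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean(correct * (radii >= r)))

grid = np.linspace(0.0, 2.0, 21)
curve = [certified_accuracy([0.3, 1.2, 0.0, 0.8], [1, 1, 0, 1], r) for r in grid]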
5.2 EXPERIMENTAL RESULTS
Results on ImageNet and CIFAR-10. We applied randomized smoothing on CIFAR-10 (Krizhevsky (2009)) and ImageNet (Deng et al. (2009)). On each dataset, we trained several randomly smoothed models with different standard deviations σ for the Laplace noise. In order to stay in line with Cohen et al.'s method and allow a comparison, we select σ = 0.25, 0.50, 1.00 on CIFAR-10 and ImageNet, with the corresponding parameter λ = σ/√2.
Figure 6 draws the certified accuracy achieved by smoothing with each sigma. For the ImageNet dataset, we only use the most basic training method (Type1 Training). For the CIFAR-10 data set, we use two training methods (Type 1 and Type 2 Training). We can see that the smaller sigma performs better when the radius is smaller. As the noise gets bigger, the accuracy becomes lower, but the robustness guarantee becomes higher. The dashed black line shows the empirical robust accuracy of an undefended classifier from Cohen et al. (2019).
Comparison with baseline.
We show our comparison results in the following. Based on Table 1, we test our method on CIFAR-10 with the ResNet110 architecture using both Type1 and Type2 training, and on ImageNet with the ResNet50 architecture using Type1 training. We compare our results with (Cohen et al., 2019) and (Lecuyer et al., 2019) under the same standard deviation σ. For the base classifiers, ours and Lecuyer et al.'s share the same base classifier trained with Laplace noise, while Cohen et al.'s uses a base classifier trained with Gaussian noise.
6 CONCLUSION
In this paper, we combine the inequality from differential privacy and the classic Neyman-Pearson Lemma to resolve the challenging asymmetry of the ℓ1 metric and the mixed discrete-continuous nature of the likelihood ratios under isotropic Laplace distributions. In addition, by comparing the high-dimensional case with a special edge case, we prove the tight ℓ1 robustness guarantee for binary classification problems and obtain state-of-the-art certified accuracy in large-scale experiments.
The establishment of the ℓ1 certificate via Laplace distributions and the prior result of the ℓ2 certificate via Gaussian distributions may be extended to a generic theorem for a general ℓp-norm robustness certificate via the associated realization of the generalized Gaussian distribution, where the aforementioned results are special cases of the general scheme. The introduction of the mixed random variable analysis and ℓp geometry analysis may serve as a valuable extension of existing works toward such a general goal.
A PROOF OF LEMMA 1
In this section, we will prove that Neyman-Pearson Lemma holds with mixed random variable.
WLOG, x = 0, X ∼ L (λ) and Y ∼ L (λ) +δ. We will firstly introduce Neyman-Pearson Lemma, which plays an important role in our proof.
Lemma 3.1 (restated).LetX ∼ L (λ) and Y ∼ L (λ)+δ. Let h : Rd → {0, 1} be any deterministic or random function. Given any β ∈ R, and S′ ⊆ { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 = β } :
1. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 > β } ∪ S′, and P(h(X) = 1) ≥ P(X ∈ S) then P(h(Y ) = 1) ≥ P(Y ∈ S)
2. If S = { z ∈ Rd : ‖z − δ‖1 − ‖z‖1 < β } ∪S′, and P(h(X) = 1) ≤ P(X ∈ S), then P(h(Y ) = 1) ≤ P(Y ∈ S) Proof of Lemma 3.1 First, notice that P(X ∈ S) can be regarded as a mixed random variable. We want to prove that as long as we can choose a S′ that satisfies P(X ∈ S) ≤ P(h(X) = 1), Neyman-Pearson Lemma can always hold.
Let’s first see what happens in the proof of Neyman-Pearson Lemma. Notice that X and Y are continuous variables, but X ∈ S and Y ∈ S can be regarded as mixed continuous-discrete event. Then we can choose a reasonable S′ for X and Y . We will prove case 1 and the other one can be proved with similar method.
P(h(Y) = 1) − P(Y ∈ S)
= ∫_{R^d} h(1|z) µ_Y(z) dz − ∫_S µ_Y(z) dz
= [∫_{S^c} h(1|z) µ_Y(z) dz + ∫_S h(1|z) µ_Y(z) dz] − [∫_S h(1|z) µ_Y(z) dz + ∫_S h(0|z) µ_Y(z) dz]
= ∫_{S^c} h(1|z) µ_Y(z) dz − ∫_S h(0|z) µ_Y(z) dz
≥ t ( ∫_{S^c} h(1|z) µ_X(z) dz − ∫_S h(0|z) µ_X(z) dz )
= t ( [∫_{S^c} h(1|z) µ_X(z) dz + ∫_S h(1|z) µ_X(z) dz] − [∫_S h(0|z) µ_X(z) dz + ∫_S h(1|z) µ_X(z) dz] )
= t ( P(h(X) = 1) − P(X ∈ S) )
≥ 0.   (4)
The first inequality holds due to the construction of the mixed set S: if z ∈ S, µ_Y(z)/µ_X(z) ≤ t, and if z ∈ S^c, µ_Y(z)/µ_X(z) ≥ t. Compared with a purely continuous set, the only difference appears in the handling of the equality case.
It should be noted that P(X ∈ S) and P(Y ∈ S) should keep consistent, which means that they should have the same S′. In this derivation, we can find that P (X ∈ S) and P (Y ∈ S) use the same set S′ in Eqn. (4).
Next, we will plug in the condition that X and Y are isotropic Laplaces.
Then we just need to prove that
{ z ∈ R^d : µ_Y(z)/µ_X(z) ≤ t } ⟺ { z ∈ R^d : ‖z − δ‖_1 − ‖z‖_1 ≥ β }.
When X and Y are isotropic Laplace variables, the likelihood ratio becomes
µ_Y(z)/µ_X(z) = exp(−‖z − δ‖_1/λ) / exp(−‖z‖_1/λ) = exp( −(1/λ)(‖z − δ‖_1 − ‖z‖_1) ).
By choosing β = −λ log(t), we can derive that
‖z − δ‖_1 − ‖z‖_1 ≥ β ⟺ µ_Y(z)/µ_X(z) ≤ t,
‖z − δ‖_1 − ‖z‖_1 ≤ β ⟺ µ_Y(z)/µ_X(z) ≥ t.
B PROOF OF THEOREM 1
Theorem 1 (restated) Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x + ε) = c). Suppose P_A, P_B ∈ [0, 1] are such that
P(f(x + ε) = c_A) ≥ P_A ≥ P_B ≥ max_{c ≠ c_A} P(f(x + ε) = c).
Then g(x + δ) = g(x) for all ‖δ‖_1 ≤ R, where
R = max{ (λ/2) log(P_A/P_B), −λ log(1 − P_A + P_B) }.   (5)
Proof of Theorem 1. Denote T(x) = ‖x − δ‖_1 − ‖x‖_1. Using the triangle inequality we can derive a bound for T(x):
−‖δ‖_1 ≤ T(x) ≤ ‖δ‖_1.   (6)
Pick β_1, β_2 such that there exist A′ ⊆ {z : T(z) = β_1} and B′ ⊆ {z : T(z) = β_2} with
P(X ∈ {z : T(z) > β_1} ∪ A′) = P_A ≤ P(f(X) = c_A),
P(X ∈ {z : T(z) < β_2} ∪ B′) = P_B ≥ P(f(X) = c_B).
Define
A := {z : T(z) > β_1} ∪ A′,   B := {z : T(z) < β_2} ∪ B′.
Thus, applying Lemma 3.1, we have
P(Y ∈ A) ≤ P(f(Y) = c_A),   P(Y ∈ B) ≥ P(f(Y) = c_B).   (7)
Then consider P(Y ∈ A) and P(Y ∈ B):
P(Y ∈ A) = ∫_A (2λ)^{−d} exp(−‖x − δ‖_1/λ) dx
         = ∫_A (2λ)^{−d} exp(−‖x‖_1/λ) exp(−T(x)/λ) dx
         ≥ exp(−‖δ‖_1/λ) ∫_A (2λ)^{−d} exp(−‖x‖_1/λ) dx
         = exp(−‖δ‖_1/λ) P_A.   (8)
The inequality follows from Eqn. (6). Similarly, we can get
P(Y ∈ B) = ∫_B (2λ)^{−d} exp(−‖x − δ‖_1/λ) dx
         = ∫_B (2λ)^{−d} exp(−‖x‖_1/λ) exp(−T(x)/λ) dx
         ≤ exp(‖δ‖_1/λ) ∫_B (2λ)^{−d} exp(−‖x‖_1/λ) dx
         = exp(‖δ‖_1/λ) P_B.   (9)
First, we would like to show that robustness can be guaranteed when R ≤ (λ/2) log(P_A/P_B).
If ‖δ‖_1 ≤ (λ/2) log(P_A/P_B), by Eqns. (7), (8), and (9), we have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ P(Y ∈ B) ≥ P(f(Y) = c_B).
Then, we would like to show that robustness can be guaranteed when R ≤ −λ log(1 − P_A + P_B).
From Eqn. (9), we know that P(Y ∈ B) ≤ exp(‖δ‖_1/λ) P_B. Besides, by applying Eqn. (9) to the set A^c, we get P(Y ∈ A) ≥ 1 − exp(‖δ‖_1/λ)(1 − P_A). So we can calculate that if ‖δ‖_1 ≤ −λ log(1 − P_A + P_B), we have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ P(Y ∈ B) ≥ P(f(Y) = c_B).
Moreover, by simple algebraic manipulation, we can derive that −λ log(1 − P_A + P_B) ≥ (λ/2) log(P_A/P_B) requires
(1 − 2P_A(1 − P_A) − √(1 − 4P_A(1 − P_A))) / (2P_A) ≤ P_B ≤ (1 − 2P_A(1 − P_A) + √(1 − 4P_A(1 − P_A))) / (2P_A).
The proof for Theorem 1 is finished.
C PROOF OF THEOREM 2 AND THEOREM 3
Theorem 2 (restated) (binary case) Let f : R^d → Y be a deterministic or random function, and let ε ∼ L(λ). Let g(x) = arg max_c P(f(x + ε) = c). Suppose there are only two classes c_A and c_B, and P_A ∈ [1/2, 1] s.t. P(f(x + ε) = c_A) ≥ P_A. Then g(x + δ) = g(x) for all ‖δ‖_1 ≤ R, for
R = −λ log[2(1 − P_A)].   (10)
Proof of Theorem 2:
It is similar to the proof of Theorem 1. Pick β_3 such that there exists B′ ⊆ {z : T(z) = β_3} with
P(X ∈ {z : T(z) < β_3} ∪ B′) = P_B = P(f(X) = c_B).
Define
S := {z : T(z) < β_3} ∪ B′.
So we also have P(X ∉ S) = P_A = P(f(X) = c_A). Plugging into Lemma 3.1, we get
P(Y ∉ S) ≤ P(f(Y) = c_A),   P(Y ∈ S) ≥ P(f(Y) = c_B).
Using a method similar to Eqn. (9), we get
P(Y ∈ S) ≤ exp(‖δ‖_1/λ) P_B.
Since we have P_B = P(f(X) = c_B) = 1 − P(f(X) = c_A) ≤ 1 − P_A,
thus, if ‖δ‖_1 ≤ R = −λ log[2(1 − P_A)], it holds that
P(Y ∈ S) ≤ exp(‖δ‖_1/λ) P_B ≤ exp(−λ log[2(1 − P_A)]/λ)(1 − P_A) = 1/2.
That is to say, P(f(Y) = c_A) ≥ P(Y ∉ S) ≥ 1/2 ≥ P(Y ∈ S) ≥ P(f(Y) = c_B). The proof of Theorem 2 is finished.
Theorem 3 (restated) (tight bound in binary case) In the same setting as Theorem 2, assume P_A + P_B ≤ 1 and P_A ≥ 1/2. For any R′ > −λ log[2(1 − P_A)], there exist a base classifier f∗ and a perturbation δ∗ with g∗(x) = arg max_c P(f∗(x + ε) = c) and ‖δ∗‖_1 = R′, s.t. g∗(x) ≠ g∗(x + δ∗).
Proof of Theorem 3: Here, we first set δ = (‖δ‖_1, 0, . . . , 0). For simplicity, we denote δ = ‖δ‖_1, and define
A := { z : |z − δ| − |z| ≥ max{δ + 2λ log[2(1 − P_A)], −δ} }.
Then, we can calculate that
P(X ∈ A) = P_x(|x − δ| − |x| ≥ max{δ + 2λ log[2(1 − P_A)], −δ})
         = ∫_{−∞}^{−λ log[2(1−P_A)]} (1/(2λ)) exp(−|x|/λ) dx
         = 1 − ∫_{−λ log[2(1−P_A)]}^{∞} (1/(2λ)) exp(−x/λ) dx
         = P_A,   (11)
where x ∼ (1/(2λ)) exp(−|x|/λ) and δ = ‖δ‖_1. Notice that if δ + 2λ log[2(1 − P_A)] ≤ −δ, we obtain the integral equality by choosing S′. With Eqn. (11), we have
P(X ∈ A) = P_A ≤ P(f(X) = c_A).   (12)
Thus, plugging Eqn. (12) into the results of Lemma 3.1, we have
P(Y ∈ A) ≤ P(f(Y) = c_A).   (13)
Also, since Y = X + δ, it can be derived that
P(Y ∈ A) = ∫_{−∞}^{−λ log[2(1−P_A)] − δ} (1/(2λ)) exp(−|x|/λ) dx.   (14)
Here we use the consistency of the events X ∈ A and Y ∈ A: since Y can be regarded as an offset of X, the integration limit is translated by the same length. So, if ‖δ‖_1 = δ ≤ −λ log[2(1 − P_A)], by Eqns. (13) and (14), we have
P(f(Y) = c_A) ≥ P(Y ∈ A) ≥ 1/2.
This means that the bound we obtained in the binary case is tight, and the worst-case δ appears when δ = (δ, 0, . . . , 0). Furthermore, if we slightly enlarge δ, there is a case in which the robustness is destroyed.
The proof for Theorem 3 is finished.
D WHY LAPLACE NOISE INSTEAD OF GAUSSIAN
In this section, we theoretically analyze the certification capabilities of Gaussian and Laplace noise. We show that, given the same base classifier f, the parameter λ of the Laplace distribution is less sensitive than the parameter σ of the Gaussian distribution. Given a base classifier f, where
f(x) = c_A if |x| ≤ 1, and f(x) = c_B otherwise,
and two randomly smoothed functions
g_1(x) = arg max_c P(f(x + ε) = c), ε ∼ L(0, λ),
g_2(x) = arg max_c P(f(x + ε) = c), ε ∼ N(0, σ²),
we aim to prove that Laplace noise protects the original prediction better than Gaussian noise. Formally, we compare their Rectified Optional Parameter Space (ROPS), defined as Λ = {√2 λ : g_1(x; λ) = f(x)} and Σ = {σ : g_2(x; σ) = f(x)}. Note that the rectifying term √2 is due to the fact that σ = √2 λ yields the same variance. Essentially, ROPS indicates the feasible region where the smoothing distribution does not negatively impact the base classifier, thus measuring the sensitivity of the smoothing distribution (the larger the better).
First, we would like to compare the predictions at a given point (x, f(x)) = (0, c_A). We have
g_1(0) = c_A ⟺ P(f(0 + ε) = c_A) ≥ 1/2 ⟺ P(|ε| ≤ 1) = 1 − exp(−1/λ) ≥ 1/2 ⟺ λ ≤ 1/log 2,
g_2(0) = c_A ⟺ P(f(0 + ε) = c_A) ≥ 1/2 ⟺ P(|ε| ≤ 1) = 2Φ(1/σ) − 1 ≥ 1/2 ⟺ σ ≤ 1/Φ^{-1}(3/4).
Since √2/log 2 > 1/Φ^{-1}(3/4), Laplace noise has a larger ROPS than Gaussian noise at the point x = 0.
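The inequality √2/log 2 > 1/Φ^{-1}(3/4) is easy to verify numerically; the one-line check below is our own sanity check, not part of the analysis.

import numpy as np
from scipy.stats import norm

laplace_rops_edge = np.sqrt(2.0) / np.log(2.0)   # rectified upper edge of Lambda, about 2.04
gaussian_rops_edge = 1.0 / norm.ppf(0.75)        # upper edge of Sigma, about 1.48
print(laplace_rops_edge, gaussian_rops_edge)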
The analysis can be further extended in two cases.
First, if we have x 6= 0, what is the corresponding ROPS that leads to the desired result (g(x) = f(x))? We show in Fig. 10 that we will have a larger ROPS under Laplace noises.
Second, if we have a fixed x but fixed a desired certified radius, what is the corresponding ROPS? We show in Fig. 11 that Laplace noises again have a larger ROPS.
We empirically validate this finding with ResNet110 on CIFAR-10. The resulting smoothed model has 24.8% clean accuracy under a Laplace noise, and 23.7% clean accuracy under a Gaussian noise (with the same variance as the Laplace noise). Here the accuracy is computed with respect to predictions of the base classifier instead of the labels (to illustrate how the smoothing impacts the predictions). | 1. What is the main contribution of the paper regarding certified classifiers?
2. What are the strengths and weaknesses of the proposed method compared to other works?
3. How does the reviewer assess the significance and value of the results, particularly regarding tightness under \ell_1 norm?
4. What are the concerns regarding the presentation and writing style of the paper? | Review | Review
Summary.
The authors propose a new certified classifier in \ell_1 norm that is tight. That is to say, upon smoothing a given classifier f with Laplacian noise, a smoothed version of that classifier (probabilistic maximum majority vote) is certified with a radius measured in \ell_1 norm. The authors show that this bound is tight for binary classifiers. These results are complementary to Cohen et al. results.
Major comments.
1) The major contribution of this paper is the tightness under the \ell_1 norm for a binary classifier. I do not find this particularly significant. The question is of what value such a result is, other than a mathematical exercise. For instance, a good justification that the paper is lacking could be one where the authors show that their radius is indeed tighter than all other works. The paper still lacks this (I will elaborate on this later), although their bounds are indeed tighter than Lecuyer et al.'s. Since it is not clear whether or not the new certified smoothed classifier has indeed the largest radius among all other works, then at least a justification is needed for why one would prefer Laplacian noise over Gaussian noise. Why is Gaussian smoothing sufficient for this purpose given that we do not know for sure that the radius is larger? What value/advantages does this add? The authors motivate their work by saying deriving the tightest \ell_1 is difficult due to the "asymmetry" of the norm. While I do agree on this, it is not enough motivation on its own, as otherwise we are just doing abstract maths here.
The new derived radius is not really comparable to the Gaussian radius with \ell_2 radius and this is my major concern. By norm equivalence, we have that \ell_2 \leq \ell_1 \leq \sqrt{n} \ell_2 where n is the dimension. That is to say that the radius computed with \ell_1 is larger than the \ell_2 in some cases by a square root of dimension. The authors can correct me on this if I'm wrong, but for a fair comparison in worst case sense the radius of Cohen et al. should be scaled by \sqrt{n}. In such a scenario, it is really difficult to understand when does it make sense to tackle such a smoothing technique as opposed to Gaussian smoothing.
I would not have asked the authors about such a question if the authors derived generic radius under \ell_p smoothing (which is difficult of course). To this end, I believe since the motivation is not clear nor the results are generic enough, I find the work incremental specifically after noting that the radius can be deduced from the work of Li et al. where the main contribution here is the tightness of the radius for a binary classifier.
Moreover, I believe the paper still requires some polishing in terms of writing and presentation.
Some more comments.
I believe the paper can benefit from some rewriting. Here is a list of things the authors can do to improve the paper.
1) Define what M is, page 3 "and it is easy to see that M is a mixed random variable". I believe the authors meant T(x).
2) The figures are hardly readable. For instance, the authors can perhaps increase the legend's font size in Figure 4. Also, the chosen colors are suboptimal, and perhaps the line width of the plots should be increased.
3) The section below Theorem 3 should be moved up to before Theorem 3 as this discusses the proof of Theorem 2. Once a Theorem is presented, the proof sketch should follow.
4) Experiments on the undefended classifier has to be in Figures 6 7 and 8.
5) Lastly, why are the comparisons between Cohen et al. and Lecuyer et al. in Figure 6 inconsistent with Figure 5 of Cohen et al.? |
ICLR | Title
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
Abstract
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed to solve the problem at hand and forward it to the next layers. In the standard form, a neural network is trained with gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. Thus at each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional ’between-layer’ feedback with additional ’within-layer’ feedback to encourage diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer’s overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, to enrich the data representation learned within the layer and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network in a supervised context and we prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an empirical study confirming that the proposed approach enhances the performance of neural networks.
1 INTRODUCTION
Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of P layers can be defined as follows:
f(x;W) = φ_P(W^P φ_{P−1}(· · · φ_2(W^2 φ_1(W^1 x)))),   (1)
where φ_i(·) is the element-wise activation function, e.g., ReLU or Sigmoid, of the i-th layer and W = {W^1, . . . , W^P} are the corresponding weights of the network. The parameters of f(x;W) are optimized by minimizing the empirical loss:
L̂(f) = (1/N) Σ_{i=1}^{N} l(f(x_i;W), y_i),   (2)
where l(·) is the loss function, and {x_i, y_i}_{i=1}^{N} are the training samples and their associated ground-truth labels. The loss is minimized using gradient descent-based optimization coupled with backpropagation.
However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on Double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter,
2019; Poggio et al., 2017) and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss.
Diversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in deep learning context, diversity of information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activation has not received much attention.
Inspired by the motivation of dropout to co-adapt neuron activation, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as Decov, has empirically been proven to alleviate overfitting and to improve the generalization ability of neural network, yet a theoretical analysis to prove this has so far been lacking.
In this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing ’between-layer’ feedback with additional ’within-layer’ feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al. (2015b), we provide a theoretical analysis showing that the within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting.
Our contributions in this paper are as follows:
• Methodologically, we propose a new approach to encourage the ’diversification’ of the layer-wise feature maps’ outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting the within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model.
• Theoretically, we analyse the effect the within-layer activation diversity on the generalization error bound of neural network. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper-bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that the within-layer activation diversity can help reduce the generalization error.
• Empirically, we show that the within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods.
2 WITHIN-LAYER ACTIVATION DIVERSITY
We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes the neurons that activate similarly. The loss function L̂(f) defined in equation 2 is augmented as follows:
L̂_aug(f) = L̂(f) + λ Σ_{i=1}^{P} J^i,   (3)
where J i expresses the overall pair-wise similarity of the neurons within the ith layer and λ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or multiple layers in a network. For simplicity, let us focus on a single layer.
Let φ_n^i(x_j) and φ_m^i(x_j) be the outputs of the n-th and m-th neurons in the i-th layer for the same input sample x_j. The similarity s_nm between the n-th and m-th neurons can be obtained as the average similarity measure of their outputs over N input samples. We use the radial basis function to express the similarity:
s_nm = (1/N) Σ_{j=1}^{N} exp( −γ ‖φ_n^i(x_j) − φ_m^i(x_j)‖² ),   (4)
where γ is a hyper-parameter. The similarity snm can be computed over the whole dataset or batchwise. Intuitively, if two neurons n andm have similar outputs for many samples, their corresponding similarity snm will be high. Otherwise, their similarity smn is small and they are considered “diverse”. Based on these pair-wise similarities, we propose three variants for the global diversity loss J i of the ith layer:
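For a layer with C neurons evaluated on a batch of N samples, the full similarity matrix can be computed with simple broadcasting. The sketch below is our own PyTorch-style illustration of Eqn. (4); the function name is ours.

import torch

def pairwise_similarity(activations, gamma):
    # Similarity matrix S with S[n, m] = mean_j exp(-gamma * (phi_n(x_j) - phi_m(x_j))^2).
    # activations: tensor of shape (N, C) holding the layer outputs for a batch of N samples.
    diff = activations.unsqueeze(2) - activations.unsqueeze(1)   # shape (N, C, C)
    return torch.exp(-gamma * diff.pow(2)).mean(dim=0)           # shape (C, C)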
• Direct: J^i = Σ_{n ≠ m} s_nm. In this variant, we model the global layer similarity directly
as the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations.
• Det: J i = −det(S), where S is a similarity matrix defined as Snm = snm. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of S measures the global diversity of the set. Geometrically, det(S) is the volume of the parallelepiped formed by vectors in the feature space associated with s. Vectors that result in a larger volume are considered to be more “diverse”. Thus, maximizing det(·) (minimizing −det(·)) encourages the diversity of the learned features.
• Logdet: J i = −logdet(S)1. This variant has the same motivation as the second one. We use logdet instead of det as logdet is a convex function over the positive definite matrix space.
It should be noted here that the first proposed variant, i.e., direct, similar to Decov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture the higher-order “diversity”, whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner.
Our newly proposed loss function defined in equation 3 has two terms. The first term is the classic loss function. It computes the loss with respect to the ground-truth. In the back-propagation, this feedback is back-propagated from the last layer to the first layer of the network. Thus, it can be considered as a between-layer feedback, whereas the second term is computed within a layer. From equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly on the parameters, i.e., weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with C neurons and a batch size of N, the additional computational cost is O(C²(N + 1)) for the direct variant and O(C³ + C²N) for both the determinant and log of the determinant variants.
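Given the similarity matrix from the sketch above, the three variants of the diversity loss can be written compactly. The following is our own illustration; the footnote below regularizes S with the identity for positive definiteness, and here we use a small εI for the same purpose.

import torch

def diversity_loss(similarity, variant="logdet", eps=1e-3):
    # Within-layer diversity loss J^i computed from the (C, C) similarity matrix.
    c = similarity.shape[0]
    if variant == "direct":
        off_diag = similarity - torch.diag(torch.diagonal(similarity))
        return off_diag.sum()                          # sum of pairwise similarities, n != m
    if variant == "det":
        return -torch.det(similarity + eps * torch.eye(c, device=similarity.device))
    if variant == "logdet":
        return -torch.logdet(similarity + eps * torch.eye(c, device=similarity.device))
    raise ValueError(variant)

# Usage inside a training step (lambda_div is the penalty coefficient of Eqn. (3)):
# loss = task_loss + lambda_div * diversity_loss(pairwise_similarity(hidden, gamma))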
3 GENERALIZATION ERROR ANALYSIS
In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in equation 2, and the expected risk defined as follows:
L(f) = E(x,y)∼Q[l(f(x), y)], (5)
where Q is the underlying distribution of the dataset. Let f∗ = argminf L(f) be the expected risk minimizer and f̂ = argminf L̂(f) be the empirical risk minimizer. We are interested in the estimation error, i.e., L(f∗)−L(f̂), defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples (Barron, 1993; Zhai & Wang, 2018).
1This is defined only if S is positive definite. It can be shown that in our case S is positive semi-definite. Thus, in practice we use a regularized version (S + I) to ensure the positive definiteness.
Several techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1. (Bartlett & Mendelson, 2002) For a given dataset with N samples D = {xi, yi}Ni=1 generated by a distribution Q and for a model space F : X → R with a single dimensional output, the empirical Rademacher complexityRN (F) of the set F is defined as follows:
R_N(F) = E_σ [ sup_{f∈F} (1/N) Σ_{i=1}^{N} σ_i f(x_i) ],   (6)
where the Rademacher variables σ = {σ1, · · · , σN} are independent uniform random variables in {−1, 1}.
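For intuition, the empirical Rademacher complexity of a finite function class can be approximated by sampling the σ vectors. The toy sketch below is our own illustration, representing each candidate function by its vector of outputs on the N samples.

import numpy as np

def empirical_rademacher(outputs, num_draws=2000, rng=None):
    # Monte Carlo estimate of R_N(F) for a finite class.
    # outputs: array of shape (num_functions, N); row k holds (f_k(x_1), ..., f_k(x_N)).
    rng = np.random.default_rng() if rng is None else rng
    _, n = outputs.shape
    total = 0.0
    for _ in range(num_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)       # Rademacher variables
        total += np.max(outputs @ sigma) / n          # sup over the finite class
    return total / num_draws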
In this work, we analyse the estimation error bound of a neural network using the Rademacher complexity and we are interested in the effect of the within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability τ, the distance between the output of each pair of neurons, (φn(x)−φm(x))2, is lower bounded by dmin for any input x. Note that this condition can be expressed in terms of the similarity s defined in equation 4: snm ≤ e(−γdmin) = smin for any two distinct neurons with the probability τ . Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With a probability of at least 1− δ
L(f̂) − L(f∗) ≤ 4R_N(A) + B √(2 log(2/δ)/N)   (7)
for B ≥ supx,y,f |l(f(x), y)|, whereRN (A) is the Rademacher complexity of the loss set A.
It upper-bounds the estimation error using the Rademacher complexity defined over the loss set and supx,y,f |l(f(x), y)|. Our analysis continues by seeking a tighter upper bound of this error and showing how the within-layer diversity, expressed with dmin, affects this upper bound. We start by deriving such an upper-bound for a simple network with one hidden layer trained for a regression task and then we extend it for a general multi-layer network and for different losses.
3.1 SINGLE HIDDEN-LAYER NETWORK
Here, we consider a simple neural network with one hidden-layer with M neurons and onedimensional output trained for a regression task. The full characterization of the setup can be summarized in the following assumptions: Assumptions 1.
• The activation function of the hidden layer, φ(t), is a Lφ-Lipschitz continuous function.
• The input vector x ∈ RD satisfies ||x||2 ≤ C1.
• The output scalar y ∈ R satisfies |y| ≤ C2.
• The weight matrix W = [w1,w2, · · · ,wM ] ∈ RD×M connecting the input to the hidden layer satisfies ||wm||2 ≤ C3.
• The weight vector v ∈ RM connecting the hidden-layer to the output neuron satisfies ||v||2 ≤ C4.
• The hypothesis class is F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)}.
• Loss function set is A = {l | l(f(x), y) = (1/2)|f(x) − y|²}.
• With a probability τ , for n 6= m, ||φn(x)− φm(x)||22 = ||φ(wTnx)− φ(wTmx)||22 ≥ dmin.
We recall the following two lemmas related to the estimation error and the Rademacher complexity: Lemma 3.3. (Bartlett & Mendelson, 2002) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. Then we have
RN (A) ≤ LgRN (F). (8) Lemma 3.4. (Xie et al., 2015b) Under Assumptions 1, the Rademacher complexity RN (F) of the hypothesis class F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)} can be upper-bounded as follows:
R_N(F) ≤ 2L_φ C_134 √M/√N + C_4|φ(0)| √M/√N,   (9)
where C134 = C1C3C4 and φ(0) is the output of the activation function at the origin.
Lemma 3.4 provides an upper-bound of the Rademacher complexity for the hypothesis class. In order to find an upper-bound for our estimation error, we start by deriving an upper bound for supx,f |f(x)|: Lemma 3.5. Under Assumptions 1, with a probability at least τQ, we have
sup_{x,f} |f(x)| ≤ √J,   (10)
where Q is equal to the number of neuron pairs defined by M neurons, i.e., Q = M(M−1)/2, J = C_4²(M C_5² + M(M − 1)(C_5² − d_min²/2)), and C_5 = L_φ C_1 C_3 + φ(0).
The proof can be found in Appendix 7.1. Note that in Lemma 3.5, we have expressed the upperbound of supx,f |f(x)| in terms of dmin. Using this bound, we can now find an upper-bound for supx,f,y |l(f(x), y)| in the following lemma: Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have
sup_{x,y,f} |l(f(x), y)| ≤ (√J + C_2)².   (11)
The proof can be found in Appendix 7.2. The main goal is to analyze the estimation error bound of the neural network and to see how its upper-bound is linked to the diversity, expressed by dmin, of the different neurons. The main result is presented in Theorem 3.7. Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have
L(f̂) − L(f∗) ≤ 8(√J + C_2)(2L_φ C_134 + C_4|φ(0)|) √M/√N + (√J + C_2)² √(2 log(2/δ)/N)   (12)
where C_134 = C_1 C_3 C_4, J = C_4²(M C_5² + M(M − 1)(C_5² − d_min²/2)), and C_5 = L_φ C_1 C_3 + φ(0).
The proof can be found in Appendix 7.3. Theorem 3.7 provides an upper-bound for the estimation error. We note that it is a decreasing function of dmin. Thus, we say that a higher dmin, i.e., more diverse activations, yields a lower estimation error bound. In other words, by promoting the withinlayer diversity, we can reduce the generalization error of neural networks. It should be also noted that our Theorem 3.7 has a similar form to Theorem 1 in (Xie et al., 2015b). However, the main difference is that Xie et al. analyse the estimation error with respect to the diversity of the weight vectors. Here, we consider the diversity between the outputs of the activations of the hidden neurons.
3.2 BINARY CLASSIFICATION
We now extend our analysis of the effect of the within-layer diversity on the generalization error in the case of a binary classification task, i.e., y ∈ {−1, 1}. The extensions of Theorem 3.7 in the case of a hinge loss and a logistic loss are presented in Theorems 3.8 and 3.9, respectively. Theorem 3.8. Using the hinge loss, we have with probability at least τQ(1− δ)
L(f̂)− L(f∗) ≤ 4 ( 2LφC134 + C4|φ(0)| )√M√ N + (1 + √ J ) √ 2 log(2/δ) N (13)
where C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) , and C5 = LφC1C3+φ(0).
Theorem 3.9. Using the logistic loss l(f(x), y) = log(1 + e^{−y f(x)}), we have with probability at least τ^Q(1 − δ)
L(f̂) − L(f∗) ≤ (4/(1 + e^{−√J})) (2L_φ C_134 + C_4|φ(0)|) √M/√N + log(1 + e^{√J}) √(2 log(2/δ)/N)   (14)
where C_134 = C_1 C_3 C_4, J = C_4²(M C_5² + M(M − 1)(C_5² − d_min²/2)), and C_5 = L_φ C_1 C_3 + φ(0).
The proofs are similar to Lemmas 7 and 8 in (Xie et al., 2015b). As we can see, for the classification task, the error bounds of the estimation error for the hinge and logistic losses are decreasing with respect to dmin. Thus, employing a diversity strategy can improve the generalization also for the binary classification task.
3.3 MULTI-LAYER NETWORKS
Here, we extend our result to networks with P (> 1) hidden layers. We assume that the pair-wise distances between the activations within layer p are lower-bounded by d_min^p with a probability τ^p. In this case, the hypothesis class can be defined recursively. In addition, we replace the fourth assumption in Assumptions 1 with ‖W^p‖_∞ ≤ C_3^p for every W^p, i.e., the weight matrix of the p-th layer. In this case, the main theorem is extended as follows:
Theorem 3.10. With probability of at least ∏_{p=0}^{P−1} (τ^p)^{Q^p} (1 − δ), we have
L(f̂) − L(f∗) ≤ 8(√J + C_2) ( ((2L_φ)^P C_1 C_3^0/√N) ∏_{p=0}^{P−1} √(M^p) C_3^p + (|φ(0)|/√N) Σ_{p=0}^{P−1} (2L_φ)^{P−1−p} ∏_{j=p}^{P−1} √(M^j) C_3^j ) + (√J + C_2)² √(2 log(2/δ)/N)   (15)
where Q^p is the number of neuron pairs in the p-th layer, defined as Q^p = M^p(M^p − 1)/2, and J^P is defined recursively using the following identities: J^0 = C_3^0 C_1 and J^p = M^p (C_3^p)² ( (M^p)² (L_φ C_3^{p−1} J^{p−1} + φ(0))² − M(M − 1)(d_min^p)²/2 ), for p = 1, . . . , P.
The proof can be found in Appendix 7.4. In Theorem 3.10, we see thatJ P is decreasing with respect to dpmin. Thus, we see that maximizing the within-layer diversity, we can reduce the estimation error of a multi-layer neural network.
3.4 MULTIPLE OUTPUTS
Finally, we consider the case of a neural network with a multi-dimensional output, i.e., y ∈ RD. In this case, we can extend Theorem 3.7 by decomposing the problem into D smaller problems and deriving the global error bound as the sum of the small D bounds. This yields the following two theorems: Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),
L(f̂) − L(f∗) ≤ 8D(√J + C_2)(2L_φ C_134 + C_4|φ(0)|) √M/√N + D(√J + C_2)² √(2 log(2/δ)/N)   (16)
where C_134 = C_1 C_3 C_4, J = C_4²(M C_5² + M(M − 1)(C_5² − d_min²/2)), and C_5 = L_φ C_1 C_3 + φ(0).
Theorem 3.12. For a multi-class classification task using the cross-entropy loss, we have with probability at least τ^Q(1 − δ),
L(f̂) − L(f∗) ≤ (D(D − 1)/(D − 1 + e^{−2√J})) (2L_φ C_134 + C_4|φ(0)|) √M/√N + log(1 + (D − 1)e^{2√J}) √(2 log(2/δ)/N)   (17)
where C_134 = C_1 C_3 C_4, J = C_4²(M C_5² + M(M − 1)(C_5² − d_min²/2)), and C_5 = L_φ C_1 C_3 + φ(0).
The proofs can be found in Appendix 7.5. Theorems 3.11 and 3.12 extend our result for the multidimensional regression and classification tasks, respectively. Both bounds are inversely proportional to the diversity factor dmin. We note that for the classification task, the upper-bound is exponentially decreasing with respect to dmin.
4 RELATED WORK
Diversity promoting strategies have been widely used in ensemble learning (Li et al., 2012; Yu et al., 2011), sampling (Derezinski et al., 2019; Bıyık et al., 2019; Gartrell et al., 2019), ranking (Yang et al.; Gan et al., 2020), and pruning by reducing redundancy (Kondo & Yamauchi, 2014; He et al., 2019; Singh et al., 2020; Lee et al., 2020). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters. Here, we present a brief overview of these regularizers. Based on the way diversity is defined, we can group these approaches into two categories. The first group considers regularizers that are based on the pairwise dissimilarity of components, i.e., the overall set of weights is diverse if every pair of weights is dissimilar. Given the weight vectors {w_m}_{m=1}^{M}, Yu et al. (2011) define the regularizer as Σ_{mn}(1 − θ_mn), where θ_mn represents the cosine similarity between w_m and w_n. Bao et al. (2013) proposed an incoherence score defined as −log( (1/(M(M−1))) Σ_{mn} β|θ_mn|^{1/β} ), where β is a positive
hyperparameter. Xie et al. (2015a; 2016) used mean(θmn) − var(θmn) to regularize Boltzmann machines. They theoretically analyzed its effect on the generalization error bounds in (Xie et al., 2015b) and extended it to kernel space in (Xie et al., 2017a). The second group of regularizers considers a more globalist view of diversity. For example, in (Malkin & Bilmes, 2009; 2008; Xie et al., 2017b), a weight regularization based on the determinant of the weights covariance is proposed and based on determinantal point process in (Kulesza & Taskar, 2012; Kwok & Adams, 2012).
Unlike the aforementioned methods which promote diversity on the weight level and similar to our method, Cogswell et al. (2016) proposed to enforce dissimilarity on the feature map outputs, i.e., on the activations. To this end, they proposed an additional loss based on the pairwise covariance of the activation outputs. Their additional loss, LDecov is defined as the squared sum of the non-diagonal elements of the global covariance matrix C:
L_Decov = (1/2)( ‖C‖_F² − ‖diag(C)‖_2² ),   (18)
where ||.||F is the Frobenius norm. Their approach, Decov, yielded superior empirical performance; however, it lacks theoretical proof. In this paper, we closed this gap and we showed theoretically how employing a diversity strategy on the network activations can indeed decrease the estimation error bound and improve the generalization of the model. Besides, we proposed variants of our approach which consider a global view of diversity.
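For comparison with our diversity losses, the Decov penalty of Eqn. (18) can be written as follows; this is a minimal sketch of our own that computes the batch covariance of the activations.

import torch

def decov_loss(activations):
    # Decov penalty: half the squared sum of the off-diagonal entries of the activation covariance.
    # activations: tensor of shape (N, C) holding the hidden activations for a batch.
    centered = activations - activations.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / activations.shape[0]          # (C, C) covariance matrix
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return 0.5 * off_diag.pow(2).sum()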
5 EXPERIMENTAL RESULTS
In this section, we present an empirical study of our approach in a regression context using Boston Housing price dataset (Dua & Graff, 2017) and in a classification context using CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We denote as Vanilla the model trained with no diversity protocol and as Decov the approach proposed in (Cogswell et al., 2016).
5.1 REGRESSION
For regression, we use the Boston Housing price dataset (Dua & Graff, 2017). It has 404 training samples and 102 test samples with 13 attributes each. We hold out the last 100 samples of the training set as a validation set for hyper-parameter tuning. The loss weight λ is chosen from {0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005} for both our approach and Decov (Cogswell et al., 2016). The parameter γ in the radial basis function is chosen from {0.00001, 0.0001, 0.01, 0.1, 1, 10, 100}. As a base model, we use a neural network composed of two fully connected hidden layers, each with 64 neurons. The additional loss is applied on top of both hidden layers.
We train for 80 epochs using stochastic gradient descent with a learning rate of 0.01 and the mean squared error loss. For hyperparameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: Sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013).
Table 1 reports the results in terms of the mean average error for the different approaches over the Boston Housing price dataset. First, we note that employing a diversification strategy (ours and Decov) boosts the results compared to the Vanilla approach for all types of activations. The three variants of our approach, i.e., the within-layer approach, consistently outperform the Decov loss except for the LeakyReLU where the latter outperforms our direct variant. Table 1 shows that the logdet variant of our approach yields the best performance for all three activation types.
5.2 CLASSIFICATION
For classification, we evaluate the performance of our approach on the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). They contain 60,000 32 × 32 images grouped into 10 and 100 distinct categories, respectively. We train on the 50,000 given training examples and test on the 10,000 specified test samples. We hold out the last 10,000 samples of the training set for validation. For the neural network model, we use an architecture composed of 3 convolutional layers. Each convolutional layer is composed of 32 3 × 3 filters followed by 2 × 2 max pooling. The flattened output of the convolutional layers is connected to a fully connected layer with 128 neurons and a softmax layer. The different additional losses, i.e., ours and Decov, are added only on top of the fully connected layer. The models are trained for 150 epochs using stochastic gradient descent with a learning rate of 0.01 and categorical cross-entropy loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013). All reported results are average performance over 4 trials with the standard deviation indicated alongside.
Tables 2 and 3 report the test error rates of the different approaches for both datasets. Compared to the Vanilla network, our within-layer diversity strategies consistently improve the performance of the model. For CIFAR10, the direct variant yields more than a 0.72% improvement for ReLU and a 2% improvement for the sigmoid activation. For the LeakyReLU case, the determinant variant achieves the lowest error rate. This is in accordance with the results on CIFAR100. Here, we note that our proposed approach outperforms both the Vanilla and the Decov models, especially in the sigmoid case. Compared to the Vanilla approach, we note that the model training time on CIFAR100 increases by 9% for the direct approach, by 36.1% for the determinant variant, and by 36.2% for the log of determinant variant.
6 CONCLUSIONS
In this paper, we proposed a new approach to encourage ‘diversification’ of the layer-wise feature map outputs in neural networks. The main motivation is that by promoting within-layer activation diversity, neurons within the same layer learn to capture mutually distinct patterns. We proposed an additional loss term that can be added on top of any fully-connected layer. This term complements
the traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback encouraging diversity of the activations. We theoretically proved that the proposed approach decreases the estimation error bound, and thus improves the generalization ability of neural networks. This analysis was further supported by experimental results showing that such a strategy can indeed improve the performance of neural networks in regression and classification tasks. Our future work includes extensive experimental analysis on the relationship between the distribution of the neurons output and generalization.
7 APPENDIX
In the following proofs, we use Lipschitz analysis. In particular, a function f : A → R, A ⊂ Rn, is said to be L-Lipschitz, if there exist a constant L ≥ 0, such that |f(a) − f(b)| ≤ L||a − b|| for every pair of points a, b ∈ A. Moreover:
• $\sup_{x \in A} f \le \sup(L\|x\| + f(0))$.
• If $f$ is continuous and differentiable, $L = \sup|f'(x)|$.
7.1 PROOF OF LEMMA 3.5
Lemma 3.5. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,f} |f(x)| \le \sqrt{\mathcal{J}}, \quad (19)$$
where $Q$ is equal to the number of neuron pairs defined by $M$ neurons, i.e., $Q = \frac{M(M-1)}{2}$, and $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$ and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof.
$$f^2(x) = \Big(\sum_{m=1}^{M} v_m\phi_m(x)\Big)^2 \le \Big(\sum_{m=1}^{M} \|v\|_\infty\phi_m(x)\Big)^2 \le \|v\|_\infty^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 \le C_4^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 = C_4^2\sum_{m,n}\phi_m(x)\phi_n(x) = C_4^2\Big(\sum_{m}\phi_m(x)^2 + \sum_{m\neq n}\phi_n(x)\phi_m(x)\Big) \quad (20)$$
We have $\sup_{w,x}\phi(x) < \sup(L_\phi|w^Tx| + \phi(0))$ because $\phi$ is $L_\phi$-Lipschitz. Thus, $\|\phi\|_\infty < L_\phi C_1C_3 + \phi(0) = C_5$. For the first term in equation 20, we have $\sum_m \phi_m(x)^2 < M(L_\phi C_1C_3 + \phi(0))^2 = MC_5^2$. The second term, using the identity $\phi_m(x)\phi_n(x) = \frac{1}{2}\big(\phi_m(x)^2 + \phi_n(x)^2 - (\phi_m(x)-\phi_n(x))^2\big)$, can be rewritten as
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) = \frac{1}{2}\sum_{m\neq n}\Big(\phi_m(x)^2 + \phi_n(x)^2 - \big(\phi_m(x)-\phi_n(x)\big)^2\Big). \quad (21)$$
In addition, we have with a probability $\tau$, $\|\phi_m(x)-\phi_n(x)\|^2 \ge d_{\min}$ for $m \neq n$. Thus, we have with a probability at least $\tau^Q$:
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) \le \frac{1}{2}\sum_{m\neq n}(2C_5^2 - d_{\min}^2) = M(M-1)(C_5^2 - d_{\min}^2/2). \quad (22)$$
Here $Q$ is equal to the number of neuron pairs defined by $M$ neurons, i.e., $Q = \frac{M(M-1)}{2}$. Putting everything back into equation 20, we have with a probability $\tau^Q$,
$$f^2(x) \le C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big) = \mathcal{J}. \quad (23)$$
Thus, with a probability $\tau^Q$,
$$\sup_{x,f}|f(x)| \le \sqrt{\sup_{x,f} f(x)^2} \le \sqrt{\mathcal{J}}. \quad (24)$$
7.2 PROOF OF LEMMA 3.6
Lemma 3.6. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,y,f}|l(f(x),y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \quad (25)$$
Proof. We have $\sup_{x,y,f}|f(x)-y| \le 2\sup_{x,y,f}(|f(x)|+|y|) = 2(\sqrt{\mathcal{J}} + C_2)$. Thus $\sup_{x,y,f}|l(f(x),y)| \le (\sqrt{\mathcal{J}} + C_2)^2$.
7.3 PROOF OF THEOREM 3.7
Theorem 3.7. Under Assumptions 1, with probability at least $\tau^Q(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (26)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Given that $l(\cdot)$ is $K$-Lipschitz with a constant $K = \sup_{x,y,f}|f(x)-y| \le 2(\sqrt{\mathcal{J}} + C_2)$, and using Lemma 3.3, we can show that $\mathcal{R}_N(\mathcal{A}) \le K\mathcal{R}_N(\mathcal{F}) \le 2(\sqrt{\mathcal{J}} + C_2)\mathcal{R}_N(\mathcal{F})$. For $\mathcal{R}_N(\mathcal{F})$, we use the bound found in Lemma 3.4. Using Lemmas 3.2 and 3.6 completes the proof.
7.4 PROOF OF THEOREM 3.10
Theorem 3.10. Under Assumptions 1, with probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}^P} + C_2)\left(\frac{(2L_\phi)^P C_1 C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\right) + \big(\sqrt{\mathcal{J}^P} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (27)$$
where $Q^p$ is the number of neuron pairs in the $p$th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\big(M^p(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0))^2 - M(M-1)d_{\min}^2/2\big)$, for $p = 1,\dots,P$.
Proof. Lemma 5 in (Xie et al., 2015b) provides an upper-bound for the hypothesis class. We denote by $v^p$ the outputs of the $p$th hidden layer before applying the activation function:
$$v^0 = [w_1^{0T}x, \dots, w_{M^0}^{0T}x] \quad (28)$$
$$v^p = \Big[\sum_{j=1}^{M^{p-1}} w_{j,1}^p\phi(v_j^{p-1}), \dots, \sum_{j=1}^{M^{p-1}} w_{j,M^p}^p\phi(v_j^{p-1})\Big] \quad (29)$$
$$v^p = [w_1^{pT}\phi^p, \dots, w_{M^p}^{pT}\phi^p], \quad (30)$$
where $\phi^p = [\phi(v_1^{p-1}), \cdots, \phi(v_{M^{p-1}}^{p-1})]$. We have
$$\|v^p\|_2^2 = \sum_{m=1}^{M^p}(w_m^{pT}\phi^p)^2 \quad (31)$$
and $w_m^{pT}\phi^p \le C_3^p\sum_n\phi_n^p$. Thus,
$$\|v^p\|_2^2 \le \sum_{m=1}^{M^p}\Big(C_3^p\sum_n\phi_n^p\Big)^2 = M^p(C_3^p)^2\Big(\sum_n\phi_n^p\Big)^2 = M^p(C_3^p)^2\sum_{mn}\phi_m^p\phi_n^p. \quad (32)$$
We use the same decomposition trick of $\phi_m^p\phi_n^p$ as in the proof of Lemma 3.5. We need to bound $\sup_x\phi^p$:
$$\sup_x\phi^p < \sup\big(L_\phi|w_j^{p-1T}v^{p-1}| + \phi(0)\big) < L_\phi\|W^{p-1}\|_\infty\|v^{p-1}\|_2^2 + \phi(0). \quad (33)$$
Thus, we have
$$\|v^p\|_2^2 \le M^p(C_3^p)^2\big(M^p(L_\phi C_3^{p-1}\|v^{p-1}\|_2^2 + \phi(0))^2 - M(M-1)d_{\min}^2/2\big) = \mathcal{J}^p. \quad (34)$$
Having found a recursive bound for $\|v^p\|_2^2$, we note that for $p = 0$ we have $\|v^0\|_2^2 \le \|W^0\|_\infty C_1 \le C_3^0C_1 = \mathcal{J}^0$. Thus,
$$\sup_{x, f^P\in\mathcal{F}^P}|f(x)| = \sup_{x, f^P\in\mathcal{F}^P}|v^P| \le \sqrt{\mathcal{J}^P}. \quad (35)$$
7.5 PROOFS OF THEOREMS 3.11 AND 3.12
Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (36)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. The squared loss $\|f(x) - y\|^2$ can be decomposed into $D$ terms $(f(x)_k - y_k)^2$. Using Theorem 3.7, we can derive the bound for each term.
Theorem 3.12. For a multiclass classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \quad (37)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Using Lemma 9 in (Xie et al., 2015b), we have $\sup_{f,x,y} l = \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)$ and $l$ is $\frac{D-1}{D-1+e^{-2\sqrt{\mathcal{J}}}}$-Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have
$$\mathcal{R}_n(\mathcal{A}) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\left(2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}\right). \quad (38)$$
| 1. What is the main contribution of the paper in terms of improving model performance?
2. What is the proposed method for encouraging within-layer activation diversity, and how does it help reduce generalization error?
3. How does the definition of within-layer diversity affect the distribution of the layer output on the unit ball?
4. Does the proposed method lead to an output similar to a 'binarized' output?
5. What additional experiments or ablation studies could be conducted to better understand the effectiveness of the proposed method? | Review | Review
In this paper, the authors propose a technique to encourage within-layer activation diversity and thereby improve model performance. Specifically, they design a within-layer loss that adds a penalty to similar neurons. They also show that encouraging within-layer diversity helps reduce the generalization error.
The paper is well-presented and the authors provide enough intuition as well as theoretical evidence for why the diversity would help. Although I did not check all the proofs, the results seem to be right.
The definition of within-layer diversity seems to be simply the concentration of the values of each individual neuron. How does that affect the distribution of the layer output on the unit ball? Will this lead to an output similar to a 'binarized' output?
The experiments seem insufficient to support the argument. Only very simple neural networks on two toy examples are provided. More ablation studies of the neuron/layer output distribution would help in better understanding this issue.
Overall I think this paper provides some insights to how the generalization error is related to the neuron outputs and vote for accept. |
ICLR | Title
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
Abstract
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed to solve the problem at hand and forward it to the next layers. In the standard form, a neural network is trained with gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. Thus at each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional ’between-layer’ feedback with additional ’within-layer’ feedback to encourage diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer’s overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, to enrich the data representation learned within the layer and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network in a supervised context and we prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an empirical study confirming that the proposed approach enhances the performance of neural networks.
1 INTRODUCTION
Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of P layers can be defined as follows:
$$f(x;\mathcal{W}) = \phi_P(W^P\phi_{P-1}(\cdots\phi_2(W^2\phi_1(W^1x)))), \quad (1)$$
where $\phi_i(\cdot)$ is the element-wise activation function, e.g., ReLU and Sigmoid, of the $i$th layer and $\mathcal{W} = \{W^1, \dots, W^P\}$ are the corresponding weights of the network. The parameters of $f(x;\mathcal{W})$ are optimized by minimizing the empirical loss:
$$\hat{L}(f) = \frac{1}{N}\sum_{i=1}^{N} l\big(f(x_i;\mathcal{W}), y_i\big), \quad (2)$$
where $l(\cdot)$ is the loss function, and $\{x_i, y_i\}_{i=1}^N$ are the training samples and their associated ground-truth labels. The loss is minimized using gradient descent-based optimization coupled with backpropagation.
However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on Double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter,
2019; Poggio et al., 2017) and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss.
Diversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in deep learning context, diversity of information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activation has not received much attention.
Inspired by the motivation of dropout to avoid the co-adaptation of neuron activations, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using the cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as Decov, has empirically been proven to alleviate overfitting and to improve the generalization ability of neural networks, yet a theoretical analysis to prove this has so far been lacking.
In this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing ’between-layer’ feedback with additional ’within-layer’ feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al. (2015b), we provide a theoretical analysis showing that the within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting.
Our contributions in this paper are as follows:
• Methodologically, we propose a new approach to encourage the ’diversification’ of the layer-wise feature maps’ outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting the within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model.
• Theoretically, we analyse the effect of the within-layer activation diversity on the generalization error bound of neural networks. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper-bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that the within-layer activation diversity can help reduce the generalization error.
• Empirically, we show that the within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods.
2 WITHIN-LAYER ACTIVATION DIVERSITY
We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes the neurons that activate similarly. The loss function L̂(f) defined in equation 2 is augmented as follows:
$$\hat{L}_{aug}(f) = \hat{L}(f) + \lambda\sum_{i=1}^{P}\mathcal{J}^i, \quad (3)$$
where J i expresses the overall pair-wise similarity of the neurons within the ith layer and λ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or multiple layers in a network. For simplicity, let us focus on a single layer.
Let $\phi_n^i(x_j)$ and $\phi_m^i(x_j)$ be the outputs of the $n$th and $m$th neurons in the $i$th layer for the same input sample $x_j$. The similarity $s_{nm}$ between the $n$th and $m$th neurons can be obtained as the average similarity measure of their outputs for $N$ input samples. We use the radial basis function to express the similarity:
$$s_{nm} = \frac{1}{N}\sum_{j=1}^{N}\exp\big(-\gamma\|\phi_n^i(x_j) - \phi_m^i(x_j)\|^2\big), \quad (4)$$
where $\gamma$ is a hyper-parameter. The similarity $s_{nm}$ can be computed over the whole dataset or batch-wise. Intuitively, if two neurons $n$ and $m$ have similar outputs for many samples, their corresponding similarity $s_{nm}$ will be high. Otherwise, their similarity $s_{nm}$ is small and they are considered “diverse”. Based on these pair-wise similarities, we propose three variants for the global diversity loss $\mathcal{J}^i$ of the $i$th layer:
• Direct: $\mathcal{J}^i = \sum_{n\neq m} s_{nm}$. In this variant, we model the global layer similarity directly as the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations.
• Det: $\mathcal{J}^i = -\det(S)$, where $S$ is a similarity matrix defined as $S_{nm} = s_{nm}$. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of $S$ measures the global diversity of the set. Geometrically, $\det(S)$ is the volume of the parallelepiped formed by vectors in the feature space associated with $s$. Vectors that result in a larger volume are considered to be more “diverse”. Thus, maximizing $\det(\cdot)$ (minimizing $-\det(\cdot)$) encourages the diversity of the learned features.
• Logdet: $\mathcal{J}^i = -\operatorname{logdet}(S)$.¹ This variant has the same motivation as the second one. We use logdet instead of det as logdet is a convex function over the positive definite matrix space.
It should be noted here that the first proposed variant, i.e., direct, similar to Decov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture higher-order “diversity”, whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner.
Our newly proposed loss function defined in equation 3 has two terms. The first term is the classic loss function. It computes the loss with respect to the ground-truth. In the back-propagation, this feedback is back-propagated from the last layer to the first layer of the network. Thus, it can be considered as a between-layer feedback, whereas the second term is computed within a layer. From equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly on the parameters, i.e., weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with $C$ neurons and a batch size of $N$, the additional computational cost is $O(C^2(N+1))$ for the direct variant and $O(C^3 + C^2N)$ for both the determinant and log-determinant variants.
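To illustrate how these three variants can be computed batch-wise from a layer's activations, the following is a short sketch assuming PyTorch; the function names are ours, and the identity shift in the logdet variant follows the footnote on positive definiteness.

```python
import torch

def similarity_matrix(h, gamma):
    """S[n, m] = mean_j exp(-gamma * (phi_n(x_j) - phi_m(x_j))^2), as in eq. (4)."""
    diff = h.unsqueeze(2) - h.unsqueeze(1)            # (batch, units, units)
    return torch.exp(-gamma * diff.pow(2)).mean(dim=0)

def diversity_loss(h, gamma=0.1, variant="direct"):
    S = similarity_matrix(h, gamma)
    if variant == "direct":                            # sum of off-diagonal similarities
        return S.sum() - torch.diagonal(S).sum()
    if variant == "det":                               # -det(S): larger volume = more diverse
        return -torch.det(S)
    if variant == "logdet":                            # -logdet(S + I), convex in S
        eye = torch.eye(S.shape[0], device=S.device)
        return -torch.logdet(S + eye)
    raise ValueError(variant)

# usage: total_loss = task_loss + lam * diversity_loss(hidden_activations, variant="logdet")
```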
3 GENERALIZATION ERROR ANALYSIS
In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in equation 2, and the expected risk defined as follows:
L(f) = E(x,y)∼Q[l(f(x), y)], (5)
where Q is the underlying distribution of the dataset. Let f∗ = argminf L(f) be the expected risk minimizer and f̂ = argminf L̂(f) be the empirical risk minimizer. We are interested in the estimation error, i.e., L(f∗)−L(f̂), defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples (Barron, 1993; Zhai & Wang, 2018).
¹This is defined only if $S$ is positive definite. It can be shown that in our case $S$ is positive semi-definite. Thus, in practice we use a regularized version ($S + I$) to ensure the positive definiteness.
Several techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1. (Bartlett & Mendelson, 2002) For a given dataset with N samples D = {xi, yi}Ni=1 generated by a distribution Q and for a model space F : X → R with a single dimensional output, the empirical Rademacher complexityRN (F) of the set F is defined as follows:
RN (F) = Eσ [ sup f∈F 1 N N∑ i=1 σif(xi) ] , (6)
where the Rademacher variables σ = {σ1, · · · , σN} are independent uniform random variables in {−1, 1}.
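For intuition, the empirical Rademacher complexity of a small finite hypothesis set can be estimated by Monte Carlo sampling of the σ variables. The sketch below (NumPy) uses an illustrative stand-in class of random linear predictors; it is not part of the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                        # N = 50 samples
hypotheses = [lambda x, w=w: x @ w for w in rng.normal(size=(200, 5))]

def empirical_rademacher(X, hypotheses, n_draws=1000):
    """Monte-Carlo estimate of E_sigma[ sup_f (1/N) sum_i sigma_i f(x_i) ] (Definition 3.1)."""
    N = X.shape[0]
    outputs = np.stack([h(X) for h in hypotheses])   # (|F|, N)
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=N)      # Rademacher variables
        total += np.max(outputs @ sigma) / N         # sup over f
    return total / n_draws

print(empirical_rademacher(X, hypotheses))
```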
In this work, we analyse the estimation error bound of a neural network using the Rademacher complexity and we are interested in the effect of the within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability τ, the distance between the output of each pair of neurons, (φn(x)−φm(x))2, is lower bounded by dmin for any input x. Note that this condition can be expressed in terms of the similarity s defined in equation 4: snm ≤ e(−γdmin) = smin for any two distinct neurons with the probability τ . Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With a probability of at least 1− δ
$$L(\hat{f}) - L(f^*) \le 4\mathcal{R}_N(\mathcal{A}) + B\sqrt{\frac{2\log(2/\delta)}{N}} \quad (7)$$
for B ≥ supx,y,f |l(f(x), y)|, whereRN (A) is the Rademacher complexity of the loss set A.
It upper-bounds the estimation error using the Rademacher complexity defined over the loss set and supx,y,f |l(f(x), y)|. Our analysis continues by seeking a tighter upper bound of this error and showing how the within-layer diversity, expressed with dmin, affects this upper bound. We start by deriving such an upper-bound for a simple network with one hidden layer trained for a regression task and then we extend it for a general multi-layer network and for different losses.
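In practice, the $d_{\min}$ appearing in this assumption can be probed empirically from a trained layer's activations. The following sketch (PyTorch assumed, function name ours) measures the smallest pairwise squared distance between neuron outputs over a batch; $s_{\min} = e^{-\gamma d_{\min}}$ then follows from the relation noted above.

```python
import torch

def empirical_dmin(h):
    """h: (batch, units) activations of one layer; returns the smallest squared
    distance (phi_n(x_j) - phi_m(x_j))^2 over all samples j and pairs n != m."""
    diff = h.unsqueeze(2) - h.unsqueeze(1)            # (batch, units, units)
    d = diff.pow(2)
    mask = torch.eye(h.shape[1], dtype=torch.bool)    # broadcast over the batch dimension
    return d.masked_fill(mask, float("inf")).min()    # ignore the n == m diagonal

# s_min = torch.exp(-gamma * empirical_dmin(h)) for a chosen gamma
```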
3.1 SINGLE HIDDEN-LAYER NETWORK
Here, we consider a simple neural network with one hidden-layer with M neurons and onedimensional output trained for a regression task. The full characterization of the setup can be summarized in the following assumptions: Assumptions 1.
• The activation function of the hidden layer, φ(t), is a Lφ-Lipschitz continuous function.
• The input vector x ∈ RD satisfies ||x||2 ≤ C1.
• The output scalar y ∈ R satisfies |y| ≤ C2.
• The weight matrix W = [w1,w2, · · · ,wM ] ∈ RD×M connecting the input to the hidden layer satisfies ||wm||2 ≤ C3.
• The weight vector v ∈ RM connecting the hidden-layer to the output neuron satisfies ||v||2 ≤ C4.
• The hypothesis class is $\mathcal{F} = \{f \mid f(x) = \sum_{m=1}^{M} v_m\phi_m(x) = \sum_{m=1}^{M} v_m\phi(w_m^Tx)\}$.
• The loss function set is $\mathcal{A} = \{l \mid l(f(x), y) = \frac{1}{2}|f(x) - y|^2\}$.
• With a probability $\tau$, for $n \neq m$, $\|\phi_n(x) - \phi_m(x)\|_2^2 = \|\phi(w_n^Tx) - \phi(w_m^Tx)\|_2^2 \ge d_{\min}$.
We recall the following two lemmas related to the estimation error and the Rademacher complexity: Lemma 3.3. (Bartlett & Mendelson, 2002) For $\mathcal{F} \in \mathbb{R}^X$, assume that $g: \mathbb{R} \rightarrow \mathbb{R}$ is a $L_g$-Lipschitz continuous function and $\mathcal{A} = \{g \circ f : f \in \mathcal{F}\}$. Then we have
$$\mathcal{R}_N(\mathcal{A}) \le L_g\mathcal{R}_N(\mathcal{F}). \quad (8)$$
Lemma 3.4. (Xie et al., 2015b) Under Assumptions 1, the Rademacher complexity $\mathcal{R}_N(\mathcal{F})$ of the hypothesis class $\mathcal{F} = \{f \mid f(x) = \sum_{m=1}^{M} v_m\phi_m(x) = \sum_{m=1}^{M} v_m\phi(w_m^Tx)\}$ can be upper-bounded as follows:
$$\mathcal{R}_N(\mathcal{F}) \le 2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}, \quad (9)$$
where $C_{134} = C_1C_3C_4$ and $\phi(0)$ is the output of the activation function at the origin.
Lemma 3.4 provides an upper-bound of the Rademacher complexity for the hypothesis class. In order to find an upper-bound for our estimation error, we start by deriving an upper bound for $\sup_{x,f}|f(x)|$: Lemma 3.5. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,f}|f(x)| \le \sqrt{\mathcal{J}}, \quad (10)$$
where $Q$ is equal to the number of neuron pairs defined by $M$ neurons, i.e., $Q = \frac{M(M-1)}{2}$, and $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$ and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.1. Note that in Lemma 3.5, we have expressed the upper-bound of $\sup_{x,f}|f(x)|$ in terms of $d_{\min}$. Using this bound, we can now find an upper-bound for $\sup_{x,f,y}|l(f(x),y)|$ in the following lemma: Lemma 3.6. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,y,f}|l(f(x),y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \quad (11)$$
The proof can be found in Appendix 7.2. The main goal is to analyze the estimation error bound of the neural network and to see how its upper-bound is linked to the diversity, expressed by $d_{\min}$, of the different neurons. The main result is presented in Theorem 3.7. Theorem 3.7. Under Assumptions 1, with probability at least $\tau^Q(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (12)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.3. Theorem 3.7 provides an upper-bound for the estimation error. We note that it is a decreasing function of dmin. Thus, we say that a higher dmin, i.e., more diverse activations, yields a lower estimation error bound. In other words, by promoting the withinlayer diversity, we can reduce the generalization error of neural networks. It should be also noted that our Theorem 3.7 has a similar form to Theorem 1 in (Xie et al., 2015b). However, the main difference is that Xie et al. analyse the estimation error with respect to the diversity of the weight vectors. Here, we consider the diversity between the outputs of the activations of the hidden neurons.
3.2 BINARY CLASSIFICATION
We now extend our analysis of the effect of the within-layer diversity on the generalization error in the case of a binary classification task, i.e., $y \in \{-1, 1\}$. The extensions of Theorem 3.7 in the case of a hinge loss and a logistic loss are presented in Theorems 3.8 and 3.9, respectively. Theorem 3.8. Using the hinge loss, we have with probability at least $\tau^Q(1-\delta)$
$$L(\hat{f}) - L(f^*) \le 4\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (1 + \sqrt{\mathcal{J}})\sqrt{\frac{2\log(2/\delta)}{N}} \quad (13)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Theorem 3.9. Using the logistic loss $l(f(x), y) = \log(1 + e^{-yf(x)})$, we have with probability at least $\tau^Q(1-\delta)$
$$L(\hat{f}) - L(f^*) \le \frac{4}{1 + e^{-\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log(1 + e^{\sqrt{\mathcal{J}}})\sqrt{\frac{2\log(2/\delta)}{N}} \quad (14)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs are similar to Lemmas 7 and 8 in (Xie et al., 2015b). As we can see, for the classification task, the error bounds of the estimation error for the hinge and logistic losses are decreasing with respect to dmin. Thus, employing a diversity strategy can improve the generalization also for the binary classification task.
3.3 MULTI-LAYER NETWORKS
Here, we extend our result for networks with $P$ ($>1$) hidden layers. We assume that the pair-wise distances between the activations within layer $p$ are lower-bounded by $d_{\min}^p$ with a probability $\tau^p$. In this case, the hypothesis class can be defined recursively. In addition, we replace the fourth assumption in Assumptions 1 with: $\|W^p\|_\infty \le C_3^p$ for every $W^p$, i.e., the weight matrix of the $p$-th layer. In this case, the main theorem is extended as follows:
Theorem 3.10. With probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}^P} + C_2)\left(\frac{(2L_\phi)^P C_1 C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\right) + \big(\sqrt{\mathcal{J}^P} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (15)$$
where $Q^p$ is the number of neuron pairs in the $p$th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\big(M^p(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0))^2 - M(M-1)(d_{\min}^p)^2/2\big)$, for $p = 1,\dots,P$.
The proof can be found in Appendix 7.4. In Theorem 3.10, we see that $\mathcal{J}^P$ is decreasing with respect to $d_{\min}^p$. Thus, we see that by maximizing the within-layer diversity, we can reduce the estimation error of a multi-layer neural network.
3.4 MULTIPLE OUTPUTS
Finally, we consider the case of a neural network with a multi-dimensional output, i.e., $y \in \mathbb{R}^D$. In this case, we can extend Theorem 3.7 by decomposing the problem into $D$ smaller problems and deriving the global error bound as the sum of the $D$ smaller bounds. This yields the following two theorems: Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (16)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$ and $C_5 = L_\phi C_1C_3 + \phi(0)$. Theorem 3.12. For a multi-class classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \quad (17)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$ and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs can be found in Appendix 7.5. Theorems 3.11 and 3.12 extend our result for the multidimensional regression and classification tasks, respectively. Both bounds are inversely proportional to the diversity factor dmin. We note that for the classification task, the upper-bound is exponentially decreasing with respect to dmin.
4 RELATED WORK
Diversity promoting strategies have been widely used in ensemble learning (Li et al., 2012; Yu et al., 2011), sampling (Derezinski et al., 2019; Bıyık et al., 2019; Gartrell et al., 2019), ranking (Yang et al.; Gan et al., 2020), and pruning by reducing redundancy (Kondo & Yamauchi, 2014; He et al., 2019; Singh et al., 2020; Lee et al., 2020). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters. Here, we present a brief overview of these regularizers. Based on the way diversity is defined, we can group these approaches into two categories. The first group considers the regularizers that are based on the pairwise dissimilarity of components, i.e., the overall set of weights is diverse if every pair of weights is dissimilar. Given the weight vectors $\{w_m\}_{m=1}^M$, Yu et al. (2011) define the regularizer as $\sum_{mn}(1 - \theta_{mn})$, where $\theta_{mn}$ represents the cosine similarity between $w_m$ and $w_n$. Bao et al. (2013) proposed an incoherence score defined as $-\log\big(\frac{1}{M(M-1)}\sum_{mn}\beta|\theta_{mn}|^{1/\beta}\big)$, where $\beta$ is a positive hyperparameter. Xie et al. (2015a; 2016) used $\mathrm{mean}(\theta_{mn}) - \mathrm{var}(\theta_{mn})$ to regularize Boltzmann machines. They theoretically analyzed its effect on the generalization error bounds in (Xie et al., 2015b) and extended it to kernel space in (Xie et al., 2017a). The second group of regularizers considers a more global view of diversity. For example, in (Malkin & Bilmes, 2009; 2008; Xie et al., 2017b), a weight regularization based on the determinant of the weights' covariance is proposed, and one based on determinantal point processes in (Kulesza & Taskar, 2012; Kwok & Adams, 2012).
Unlike the aforementioned methods which promote diversity on the weight level and similar to our method, Cogswell et al. (2016) proposed to enforce dissimilarity on the feature map outputs, i.e., on the activations. To this end, they proposed an additional loss based on the pairwise covariance of the activation outputs. Their additional loss, LDecov is defined as the squared sum of the non-diagonal elements of the global covariance matrix C:
$$L_{Decov} = \frac{1}{2}\big(\|C\|_F^2 - \|\mathrm{diag}(C)\|_2^2\big), \quad (18)$$
where ||.||F is the Frobenius norm. Their approach, Decov, yielded superior empirical performance; however, it lacks theoretical proof. In this paper, we closed this gap and we showed theoretically how employing a diversity strategy on the network activations can indeed decrease the estimation error bound and improve the generalization of the model. Besides, we proposed variants of our approach which consider a global view of diversity.
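For comparison with our penalties, a batch-wise sketch of the DeCov loss in equation 18 is given below (PyTorch assumed; the function name is ours).

```python
import torch

def decov_loss(h):
    """h: (batch, units) activations of one layer.
    Returns 0.5 * (||C||_F^2 - ||diag(C)||_2^2) for the batch covariance C."""
    centered = h - h.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / h.shape[0]        # (units, units) covariance matrix C
    frob_sq = (cov ** 2).sum()
    diag_sq = (torch.diagonal(cov) ** 2).sum()
    return 0.5 * (frob_sq - diag_sq)
```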
5 EXPERIMENTAL RESULTS
In this section, we present an empirical study of our approach in a regression context using Boston Housing price dataset (Dua & Graff, 2017) and in a classification context using CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We denote as Vanilla the model trained with no diversity protocol and as Decov the approach proposed in (Cogswell et al., 2016).
5.1 REGRESSION
For regression, we use the Boston Housing price dataset (Dua & Graff, 2017). It has 404 training samples and 102 test samples with 13 attributes each. We hold out the last 100 samples of the training set as a validation set for hyper-parameter tuning. The loss weight λ is chosen from {0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005} for both our approach and Decov (Cogswell et al., 2016). Parameter γ in the radial basis function is chosen from {0.00001, 0.0001, 0.01, 0.1, 1, 10, 100}. As a base model, we use a neural network composed of two fully connected hidden layers, each with 64 neurons. The additional loss is applied on top of both hidden layers.
We train for 80 epochs using stochastic gradient descent with a learning rate of 0.01 and the mean squared error loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: Sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013).
Table 1 reports the results in terms of the mean average error for the different approaches over the Boston Housing price dataset. First, we note that employing a diversification strategy (ours and Decov) boosts the results compared to the Vanilla approach for all types of activations. The three variants of our approach, i.e., the within-layer approach, consistently outperform the Decov loss except for the LeakyReLU where the latter outperforms our direct variant. Table 1 shows that the logdet variant of our approach yields the best performance for all three activation types.
5.2 CLASSIFICATION
For classification, we evaluate the performance of our approach on the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). They contain 60,000 32 × 32 images grouped into 10 and 100 distinct categories, respectively. We train on the 50,000 given training examples and test on the 10,000 specified test samples. We hold out the last 10,000 samples of the training set for validation. For the neural network model, we use an architecture composed of 3 convolutional layers. Each convolutional layer is composed of 32 3 × 3 filters followed by 2 × 2 max pooling. The flattened output of the convolutional layers is connected to a fully connected layer with 128 neurons and a softmax layer. The different additional losses, i.e., ours and Decov, are added only on top of the fully connected layer. The models are trained for 150 epochs using stochastic gradient descent with a learning rate of 0.01 and the categorical cross-entropy loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013). All reported results are average performance over 4 trials with the standard deviation indicated alongside.
Tables 2 and 3 report the test error rates of the different approaches for both datasets. Compared to the Vanilla network, our within-layer diversity strategies consistently improve the performance of the model. For CIFAR10, the direct variant yields more than a 0.72% improvement for ReLU and a 2% improvement for the sigmoid activation. For the LeakyReLU case, the determinant variant achieves the lowest error rate. This is in accordance with the results on CIFAR100. Here, we note that our proposed approach outperforms both the Vanilla and the Decov models, especially in the sigmoid case. Compared to the Vanilla approach, we note that the model training time on CIFAR100 increases by 9% for the direct approach, by 36.1% for the determinant variant, and by 36.2% for the log of determinant variant.
6 CONCLUSIONS
In this paper, we proposed a new approach to encourage ‘diversification’ of the layer-wise feature map outputs in neural networks. The main motivation is that by promoting within-layer activation diversity, neurons within the same layer learn to capture mutually distinct patterns. We proposed an additional loss term that can be added on top of any fully-connected layer. This term complements
the traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback encouraging diversity of the activations. We theoretically proved that the proposed approach decreases the estimation error bound, and thus improves the generalization ability of neural networks. This analysis was further supported by experimental results showing that such a strategy can indeed improve the performance of neural networks in regression and classification tasks. Our future work includes extensive experimental analysis on the relationship between the distribution of the neurons output and generalization.
7 APPENDIX
In the following proofs, we use Lipschitz analysis. In particular, a function f : A → R, A ⊂ Rn, is said to be L-Lipschitz, if there exist a constant L ≥ 0, such that |f(a) − f(b)| ≤ L||a − b|| for every pair of points a, b ∈ A. Moreover:
• $\sup_{x \in A} f \le \sup(L\|x\| + f(0))$.
• If $f$ is continuous and differentiable, $L = \sup|f'(x)|$.
7.1 PROOF OF LEMMA 3.5
Lemma 3.5. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,f} |f(x)| \le \sqrt{\mathcal{J}}, \quad (19)$$
where $Q$ is equal to the number of neuron pairs defined by $M$ neurons, i.e., $Q = \frac{M(M-1)}{2}$, and $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$ and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof.
$$f^2(x) = \Big(\sum_{m=1}^{M} v_m\phi_m(x)\Big)^2 \le \Big(\sum_{m=1}^{M} \|v\|_\infty\phi_m(x)\Big)^2 \le \|v\|_\infty^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 \le C_4^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 = C_4^2\sum_{m,n}\phi_m(x)\phi_n(x) = C_4^2\Big(\sum_{m}\phi_m(x)^2 + \sum_{m\neq n}\phi_n(x)\phi_m(x)\Big) \quad (20)$$
We have $\sup_{w,x}\phi(x) < \sup(L_\phi|w^Tx| + \phi(0))$ because $\phi$ is $L_\phi$-Lipschitz. Thus, $\|\phi\|_\infty < L_\phi C_1C_3 + \phi(0) = C_5$. For the first term in equation 20, we have $\sum_m \phi_m(x)^2 < M(L_\phi C_1C_3 + \phi(0))^2 = MC_5^2$. The second term, using the identity $\phi_m(x)\phi_n(x) = \frac{1}{2}\big(\phi_m(x)^2 + \phi_n(x)^2 - (\phi_m(x)-\phi_n(x))^2\big)$, can be rewritten as
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) = \frac{1}{2}\sum_{m\neq n}\Big(\phi_m(x)^2 + \phi_n(x)^2 - \big(\phi_m(x)-\phi_n(x)\big)^2\Big). \quad (21)$$
In addition, we have with a probability $\tau$, $\|\phi_m(x)-\phi_n(x)\|^2 \ge d_{\min}$ for $m \neq n$. Thus, we have with a probability at least $\tau^Q$:
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) \le \frac{1}{2}\sum_{m\neq n}(2C_5^2 - d_{\min}^2) = M(M-1)(C_5^2 - d_{\min}^2/2). \quad (22)$$
Here $Q$ is equal to the number of neuron pairs defined by $M$ neurons, i.e., $Q = \frac{M(M-1)}{2}$. Putting everything back into equation 20, we have with a probability $\tau^Q$,
$$f^2(x) \le C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big) = \mathcal{J}. \quad (23)$$
Thus, with a probability $\tau^Q$,
$$\sup_{x,f}|f(x)| \le \sqrt{\sup_{x,f} f(x)^2} \le \sqrt{\mathcal{J}}. \quad (24)$$
7.2 PROOF OF LEMMA 3.6
Lemma 3.6. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,y,f}|l(f(x),y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \quad (25)$$
Proof. We have $\sup_{x,y,f}|f(x)-y| \le 2\sup_{x,y,f}(|f(x)|+|y|) = 2(\sqrt{\mathcal{J}} + C_2)$. Thus $\sup_{x,y,f}|l(f(x),y)| \le (\sqrt{\mathcal{J}} + C_2)^2$.
7.3 PROOF OF THEOREM 3.7
Theorem 3.7. Under Assumptions 1, with probability at least $\tau^Q(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (26)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Given that $l(\cdot)$ is $K$-Lipschitz with a constant $K = \sup_{x,y,f}|f(x)-y| \le 2(\sqrt{\mathcal{J}} + C_2)$, and using Lemma 3.3, we can show that $\mathcal{R}_N(\mathcal{A}) \le K\mathcal{R}_N(\mathcal{F}) \le 2(\sqrt{\mathcal{J}} + C_2)\mathcal{R}_N(\mathcal{F})$. For $\mathcal{R}_N(\mathcal{F})$, we use the bound found in Lemma 3.4. Using Lemmas 3.2 and 3.6 completes the proof.
7.4 PROOF OF THEOREM 3.10
Theorem 3.10. Under Assumptions 1, with probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}^P} + C_2)\left(\frac{(2L_\phi)^P C_1 C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\right) + \big(\sqrt{\mathcal{J}^P} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (27)$$
where $Q^p$ is the number of neuron pairs in the $p$th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\big(M^p(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0))^2 - M(M-1)d_{\min}^2/2\big)$, for $p = 1,\dots,P$.
Proof. Lemma 5 in (Xie et al., 2015b) provides an upper-bound for the hypothesis class. We denote by $v^p$ the outputs of the $p$th hidden layer before applying the activation function:
$$v^0 = [w_1^{0T}x, \dots, w_{M^0}^{0T}x] \quad (28)$$
$$v^p = \Big[\sum_{j=1}^{M^{p-1}} w_{j,1}^p\phi(v_j^{p-1}), \dots, \sum_{j=1}^{M^{p-1}} w_{j,M^p}^p\phi(v_j^{p-1})\Big] \quad (29)$$
$$v^p = [w_1^{pT}\phi^p, \dots, w_{M^p}^{pT}\phi^p], \quad (30)$$
where $\phi^p = [\phi(v_1^{p-1}), \cdots, \phi(v_{M^{p-1}}^{p-1})]$. We have
$$\|v^p\|_2^2 = \sum_{m=1}^{M^p}(w_m^{pT}\phi^p)^2 \quad (31)$$
and $w_m^{pT}\phi^p \le C_3^p\sum_n\phi_n^p$. Thus,
$$\|v^p\|_2^2 \le \sum_{m=1}^{M^p}\Big(C_3^p\sum_n\phi_n^p\Big)^2 = M^p(C_3^p)^2\Big(\sum_n\phi_n^p\Big)^2 = M^p(C_3^p)^2\sum_{mn}\phi_m^p\phi_n^p. \quad (32)$$
We use the same decomposition trick of $\phi_m^p\phi_n^p$ as in the proof of Lemma 3.5. We need to bound $\sup_x\phi^p$:
$$\sup_x\phi^p < \sup\big(L_\phi|w_j^{p-1T}v^{p-1}| + \phi(0)\big) < L_\phi\|W^{p-1}\|_\infty\|v^{p-1}\|_2^2 + \phi(0). \quad (33)$$
Thus, we have
$$\|v^p\|_2^2 \le M^p(C_3^p)^2\big(M^p(L_\phi C_3^{p-1}\|v^{p-1}\|_2^2 + \phi(0))^2 - M(M-1)d_{\min}^2/2\big) = \mathcal{J}^p. \quad (34)$$
Having found a recursive bound for $\|v^p\|_2^2$, we note that for $p = 0$ we have $\|v^0\|_2^2 \le \|W^0\|_\infty C_1 \le C_3^0C_1 = \mathcal{J}^0$. Thus,
$$\sup_{x, f^P\in\mathcal{F}^P}|f(x)| = \sup_{x, f^P\in\mathcal{F}^P}|v^P| \le \sqrt{\mathcal{J}^P}. \quad (35)$$
7.5 PROOFS OF THEOREMS 3.11 AND 3.12
Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \quad (36)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. The squared loss $\|f(x) - y\|^2$ can be decomposed into $D$ terms $(f(x)_k - y_k)^2$. Using Theorem 3.7, we can derive the bound for each term.
Theorem 3.12. For a multiclass classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \quad (37)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{\min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Using Lemma 9 in (Xie et al., 2015b), we have $\sup_{f,x,y} l = \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)$ and $l$ is $\frac{D-1}{D-1+e^{-2\sqrt{\mathcal{J}}}}$-Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have
$$\mathcal{R}_n(\mathcal{A}) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\left(2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}\right). \quad (38)$$
| 1. What is the main contribution of the paper, and how does it extend previous work?
2. What are the strengths and weaknesses of the proposed idea, particularly regarding its ability to improve generalization performance?
3. How does the reviewer assess the thoroughness and rigor of the experiments presented in the paper?
4. Are there any concerns or questions about the technical details of the proposal, such as the choice of regularization terms or the computation cost?
5. How does the reviewer evaluate the overall impact and novelty of the paper's content? | Review | Review
This paper proposes adding regularization terms to encourage diversity of the layer outputs in order to improve the generalization performance. The proposed idea is an extension of Cogswell's work with different regularization terms. In addition, the authors performed detailed generalization analysis based on the Rademacher complexity. The appearance of the term related to the layer output diversity in the generalization bound provides theoretical support for the proposed idea.
The main weakness of this paper, in my humble opinion, is the lack of important details or rigor in the experiments presented. For example, the authors didn't mention how the hyperparameter selection was conducted, what optimizer (and its parameters) was used, how many runs per result and the confidence interval, whether any test was done to establish statistical significance, why state-of-the-art architecture was not used for the image classification tasks, etc. Without these important details and rigorous comparison, it's hard to have high confidence in the reproducibility of the results.
Details:
Intro section. The line of work in "double descent" shows that overparameterization doesn't necessarily lead to overfitting. For completeness, it'll be good to mention this line of work and qualify the claim on overfitting.
End of section 2. The authors claim that the proposed diversity term induces "within-layer" feedback. The regularization term is computed on the outputs of a layer, which do depend on the parameters of the lower layers. So when backpropagation happens, it will affect the parameters of the lower layers. Therefore, "within-layer" feedback doesn't sound accurate to me.
Section 3.1, last bullet point. Should τ be introduced here? Otherwise, where does the τ later used in Lemma 3.5, Lemma 3.6 and Theorem 3.7 come from?
Section 5. The proposed regularization terms don't seem cheap to compute for large networks with wide layers. It'll be helpful to measure the training cost increase. |
ICLR | Title
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
Abstract
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed to solve the problem at hand and forward it to the next layers. In the standard form, a neural network is trained with gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. Thus at each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional ’between-layer’ feedback with additional ’within-layer’ feedback to encourage diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer’s overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, to enrich the data representation learned within the layer and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network in a supervised context and we prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an empirical study confirming that the proposed approach enhances the performance of neural networks.
1 INTRODUCTION
Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of P layers can be defined as follows:
$$f(x;\mathcal{W}) = \phi_P(W^P\phi_{P-1}(\cdots\phi_2(W^2\phi_1(W^1x)))), \quad (1)$$
where $\phi_i(\cdot)$ is the element-wise activation function, e.g., ReLU and Sigmoid, of the $i$th layer and $\mathcal{W} = \{W^1, \dots, W^P\}$ are the corresponding weights of the network. The parameters of $f(x;\mathcal{W})$ are optimized by minimizing the empirical loss:
$$\hat{L}(f) = \frac{1}{N}\sum_{i=1}^{N} l\big(f(x_i;\mathcal{W}), y_i\big), \quad (2)$$
where $l(\cdot)$ is the loss function, and $\{x_i, y_i\}_{i=1}^N$ are the training samples and their associated ground-truth labels. The loss is minimized using gradient descent-based optimization coupled with backpropagation.
However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on Double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter,
2019; Poggio et al., 2017) and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss.
Diversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in deep learning context, diversity of information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activation has not received much attention.
Inspired by the motivation of dropout to avoid the co-adaptation of neuron activations, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using the cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as Decov, has empirically been proven to alleviate overfitting and to improve the generalization ability of neural networks, yet a theoretical analysis to prove this has so far been lacking.
In this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing ’between-layer’ feedback with additional ’within-layer’ feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al. (2015b), we provide a theoretical analysis showing that the within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting.
Our contributions in this paper are as follows:
• Methodologically, we propose a new approach to encourage the ’diversification’ of the layer-wise feature maps’ outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting the within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model.
• Theoretically, we analyse the effect of the within-layer activation diversity on the generalization error bound of neural networks. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper-bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that the within-layer activation diversity can help reduce the generalization error.
• Empirically, we show that the within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods.
2 WITHIN-LAYER ACTIVATION DIVERSITY
We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes the neurons that activate similarly. The loss function L̂(f) defined in equation 2 is augmented as follows:
$$\hat{L}_{aug}(f) = \hat{L}(f) + \lambda\sum_{i=1}^{P}\mathcal{J}^i, \quad (3)$$
where J i expresses the overall pair-wise similarity of the neurons within the ith layer and λ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or multiple layers in a network. For simplicity, let us focus on a single layer.
Let $\phi_n^i(x_j)$ and $\phi_m^i(x_j)$ be the outputs of the $n$th and $m$th neurons in the $i$th layer for the same input sample $x_j$. The similarity $s_{nm}$ between the $n$th and $m$th neurons can be obtained as the average similarity measure of their outputs for $N$ input samples. We use the radial basis function to express the similarity:
$$s_{nm} = \frac{1}{N}\sum_{j=1}^{N}\exp\big(-\gamma\|\phi_n^i(x_j) - \phi_m^i(x_j)\|^2\big), \quad (4)$$
where $\gamma$ is a hyper-parameter. The similarity $s_{nm}$ can be computed over the whole dataset or batch-wise. Intuitively, if two neurons $n$ and $m$ have similar outputs for many samples, their corresponding similarity $s_{nm}$ will be high. Otherwise, their similarity $s_{nm}$ is small and they are considered “diverse”. Based on these pair-wise similarities, we propose three variants for the global diversity loss $\mathcal{J}^i$ of the $i$th layer:
• Direct: $\mathcal{J}^i = \sum_{n\neq m} s_{nm}$. In this variant, we model the global layer similarity directly as the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations.
• Det: $\mathcal{J}^i = -\det(S)$, where $S$ is a similarity matrix defined as $S_{nm} = s_{nm}$. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of $S$ measures the global diversity of the set. Geometrically, $\det(S)$ is the volume of the parallelepiped formed by vectors in the feature space associated with $s$. Vectors that result in a larger volume are considered to be more “diverse”. Thus, maximizing $\det(\cdot)$ (minimizing $-\det(\cdot)$) encourages the diversity of the learned features.
• Logdet: $\mathcal{J}^i = -\operatorname{logdet}(S)$.¹ This variant has the same motivation as the second one. We use logdet instead of det as logdet is a convex function over the positive definite matrix space.
It should be noted here that the first proposed variant, i.e., direct, similar to Decov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture higher-order “diversity”, whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner.
Our newly proposed loss function defined in equation 3 has two terms. The first term is the classic loss function. It computes the loss with respect to the ground-truth. In the back-propagation, this feedback is back-propagated from the last layer to the first layer of the network. Thus, it can be considered as a between-layer feedback, whereas the second term is computed within a layer. From equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly on the parameters, i.e., weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with $C$ neurons and a batch size of $N$, the additional computational cost is $O(C^2(N+1))$ for the direct variant and $O(C^3 + C^2N)$ for both the determinant and log-determinant variants.
3 GENERALIZATION ERROR ANALYSIS
In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in equation 2, and the expected risk defined as follows:
L(f) = E(x,y)∼Q[l(f(x), y)], (5)
where Q is the underlying distribution of the dataset. Let f∗ = argminf L(f) be the expected risk minimizer and f̂ = argminf L̂(f) be the empirical risk minimizer. We are interested in the estimation error, i.e., L(f∗)−L(f̂), defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples (Barron, 1993; Zhai & Wang, 2018).
¹This is defined only if $S$ is positive definite. It can be shown that in our case $S$ is positive semi-definite. Thus, in practice we use a regularized version ($S + I$) to ensure the positive definiteness.
Several techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1. (Bartlett & Mendelson, 2002) For a given dataset with N samples D = {xi, yi}Ni=1 generated by a distribution Q and for a model space F : X → R with a single dimensional output, the empirical Rademacher complexityRN (F) of the set F is defined as follows:
RN (F) = Eσ [ sup f∈F 1 N N∑ i=1 σif(xi) ] , (6)
where the Rademacher variables σ = {σ1, · · · , σN} are independent uniform random variables in {−1, 1}.
In this work, we analyse the estimation error bound of a neural network using the Rademacher complexity and we are interested in the effect of the within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability τ, the distance between the output of each pair of neurons, (φn(x)−φm(x))2, is lower bounded by dmin for any input x. Note that this condition can be expressed in terms of the similarity s defined in equation 4: snm ≤ e(−γdmin) = smin for any two distinct neurons with the probability τ . Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With a probability of at least 1− δ
$$L(\hat{f}) - L(f^*) \le 4\mathcal{R}_N(\mathcal{A}) + B\sqrt{\frac{2\log(2/\delta)}{N}} \quad (7)$$
for B ≥ supx,y,f |l(f(x), y)|, whereRN (A) is the Rademacher complexity of the loss set A.
It upper-bounds the estimation error using the Rademacher complexity defined over the loss set and supx,y,f |l(f(x), y)|. Our analysis continues by seeking a tighter upper bound of this error and showing how the within-layer diversity, expressed with dmin, affects this upper bound. We start by deriving such an upper-bound for a simple network with one hidden layer trained for a regression task and then we extend it for a general multi-layer network and for different losses.
3.1 SINGLE HIDDEN-LAYER NETWORK
Here, we consider a simple neural network with one hidden layer with M neurons and a one-dimensional output, trained for a regression task. The full characterization of the setup can be summarized in the following assumptions: Assumptions 1.
• The activation function of the hidden layer, φ(t), is a Lφ-Lipschitz continuous function.
• The input vector x ∈ RD satisfies ||x||2 ≤ C1.
• The output scalar y ∈ R satisfies |y| ≤ C2.
• The weight matrix W = [w1,w2, · · · ,wM ] ∈ RD×M connecting the input to the hidden layer satisfies ||wm||2 ≤ C3.
• The weight vector v ∈ RM connecting the hidden-layer to the output neuron satisfies ||v||2 ≤ C4.
• The hypothesis class is $\mathcal{F} = \{f \,|\, f(x) = \sum_{m=1}^{M} v_m\phi_m(x) = \sum_{m=1}^{M} v_m\phi(w_m^Tx)\}$.
• The loss function set is $A = \{l \,|\, l(f(x), y) = \frac{1}{2}|f(x) - y|^2\}$.
• With a probability τ, for $n \neq m$, $\|\phi_n(x) - \phi_m(x)\|_2^2 = \|\phi(w_n^Tx) - \phi(w_m^Tx)\|_2^2 \ge d_{min}$.
We recall the following two lemmas related to the estimation error and the Rademacher complexity: Lemma 3.3. (Bartlett & Mendelson, 2002) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. Then we have
RN (A) ≤ LgRN (F). (8) Lemma 3.4. (Xie et al., 2015b) Under Assumptions 1, the Rademacher complexity RN (F) of the hypothesis class F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)} can be upper-bounded as follows:
$$\mathcal{R}_N(\mathcal{F}) \le 2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}, \qquad (9)$$
where C134 = C1C3C4 and φ(0) is the output of the activation function at the origin.
Lemma 3.4 provides an upper-bound of the Rademacher complexity for the hypothesis class. In order to find an upper-bound for our estimation error, we start by deriving an upper bound for supx,f |f(x)|: Lemma 3.5. Under Assumptions 1, with a probability at least τQ, we have
$$\sup_{x,f}|f(x)| \le \sqrt{\mathcal{J}}, \qquad (10)$$
where Q is equal to the number of neuron pairs defined by M neurons, i.e., $Q = \frac{M(M-1)}{2}$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.1. Note that in Lemma 3.5, we have expressed the upperbound of supx,f |f(x)| in terms of dmin. Using this bound, we can now find an upper-bound for supx,f,y |l(f(x), y)| in the following lemma: Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have
$$\sup_{x,y,f}|l(f(x), y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \qquad (11)$$
The proof can be found in Appendix 7.2. The main goal is to analyze the estimation error bound of the neural network and to see how its upper-bound is linked to the diversity, expressed by dmin, of the different neurons. The main result is presented in Theorem 3.7. Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (12)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.3. Theorem 3.7 provides an upper-bound for the estimation error. We note that it is a decreasing function of dmin. Thus, a higher dmin, i.e., more diverse activations, yields a lower estimation error bound. In other words, by promoting the within-layer diversity, we can reduce the generalization error of neural networks. It should also be noted that our Theorem 3.7 has a similar form to Theorem 1 in (Xie et al., 2015b). However, the main difference is that Xie et al. analyse the estimation error with respect to the diversity of the weight vectors. Here, we consider the diversity between the outputs of the activations of the hidden neurons.
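As a quick numerical illustration of this monotonicity (our own check, with arbitrary constants that are not taken from any experiment in the paper), one can evaluate the right-hand side of equation 12 for increasing values of dmin and observe that it shrinks:

```python
import numpy as np

# Illustrative constants (arbitrary choices, not quantities from the paper's experiments).
L_phi, C1, C2, C3, C4, phi0 = 1.0, 1.0, 1.0, 1.0, 1.0, 0.0
M, N, delta = 64, 10_000, 0.05
C5 = L_phi * C1 * C3 + phi0
C134 = C1 * C3 * C4

def bound(d_min):
    J = C4**2 * (M * C5**2 + M * (M - 1) * (C5**2 - d_min**2 / 2))
    first = 8 * (np.sqrt(J) + C2) * (2 * L_phi * C134 + C4 * abs(phi0)) * np.sqrt(M) / np.sqrt(N)
    second = (np.sqrt(J) + C2) ** 2 * np.sqrt(2 * np.log(2 / delta) / N)
    return first + second

for d in [0.0, 0.5, 1.0, 1.4]:   # larger d_min (more diverse activations) gives a smaller bound
    print(d, bound(d))
```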
3.2 BINARY CLASSIFICATION
We now extend our analysis of the effect of the within-layer diversity on the generalization error in the case of a binary classification task, i.e., y ∈ {−1, 1}. The extensions of Theorem 3.7 in the case of a hinge loss and a logistic loss are presented in Theorems 3.8 and 3.9, respectively. Theorem 3.8. Using the hinge loss, we have with probability at least τQ(1− δ)
$$L(\hat{f}) - L(f^*) \le 4\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (1 + \sqrt{\mathcal{J}})\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (13)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Theorem 3.9. Using the logistic loss l(f(x), y) = log(1 + e−yf(x)), we have with probability at least τQ(1− δ)
$$L(\hat{f}) - L(f^*) \le \frac{4}{1 + e^{-\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + e^{\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (14)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs are similar to Lemmas 7 and 8 in (Xie et al., 2015b). As we can see, for the classification task, the error bounds of the estimation error for the hinge and logistic losses are decreasing with respect to dmin. Thus, employing a diversity strategy can improve the generalization also for the binary classification task.
3.3 MULTI-LAYER NETWORKS
Here, we extend our result for networks with P (> 1) hidden layers. We assume that the pair-wise distances between the activations within layer p are lower-bounded by $d_{min}^p$ with a probability $\tau^p$. In this case, the hypothesis class can be defined recursively. In addition, we replace the fourth assumption in Assumptions 1 with: $\|W^p\|_\infty \le C_3^p$ for every $W^p$, i.e., the weight matrix of the p-th layer. In this case, the main theorem is extended as follows:
Theorem 3.10. With probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}} + C_2)\Bigg(\frac{(2L_\phi)^P C_1C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\Bigg) + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (15)$$
where $Q^p$ is the number of neuron pairs in the p-th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\Big((M^p)^2\big(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0)\big)^2 - M(M-1)\frac{(d_{min}^p)^2}{2}\Big)$, for $p = 1, \ldots, P$.
The proof can be found in Appendix 7.4. In Theorem 3.10, we see that $\mathcal{J}^P$ is decreasing with respect to $d_{min}^p$. Thus, by maximizing the within-layer diversity, we can reduce the estimation error of a multi-layer neural network.
3.4 MULTIPLE OUTPUTS
Finally, we consider the case of a neural network with a multi-dimensional output, i.e., y ∈ RD. In this case, we can extend Theorem 3.7 by decomposing the problem into D smaller problems and deriving the global error bound as the sum of the small D bounds. This yields the following two theorems: Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (16)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Theorem 3.12. For a multi-class classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (17)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs can be found in Appendix 7.5. Theorems 3.11 and 3.12 extend our result to the multi-dimensional regression and classification tasks, respectively. Both bounds decrease as the diversity factor dmin increases. We note that for the classification task, the upper-bound is exponentially decreasing with respect to dmin.
4 RELATED WORK
Diversity promoting strategies have been widely used in ensemble learning (Li et al., 2012; Yu et al., 2011), sampling (Derezinski et al., 2019; Bıyık et al., 2019; Gartrell et al., 2019), ranking (Yang et al.; Gan et al., 2020), and pruning by reducing redundancy (Kondo & Yamauchi, 2014; He et al., 2019; Singh et al., 2020; Lee et al., 2020). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters. Here, we present a brief overview of these regularizers. Based on the way diversity is defined, we can group these approaches into two categories. The first group considers regularizers that are based on the pairwise dissimilarity of components, i.e., the overall set of weights is diverse if every pair of weights is dissimilar. Given the weight vectors $\{w_m\}_{m=1}^M$, Yu et al. (2011) define the regularizer as $\sum_{mn}(1 - \theta_{mn})$, where $\theta_{mn}$ represents the cosine similarity between $w_m$ and $w_n$. Bao et al. (2013) proposed an incoherence score defined as $-\log\big(\frac{1}{M(M-1)}\sum_{mn}\beta|\theta_{mn}|^{\frac{1}{\beta}}\big)$, where β is a positive hyperparameter. Xie et al. (2015a; 2016) used $\mathrm{mean}(\theta_{mn}) - \mathrm{var}(\theta_{mn})$ to regularize Boltzmann machines. They theoretically analyzed its effect on the generalization error bounds in (Xie et al., 2015b) and extended it to kernel space in (Xie et al., 2017a). The second group of regularizers takes a more global view of diversity. For example, in (Malkin & Bilmes, 2009; 2008; Xie et al., 2017b), a weight regularization based on the determinant of the weight covariance is proposed, and regularizers based on determinantal point processes are used in (Kulesza & Taskar, 2012; Kwok & Adams, 2012).
Unlike the aforementioned methods which promote diversity on the weight level and similar to our method, Cogswell et al. (2016) proposed to enforce dissimilarity on the feature map outputs, i.e., on the activations. To this end, they proposed an additional loss based on the pairwise covariance of the activation outputs. Their additional loss, LDecov is defined as the squared sum of the non-diagonal elements of the global covariance matrix C:
$$L_{Decov} = \frac{1}{2}\big(\|C\|_F^2 - \|\mathrm{diag}(C)\|_2^2\big), \qquad (18)$$
where $\|\cdot\|_F$ is the Frobenius norm. Their approach, Decov, yielded superior empirical performance; however, it lacks a theoretical justification. In this paper, we close this gap and show theoretically how employing a diversity strategy on the network activations can indeed decrease the estimation error bound and improve the generalization of the model. In addition, we propose variants of our approach which take a global view of diversity.
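For reference, a minimal sketch of the DeCov penalty of equation 18, written from the formula above (our paraphrase, not the code of Cogswell et al. (2016)):

```python
import torch

def decov_loss(acts):
    # acts: (N, C) hidden activations for a batch of N samples.
    centered = acts - acts.mean(dim=0, keepdim=True)
    C = centered.t() @ centered / acts.shape[0]          # batch covariance of the activations
    # Squared Frobenius norm of C minus its diagonal, as in equation 18.
    return 0.5 * (C.pow(2).sum() - C.diagonal().pow(2).sum())
```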
5 EXPERIMENTAL RESULTS
In this section, we present an empirical study of our approach in a regression context using Boston Housing price dataset (Dua & Graff, 2017) and in a classification context using CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We denote as Vanilla the model trained with no diversity protocol and as Decov the approach proposed in (Cogswell et al., 2016).
5.1 REGRESSION
For regression, we use the Boston Housing price dataset (Dua & Graff, 2017). It has 404 training samples and 102 test samples with 13 attributes each. We hold out the last 100 samples of the training set as a validation set for hyper-parameter tuning. The loss weight λ is chosen from {0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005} for both our approach and Decov (Cogswell et al., 2016). The parameter γ in the radial basis function is chosen from {0.00001, 0.0001, 0.01, 0.1, 1, 10, 100}. As a base model, we use a neural network composed of two fully connected hidden layers, each with 64 neurons. The additional loss is applied on top of both hidden layers.
We train for 80 epochs using stochastic gradient descent with a learning rate of 0.01 and the mean squared error loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: Sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013).
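A sketch of the base regression model and of one training step, as we read the description above; the within_layer_diversity helper is the one sketched earlier in this document, and the particular λ and γ values are just examples taken from the search grids.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegressionNet(nn.Module):
    # Two fully connected hidden layers with 64 neurons each; the penalty is applied to both.
    def __init__(self, in_dim=13, hidden=64, act=nn.ReLU):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)
        self.act = act()

    def forward(self, x):
        h1 = self.act(self.fc1(x))
        h2 = self.act(self.fc2(h1))
        return self.out(h2).squeeze(-1), (h1, h2)        # prediction and hidden activations

model = RegressionNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(x, y, lam=1e-4, gamma=0.01):
    # within_layer_diversity: the helper sketched after equation 3 above.
    pred, hidden = model(x)
    loss = F.mse_loss(pred, y)
    loss = loss + lam * sum(within_layer_diversity(h, "direct", gamma) for h in hidden)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```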
Table 1 reports the results in terms of the mean average error for the different approaches over the Boston Housing price dataset. First, we note that employing a diversification strategy (ours and Decov) boosts the results compared to the Vanilla approach for all types of activations. The three variants of our approach, i.e., the within-layer approach, consistently outperform the Decov loss except for the LeakyReLU where the latter outperforms our direct variant. Table 1 shows that the logdet variant of our approach yields the best performance for all three activation types.
5.2 CLASSIFICATION
For classification, we evaluate the performance of our approach on the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). They contain 60,000 32 × 32 images grouped into 10 and 100 distinct categories, respectively. We train on the 50,000 given training examples and test on the 10,000 specified test samples. We hold out the last 10,000 samples of the training set for validation. For the neural network model, we use an architecture composed of 3 convolutional layers. Each convolutional layer is composed of 32 3 × 3 filters followed by 2 × 2 max pooling. The flattened output of the convolutional layers is connected to a fully connected layer with 128 neurons and a softmax layer. The different additional losses, i.e., ours and Decov, are added only on top of the fully connected layer. The models are trained for 150 epochs using stochastic gradient descent with a learning rate of 0.01 and the categorical cross-entropy loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013). All reported results are average performance over 4 trials with the standard deviation indicated alongside.
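The classification architecture, as we interpret the description above (the padding of the convolutions and the resulting flattened size are our assumptions):

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    # Three blocks of 32 3x3 filters + 2x2 max pooling, a 128-unit fully connected
    # layer (the only layer the diversity/DeCov penalties are applied to), and a
    # linear classifier whose logits feed the softmax cross-entropy loss.
    def __init__(self, num_classes=10, act=nn.ReLU):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), act(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3, padding=1), act(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3, padding=1), act(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 4 * 4, 128)
        self.act = act()
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        h = self.act(self.fc(self.features(x).flatten(1)))   # penalized activations
        return self.classifier(h), h
```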
Tables 2 and 3 report the test error rates of the different approaches for both datasets. Compared to the Vanilla network, our within-layer diversity strategies consistently improve the performance of the model. For CIFAR10, the direct variant yields more than 0.72% improvement for the ReLU and 2% improvement for the sigmoid activation. For the LeakyReLU case, the determinant variant achieves the lowest error rate. This is in accordance with the results on CIFAR100. Here, we note that our proposed approach outperforms both the Vanilla and the Decov models, especially in the sigmoid case. Compared to the Vanilla approach, we note that the model training time cost on CIFAR100 increases by 9% for the direct approach, by 36.1% for the determinant variant, and by 36.2% for the log-determinant variant.
6 CONCLUSIONS
In this paper, we proposed a new approach to encourage ‘diversification’ of the layer-wise feature map outputs in neural networks. The main motivation is that by promoting within-layer activation diversity, neurons within the same layer learn to capture mutually distinct patterns. We proposed an additional loss term that can be added on top of any fully-connected layer. This term complements
the traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback encouraging diversity of the activations. We theoretically proved that the proposed approach decreases the estimation error bound, and thus improves the generalization ability of neural networks. This analysis was further supported by experimental results showing that such a strategy can indeed improve the performance of neural networks in regression and classification tasks. Our future work includes extensive experimental analysis on the relationship between the distribution of the neurons output and generalization.
7 APPENDIX
In the following proofs, we use Lipschitz analysis. In particular, a function f : A → R, A ⊂ Rn, is said to be L-Lipschitz if there exists a constant L ≥ 0 such that |f(a) − f(b)| ≤ L||a − b|| for every pair of points a, b ∈ A. Moreover:
• $\sup_{x\in A} f \le \sup(L\|x\| + f(0))$.
• If f is continuous and differentiable, $L = \sup|f'(x)|$.
7.1 PROOF OF LEMMA 3.5
Lemma 3.5. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,f}|f(x)| \le \sqrt{\mathcal{J}}, \qquad (19)$$
where Q is equal to the number of neuron pairs defined by M neurons, i.e., $Q = \frac{M(M-1)}{2}$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof.
$$f^2(x) = \Big(\sum_{m=1}^{M} v_m\phi_m(x)\Big)^2 \le \Big(\sum_{m=1}^{M}\|v\|_\infty\phi_m(x)\Big)^2 \le \|v\|_\infty^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 \le C_4^2\Big(\sum_{m=1}^{M}\phi_m(x)\Big)^2 = C_4^2\sum_{m,n}\phi_m(x)\phi_n(x) = C_4^2\Big(\sum_{m}\phi_m(x)^2 + \sum_{m\neq n}\phi_n(x)\phi_m(x)\Big) \qquad (20)$$
We have $\sup_{w,x}\phi(x) < \sup(L_\phi|w^Tx| + \phi(0))$ because $\phi$ is $L_\phi$-Lipschitz. Thus, $\|\phi\|_\infty < L_\phi C_1C_3 + \phi(0) = C_5$. For the first term in equation 20, we have $\sum_m\phi_m(x)^2 < M(L_\phi C_1C_3 + \phi(0))^2 = MC_5^2$. The second term, using the identity $\phi_m(x)\phi_n(x) = \frac{1}{2}\big(\phi_m(x)^2 + \phi_n(x)^2 - (\phi_m(x) - \phi_n(x))^2\big)$, can be rewritten as
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) = \frac{1}{2}\sum_{m\neq n}\Big(\phi_m(x)^2 + \phi_n(x)^2 - \big(\phi_m(x) - \phi_n(x)\big)^2\Big). \qquad (21)$$
In addition, we have with a probability τ, $\|\phi_m(x) - \phi_n(x)\|^2 \ge d_{min}$ for $m\neq n$. Thus, we have with a probability at least $\tau^Q$:
$$\sum_{m\neq n}\phi_m(x)\phi_n(x) \le \frac{1}{2}\sum_{m\neq n}(2C_5^2 - d_{min}^2) = M(M-1)(C_5^2 - d_{min}^2/2). \qquad (22)$$
Here Q is equal to the number of neuron pairs defined by M neurons, i.e., $Q = \frac{M(M-1)}{2}$. By putting everything back into equation 20, we have with a probability $\tau^Q$,
$$f^2(x) \le C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big) = \mathcal{J}. \qquad (23)$$
Thus, with a probability $\tau^Q$,
$$\sup_{x,f}|f(x)| \le \sqrt{\sup_{x,f}f(x)^2} \le \sqrt{\mathcal{J}}. \qquad (24)$$
7.2 PROOF OF LEMMA 3.6
Lemma 3.6. Under Assumptions 1, with a probability at least $\tau^Q$, we have
$$\sup_{x,y,f}|l(f(x), y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \qquad (25)$$
Proof. We have $\sup_{x,y,f}|f(x) - y| \le 2\sup_{x,y,f}(|f(x)| + |y|) = 2(\sqrt{\mathcal{J}} + C_2)$. Thus $\sup_{x,y,f}|l(f(x), y)| \le (\sqrt{\mathcal{J}} + C_2)^2$.
7.3 PROOF OF THEOREM 3.7
Theorem 3.7. Under Assumptions 1, with probability at least $\tau^Q(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (26)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Given that $l(\cdot)$ is K-Lipschitz with a constant $K = \sup_{x,y,f}|f(x) - y| \le 2(\sqrt{\mathcal{J}} + C_2)$, and using Lemma 3.3, we can show that $\mathcal{R}_N(A) \le K\mathcal{R}_N(\mathcal{F}) \le 2(\sqrt{\mathcal{J}} + C_2)\mathcal{R}_N(\mathcal{F})$. For $\mathcal{R}_N(\mathcal{F})$, we use the bound found in Lemma 3.4. Using Lemmas 3.2 and 3.6 completes the proof.
7.4 PROOF OF THEOREM 3.10
Theorem 3.10. Under Assumptions 1, with probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}} + C_2)\Bigg(\frac{(2L_\phi)^P C_1C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\Bigg) + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (27)$$
where $Q^p$ is the number of neuron pairs in the p-th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\Big((M^p)^2\big(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0)\big)^2 - M(M-1)(d_{min}^p)^2/2\Big)$, for $p = 1, \ldots, P$.
Proof. Lemma 5 in (Xie et al., 2015b) provides an upper-bound for the hypothesis class. We denote by $v^p$ the outputs of the p-th hidden layer before applying the activation function:
$$v^0 = [w_1^{0T}x, \ldots, w_{M^0}^{0T}x] \qquad (28)$$
$$v^p = \Big[\sum_{j=1}^{M^{p-1}} w^p_{j,1}\phi(v^{p-1}_j), \ldots, \sum_{j=1}^{M^{p-1}} w^p_{j,M^p}\phi(v^{p-1}_j)\Big] \qquad (29)$$
$$v^p = [w_1^{pT}\phi^p, \ldots, w_{M^p}^{pT}\phi^p], \qquad (30)$$
where $\phi^p = [\phi(v^{p-1}_1), \cdots, \phi(v^{p-1}_{M^{p-1}})]$. We have
$$\|v^p\|_2^2 = \sum_{m=1}^{M^p}(w_m^{pT}\phi^p)^2 \qquad (31)$$
and $w_m^{pT}\phi^p \le C_3^p\sum_n\phi^p_n$. Thus,
$$\|v^p\|_2^2 \le \sum_{m=1}^{M^p}\Big(C_3^p\sum_n\phi^p_n\Big)^2 = M^p(C_3^p)^2\Big(\sum_n\phi^p_n\Big)^2 = M^p(C_3^p)^2\sum_{mn}\phi^p_m\phi^p_n. \qquad (32)$$
We use the same decomposition trick of $\phi^p_m\phi^p_n$ as in the proof of Lemma 3.5. We need to bound $\sup_x\phi^p$:
$$\sup_x\phi^p < \sup(L_\phi|w_j^{p-1\,T}v^{p-1}| + \phi(0)) < L_\phi\|W^{p-1}\|_\infty\|v^{p-1}\|_2^2 + \phi(0). \qquad (33)$$
Thus, we have
$$\|v^p\|_2^2 \le M^p(C_3^p)^2\Big(M^2\big(L_\phi C_3^{p-1}\|v^{p-1}\|_2^2 + \phi(0)\big)^2 - M(M-1)d_{min}^2/2\Big) = \mathcal{J}^p. \qquad (34)$$
We have found a recursive bound for $\|v^p\|_2^2$; we note that for $p = 0$, we have $\|v^0\|_2^2 \le \|W^0\|_\infty C_1 \le C_3^0C_1 = \mathcal{J}^0$. Thus,
$$\sup_{x, f^P\in\mathcal{F}^P}|f(x)| = \sup_{x, f^P\in\mathcal{F}^P}|v^P| \le \sqrt{\mathcal{J}^P}. \qquad (35)$$
7.5 PROOFS OF THEOREMS 3.11 AND 3.12
Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (36)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. The squared loss $\|f(x) - y\|^2$ can be decomposed into D terms $(f(x)_k - y_k)^2$. Using Theorem 3.7, we can derive the bound for each term.
Theorem 3.12. For a multiclass classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (37)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Proof. Using Lemma 9 in (Xie et al., 2015b), we have $\sup_{f,x,y} l = \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)$ and $l$ is $\frac{D-1}{D-1+e^{-2\sqrt{\mathcal{J}}}}$-Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have
$$\mathcal{R}_n(A) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\Big(2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}\Big). \qquad (38)$$
| 1. What are the main contributions and findings of the paper regarding diversifying outputs of neurons?
2. How does the reviewer assess the practicality of the proposed approaches?
3. What are some potential factors that can be controlled to tighten the generalization bound, according to the reviewer?
4. What is the significance of the constants C_4 and C_5 in the generalization bound, and how do they compare in terms of decay rates?
5. Can you provide examples of activation functions or weight vector regularizations that could help control C_5? | Review | Review
The paper proposed three ways of diversifying outputs of neurons, and the analysis showed that the generalisation bound becomes tighter when the neurons become more diversified. It is an interesting finding, along with theoretical results and empirical results. However, from a practical perspective, there are still many concerns.
It is clear that by increasing d_min, the generalisation bound gets tighter. However, it is also obvious that there are other factors that one can control to make the bound tighter, and regularising other factors might be simpler in terms of implementation and optimisation.
The constant C_4 is the upper bound on the norm of the weight vector connecting the hidden-layer to the output neuron. \sqrt(J) decays linearly with C_4, and the first term in the generalisation bound for regression tasks decays quadratically w.r.t. C_4. Compared with a linear decay w.r.t. d_min, C_4 seems to be a better option to regularise neural networks. In practice, one can impose an \ell_2 regularisation on the top linear layer to control the overall norm of the weight matrix so that C_4 is controlled.
The constant C_5 = L_\phi C_1 C_3 + \phi(0). As we can see in the generalisation bound for regression tasks, \sqrt(J) decays quadratically w.r.t. C_5, which is even faster than the decay rate w.r.t. C_4. To control C_5, one can choose an activation function that has a small L_\phi, or to control the weight vectors to the activation function to have a small norm C_3. Both of them can be done relatively easily compared to optimising pair-wise similarity.
Overall, I think there are other regularisations suggested by the bound that could be put into practice, which might also lead to good generalisation, and also simpler optimisation problem. |
ICLR | Title
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
Abstract
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed to solve the problem at hand and forward it to the next layers. In the standard form, a neural network is trained with gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. Thus at each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional ’between-layer’ feedback with additional ’within-layer’ feedback to encourage diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer’s overall diversity. By penalizing similarities and promoting diversity, we encourage each neuron to learn a distinctive representation and, thus, to enrich the data representation learned within the layer and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network in a supervised context and we prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an empirical study confirming that the proposed approach enhances the performance of neural networks.
1 INTRODUCTION
Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of P layers can be defined as follows:
$$f(x; W) = \phi_P(W^P(\phi_{P-1}(\cdots\phi_2(W^2\phi_1(W^1x))))), \qquad (1)$$
where $\phi_i(\cdot)$ is the element-wise activation function, e.g., ReLU and Sigmoid, of the i-th layer and $W = \{W^1, \ldots, W^P\}$ are the corresponding weights of the network. The parameters of $f(x; W)$ are optimized by minimizing the empirical loss:
$$\hat{L}(f) = \frac{1}{N}\sum_{i=1}^{N} l\big(f(x_i; W), y_i\big), \qquad (2)$$
where $l(\cdot)$ is the loss function, and $\{x_i, y_i\}_{i=1}^N$ are the training samples and their associated ground-truth labels. The loss is minimized using gradient descent-based optimization coupled with back-propagation.
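For concreteness, a literal NumPy transcription of equations 1 and 2 (the function and variable names are ours):

```python
import numpy as np

def forward(x, weights, activations):
    # Equation (1): phi_P(W^P ... phi_2(W^2 phi_1(W^1 x))).
    h = x
    for W, phi in zip(weights, activations):
        h = phi(W @ h)
    return h

def empirical_loss(predict, data, loss_fn):
    # Equation (2): average per-sample loss over the N training pairs.
    return np.mean([loss_fn(predict(x), y) for x, y in data])

# Example: a two-layer network with ReLU hidden activations and a linear output.
relu = lambda t: np.maximum(t, 0.0)
weights = [np.random.randn(16, 8), np.random.randn(1, 16)]
predict = lambda x: forward(x, weights, [relu, lambda t: t])
data = [(np.random.randn(8), np.random.randn(1)) for _ in range(32)]
print(empirical_loss(predict, data, lambda p, y: 0.5 * np.sum((p - y) ** 2)))
```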
However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on Double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter,
2019; Poggio et al., 2017) and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss.
Diversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in deep learning context, diversity of information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activation has not received much attention.
Inspired by the motivation of dropout to co-adapt neuron activation, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as Decov, has empirically been proven to alleviate overfitting and to improve the generalization ability of neural network, yet a theoretical analysis to prove this has so far been lacking.
In this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing ’between-layer’ feedback with additional ’within-layer’ feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al. (2015b), we provide a theoretical analysis showing that the within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting.
Our contributions in this paper are as follows:
• Methodologically, we propose a new approach to encourage the ’diversification’ of the layer-wise feature maps’ outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting the within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model.
• Theoretically, we analyse the effect the within-layer activation diversity on the generalization error bound of neural network. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper-bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that the within-layer activation diversity can help reduce the generalization error.
• Empirically, we show that the within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods.
2 WITHIN-LAYER ACTIVATION DIVERSITY
We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes the neurons that activate similarly. The loss function L̂(f) defined in equation 2 is augmented as follows:
$$\hat{L}_{aug}(f) = \hat{L}(f) + \lambda\sum_{i=1}^{P}\mathcal{J}^i, \qquad (3)$$
where J i expresses the overall pair-wise similarity of the neurons within the ith layer and λ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or multiple layers in a network. For simplicity, let us focus on a single layer.
Let φin(xj) and φ i m(xj) be the outputs of the n th and mth neurons in the ith layer for the same input sample xj . The similarity snm between the the nth and mth neurons can be obtained as the average similarity measure of their outputs for N input samples. We use the radial basis function to
express the similarity:
$$s_{nm} = \frac{1}{N}\sum_{j=1}^{N}\exp\big(-\gamma\|\phi_n^i(x_j) - \phi_m^i(x_j)\|^2\big), \qquad (4)$$
where γ is a hyper-parameter. The similarity snm can be computed over the whole dataset or batch-wise. Intuitively, if two neurons n and m have similar outputs for many samples, their corresponding similarity snm will be high. Otherwise, their similarity snm is small and they are considered “diverse”. Based on these pair-wise similarities, we propose three variants for the global diversity loss $\mathcal{J}^i$ of the i-th layer:
• Direct: $\mathcal{J}^i = \sum_{n\neq m} s_{nm}$. In this variant, we model the global layer similarity directly as the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations.
• Det: J i = −det(S), where S is a similarity matrix defined as Snm = snm. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of S measures the global diversity of the set. Geometrically, det(S) is the volume of the parallelepiped formed by vectors in the feature space associated with s. Vectors that result in a larger volume are considered to be more “diverse”. Thus, maximizing det(·) (minimizing −det(·)) encourages the diversity of the learned features.
• Logdet: J i = −logdet(S)1. This variant has the same motivation as the second one. We use logdet instead of det as logdet is a convex function over the positive definite matrix space.
It should be noted here that the first proposed variant, i.e., direct, similar to Decov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture the higher-order “diversity”, whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner.
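A small self-contained check (our own illustration, not from the paper) of how these measures behave: a set of neurons with identical outputs should yield a large direct penalty and a small log-determinant, while independent outputs should do the opposite. The Gaussian toy activations and γ = 1 are arbitrary choices.

```python
import torch

torch.manual_seed(0)
N, gamma = 256, 1.0

def similarity(acts):
    d = acts.unsqueeze(2) - acts.unsqueeze(1)
    return torch.exp(-gamma * d.pow(2)).mean(dim=0)

redundant = torch.randn(N, 1).repeat(1, 4)            # four neurons with identical outputs
diverse = torch.randn(N, 4)                           # four neurons with independent outputs

for name, acts in [("redundant", redundant), ("diverse", diverse)]:
    S = similarity(acts)
    direct = S.sum() - S.diagonal().sum()             # direct variant (to be minimized)
    logdet = torch.logdet(S + torch.eye(4))           # logdet of the regularized S (to be maximized)
    print(name, round(direct.item(), 3), round(logdet.item(), 3))
```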
Our newly proposed loss function defined in equation 3 has two terms. The first term is the classic loss function, which measures the loss with respect to the ground truth. During back-propagation, this feedback is propagated from the last layer back to the first layer of the network, so it can be considered a between-layer feedback, whereas the second term is computed within a layer. From equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly on the parameters, i.e., weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with C neurons and a batch size of N, the additional computational cost is $O(C^2(N+1))$ for the direct variant and $O(C^3 + C^2N)$ for the determinant and log-determinant variants.
3 GENERALIZATION ERROR ANALYSIS
In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in equation 2, and the expected risk defined as follows:
L(f) = E(x,y)∼Q[l(f(x), y)], (5)
where Q is the underlying distribution of the dataset. Let f∗ = argminf L(f) be the expected risk minimizer and f̂ = argminf L̂(f) be the empirical risk minimizer. We are interested in the estimation error, i.e., L(f∗)−L(f̂), defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and the number of training samples (Barron, 1993; Zhai & Wang, 2018).
1This is defined only if S is positive definite. It can be shown that in our case S is positive semi-definite. Thus, in practice we use a regularized version (S + I) to ensure the positive definiteness.
Several techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1. (Bartlett & Mendelson, 2002) For a given dataset with N samples D = {xi, yi}Ni=1 generated by a distribution Q and for a model space F : X → R with a single dimensional output, the empirical Rademacher complexity RN(F) of the set F is defined as follows:
$$\mathcal{R}_N(\mathcal{F}) = \mathbb{E}_\sigma\Big[\sup_{f\in\mathcal{F}}\frac{1}{N}\sum_{i=1}^{N}\sigma_i f(x_i)\Big], \qquad (6)$$
where the Rademacher variables σ = {σ1, · · · , σN} are independent uniform random variables in {−1, 1}.
In this work, we analyse the estimation error bound of a neural network using the Rademacher complexity and we are interested in the effect of the within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability τ, the distance between the output of each pair of neurons, (φn(x)−φm(x))2, is lower bounded by dmin for any input x. Note that this condition can be expressed in terms of the similarity s defined in equation 4: snm ≤ e(−γdmin) = smin for any two distinct neurons with the probability τ . Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With a probability of at least 1− δ
$$L(\hat{f}) - L(f^*) \le 4\mathcal{R}_N(A) + B\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (7)$$
for $B \ge \sup_{x,y,f}|l(f(x), y)|$, where $\mathcal{R}_N(A)$ is the Rademacher complexity of the loss set A.
It upper-bounds the estimation error using the Rademacher complexity defined over the loss set and supx,y,f |l(f(x), y)|. Our analysis continues by seeking a tighter upper bound of this error and showing how the within-layer diversity, expressed with dmin, affects this upper bound. We start by deriving such an upper-bound for a simple network with one hidden layer trained for a regression task and then we extend it for a general multi-layer network and for different losses.
3.1 SINGLE HIDDEN-LAYER NETWORK
Here, we consider a simple neural network with one hidden layer with M neurons and a one-dimensional output, trained for a regression task. The full characterization of the setup can be summarized in the following assumptions: Assumptions 1.
• The activation function of the hidden layer, φ(t), is a Lφ-Lipschitz continuous function.
• The input vector x ∈ RD satisfies ||x||2 ≤ C1.
• The output scalar y ∈ R satisfies |y| ≤ C2.
• The weight matrix W = [w1,w2, · · · ,wM ] ∈ RD×M connecting the input to the hidden layer satisfies ||wm||2 ≤ C3.
• The weight vector v ∈ RM connecting the hidden-layer to the output neuron satisfies ||v||2 ≤ C4.
• The hypothesis class is $\mathcal{F} = \{f \,|\, f(x) = \sum_{m=1}^{M} v_m\phi_m(x) = \sum_{m=1}^{M} v_m\phi(w_m^Tx)\}$.
• The loss function set is $A = \{l \,|\, l(f(x), y) = \frac{1}{2}|f(x) - y|^2\}$.
• With a probability τ, for $n \neq m$, $\|\phi_n(x) - \phi_m(x)\|_2^2 = \|\phi(w_n^Tx) - \phi(w_m^Tx)\|_2^2 \ge d_{min}$.
We recall the following two lemmas related to the estimation error and the Rademacher complexity: Lemma 3.3. (Bartlett & Mendelson, 2002) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. Then we have
RN (A) ≤ LgRN (F). (8) Lemma 3.4. (Xie et al., 2015b) Under Assumptions 1, the Rademacher complexity RN (F) of the hypothesis class F = {f |f(x) = ∑M m=1 vmφm(x) = ∑M m=1 vmφ(w T mx)} can be upper-bounded as follows:
$$\mathcal{R}_N(\mathcal{F}) \le 2L_\phi C_{134}\frac{\sqrt{M}}{\sqrt{N}} + C_4|\phi(0)|\frac{\sqrt{M}}{\sqrt{N}}, \qquad (9)$$
where C134 = C1C3C4 and φ(0) is the output of the activation function at the origin.
Lemma 3.4 provides an upper-bound of the Rademacher complexity for the hypothesis class. In order to find an upper-bound for our estimation error, we start by deriving an upper bound for supx,f |f(x)|: Lemma 3.5. Under Assumptions 1, with a probability at least τQ, we have
$$\sup_{x,f}|f(x)| \le \sqrt{\mathcal{J}}, \qquad (10)$$
where Q is equal to the number of neuron pairs defined by M neurons, i.e., $Q = \frac{M(M-1)}{2}$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.1. Note that in Lemma 3.5, we have expressed the upperbound of supx,f |f(x)| in terms of dmin. Using this bound, we can now find an upper-bound for supx,f,y |l(f(x), y)| in the following lemma: Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have
$$\sup_{x,y,f}|l(f(x), y)| \le (\sqrt{\mathcal{J}} + C_2)^2. \qquad (11)$$
The proof can be found in Appendix 7.2. The main goal is to analyze the estimation error bound of the neural network and to see how its upper-bound is linked to the diversity, expressed by dmin, of the different neurons. The main result is presented in Theorem 3.7. Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have
$$L(\hat{f}) - L(f^*) \le 8\big(\sqrt{\mathcal{J}} + C_2\big)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (12)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proof can be found in Appendix 7.3. Theorem 3.7 provides an upper-bound for the estimation error. We note that it is a decreasing function of dmin. Thus, a higher dmin, i.e., more diverse activations, yields a lower estimation error bound. In other words, by promoting the within-layer diversity, we can reduce the generalization error of neural networks. It should also be noted that our Theorem 3.7 has a similar form to Theorem 1 in (Xie et al., 2015b). However, the main difference is that Xie et al. analyse the estimation error with respect to the diversity of the weight vectors. Here, we consider the diversity between the outputs of the activations of the hidden neurons.
3.2 BINARY CLASSIFICATION
We now extend our analysis of the effect of the within-layer diversity on the generalization error in the case of a binary classification task, i.e., y ∈ {−1, 1}. The extensions of Theorem 3.7 in the case of a hinge loss and a logistic loss are presented in Theorems 3.8 and 3.9, respectively. Theorem 3.8. Using the hinge loss, we have with probability at least τQ(1− δ)
$$L(\hat{f}) - L(f^*) \le 4\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + (1 + \sqrt{\mathcal{J}})\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (13)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Theorem 3.9. Using the logistic loss l(f(x), y) = log(1 + e−yf(x)), we have with probability at least τQ(1− δ)
$$L(\hat{f}) - L(f^*) \le \frac{4}{1 + e^{-\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + e^{\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (14)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs are similar to Lemmas 7 and 8 in (Xie et al., 2015b). As we can see, for the classification task, the error bounds of the estimation error for the hinge and logistic losses are decreasing with respect to dmin. Thus, employing a diversity strategy can improve the generalization also for the binary classification task.
3.3 MULTI-LAYER NETWORKS
Here, we extend our result for networks with P (> 1) hidden layers. We assume that the pair-wise distances between the activations within layer p are lower-bounded by $d_{min}^p$ with a probability $\tau^p$. In this case, the hypothesis class can be defined recursively. In addition, we replace the fourth assumption in Assumptions 1 with: $\|W^p\|_\infty \le C_3^p$ for every $W^p$, i.e., the weight matrix of the p-th layer. In this case, the main theorem is extended as follows:
Theorem 3.10. With probability of at least $\prod_{p=0}^{P-1}(\tau^p)^{Q^p}(1-\delta)$, we have
$$L(\hat{f}) - L(f^*) \le 8(\sqrt{\mathcal{J}} + C_2)\Bigg(\frac{(2L_\phi)^P C_1C_3^0}{\sqrt{N}}\prod_{p=0}^{P-1}\sqrt{M^p}\,C_3^p + \frac{|\phi(0)|}{\sqrt{N}}\sum_{p=0}^{P-1}(2L_\phi)^{P-1-p}\prod_{j=p}^{P-1}\sqrt{M^j}\,C_3^j\Bigg) + \big(\sqrt{\mathcal{J}} + C_2\big)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (15)$$
where $Q^p$ is the number of neuron pairs in the p-th layer, defined as $Q^p = \frac{M^p(M^p-1)}{2}$, and $\mathcal{J}^P$ is defined recursively using the following identities: $\mathcal{J}^0 = C_3^0C_1$ and $\mathcal{J}^p = M^p(C_3^p)^2\Big((M^p)^2\big(L_\phi C_3^{p-1}\mathcal{J}^{p-1} + \phi(0)\big)^2 - M(M-1)\frac{(d_{min}^p)^2}{2}\Big)$, for $p = 1, \ldots, P$.
The proof can be found in Appendix 7.4. In Theorem 3.10, we see that $\mathcal{J}^P$ is decreasing with respect to $d_{min}^p$. Thus, by maximizing the within-layer diversity, we can reduce the estimation error of a multi-layer neural network.
3.4 MULTIPLE OUTPUTS
Finally, we consider the case of a neural network with a multi-dimensional output, i.e., y ∈ RD. In this case, we can extend Theorem 3.7 by decomposing the problem into D smaller problems and deriving the global error bound as the sum of the small D bounds. This yields the following two theorems: Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),
$$L(\hat{f}) - L(f^*) \le 8D(\sqrt{\mathcal{J}} + C_2)\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + D(\sqrt{\mathcal{J}} + C_2)^2\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (16)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
Theorem 3.12. For a multi-class classification task using the cross-entropy loss, we have with probability at least $\tau^Q(1-\delta)$,
$$L(\hat{f}) - L(f^*) \le \frac{D(D-1)}{D-1+e^{-2\sqrt{\mathcal{J}}}}\big(2L_\phi C_{134} + C_4|\phi(0)|\big)\frac{\sqrt{M}}{\sqrt{N}} + \log\big(1 + (D-1)e^{2\sqrt{\mathcal{J}}}\big)\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (17)$$
where $C_{134} = C_1C_3C_4$, $\mathcal{J} = C_4^2\big(MC_5^2 + M(M-1)(C_5^2 - d_{min}^2/2)\big)$, and $C_5 = L_\phi C_1C_3 + \phi(0)$.
The proofs can be found in Appendix 7.5. Theorems 3.11 and 3.12 extend our result to the multi-dimensional regression and classification tasks, respectively. Both bounds decrease as the diversity factor dmin increases. We note that for the classification task, the upper-bound is exponentially decreasing with respect to dmin.
4 RELATED WORK
Diversity promoting strategies have been widely used in ensemble learning (Li et al., 2012; Yu et al., 2011), sampling (Derezinski et al., 2019; Bıyık et al., 2019; Gartrell et al., 2019), ranking (Yang et al.; Gan et al., 2020), and pruning by reducing redundancy (Kondo & Yamauchi, 2014; He et al., 2019; Singh et al., 2020; Lee et al., 2020). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters. Here, we present a brief overview of these regularizers. Based on the way diversity is defined, we can group these approaches into two categories. The first group considers regularizers that are based on the pairwise dissimilarity of components, i.e., the overall set of weights is diverse if every pair of weights is dissimilar. Given the weight vectors $\{w_m\}_{m=1}^M$, Yu et al. (2011) define the regularizer as $\sum_{mn}(1 - \theta_{mn})$, where $\theta_{mn}$ represents the cosine similarity between $w_m$ and $w_n$. Bao et al. (2013) proposed an incoherence score defined as $-\log\big(\frac{1}{M(M-1)}\sum_{mn}\beta|\theta_{mn}|^{\frac{1}{\beta}}\big)$, where β is a positive hyperparameter. Xie et al. (2015a; 2016) used $\mathrm{mean}(\theta_{mn}) - \mathrm{var}(\theta_{mn})$ to regularize Boltzmann machines. They theoretically analyzed its effect on the generalization error bounds in (Xie et al., 2015b) and extended it to kernel space in (Xie et al., 2017a). The second group of regularizers takes a more global view of diversity. For example, in (Malkin & Bilmes, 2009; 2008; Xie et al., 2017b), a weight regularization based on the determinant of the weight covariance is proposed, and regularizers based on determinantal point processes are used in (Kulesza & Taskar, 2012; Kwok & Adams, 2012).
Unlike the aforementioned methods which promote diversity on the weight level and similar to our method, Cogswell et al. (2016) proposed to enforce dissimilarity on the feature map outputs, i.e., on the activations. To this end, they proposed an additional loss based on the pairwise covariance of the activation outputs. Their additional loss, LDecov is defined as the squared sum of the non-diagonal elements of the global covariance matrix C:
$$L_{Decov} = \frac{1}{2}\big(\|C\|_F^2 - \|\mathrm{diag}(C)\|_2^2\big), \qquad (18)$$
where $\|\cdot\|_F$ is the Frobenius norm. Their approach, Decov, yielded superior empirical performance; however, it lacks a theoretical justification. In this paper, we close this gap and show theoretically how employing a diversity strategy on the network activations can indeed decrease the estimation error bound and improve the generalization of the model. In addition, we propose variants of our approach which take a global view of diversity.
5 EXPERIMENTAL RESULTS
In this section, we present an empirical study of our approach in a regression context using Boston Housing price dataset (Dua & Graff, 2017) and in a classification context using CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). We denote as Vanilla the model trained with no diversity protocol and as Decov the approach proposed in (Cogswell et al., 2016).
5.1 REGRESSION
For regression, we use the Boston Housing price dataset (Dua & Graff, 2017). It has 404 training samples and 102 test samples with 13 attributes each. We hold out the last 100 samples of the training set as a validation set for hyper-parameter tuning. The loss weight λ is chosen from {0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005} for both our approach and Decov (Cogswell et al., 2016). The parameter γ in the radial basis function is chosen from {0.00001, 0.0001, 0.01, 0.1, 1, 10, 100}. As a base model, we use a neural network composed of two fully connected hidden layers, each with 64 neurons. The additional loss is applied on top of both hidden layers.
We train for 80 epochs using stochastic gradient descent with a learning rate of 0.01 and the mean squared error loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: Sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013).
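The grid search over the loss weight and γ described above can be sketched as follows; train_and_evaluate is a hypothetical placeholder for one 80-epoch training run returning the validation error, and the grids are the ones listed in the text.

```python
from itertools import product

lambdas = [0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005]
gammas = [0.00001, 0.0001, 0.01, 0.1, 1, 10, 100]

def grid_search(train_and_evaluate):
    # train_and_evaluate(lam, gamma) is assumed to train the model for 80 epochs and
    # return (validation_error, trained_model); it is a placeholder, not part of the paper.
    best = None
    for lam, gamma in product(lambdas, gammas):
        val_error, model = train_and_evaluate(lam, gamma)
        if best is None or val_error < best[0]:
            best = (val_error, lam, gamma, model)
    return best
```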
Table 1 reports the results in terms of the mean average error for the different approaches over the Boston Housing price dataset. First, we note that employing a diversification strategy (ours and Decov) boosts the results compared to the Vanilla approach for all types of activations. The three variants of our approach, i.e., the within-layer approach, consistently outperform the Decov loss except for the LeakyReLU where the latter outperforms our direct variant. Table 1 shows that the logdet variant of our approach yields the best performance for all three activation types.
5.2 CLASSIFICATION
For classification, we evaluate the performance of our approach on the CIFAR10 and CIFAR100 datasets (Krizhevsky et al., 2009). They contain 60,000 32 × 32 images grouped into 10 and 100 distinct categories, respectively. We train on the 50,000 given training examples and test on the 10,000 specified test samples. We hold out the last 10,000 samples of the training set for validation. For the neural network model, we use an architecture composed of 3 convolutional layers. Each convolutional layer is composed of 32 3 × 3 filters followed by 2 × 2 max pooling. The flattened output of the convolutional layers is connected to a fully connected layer with 128 neurons and a softmax layer. The different additional losses, i.e., ours and Decov, are added only on top of the fully connected layer. The models are trained for 150 epochs using stochastic gradient descent with a learning rate of 0.01 and the categorical cross-entropy loss. For hyper-parameter tuning, we keep the model that performs best on the validation set and use it in the test phase. We experiment with three different activation functions for the hidden layers: sigmoid, Rectified Linear Units (ReLU) (Nair & Hinton, 2010), and LeakyReLU (Maas et al., 2013). All reported results are average performance over 4 trials with the standard deviation indicated alongside.
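One epoch of the classification training, as we understand the setup (penalty applied only to the 128-unit fully connected activations); the model interface and the within_layer_diversity helper follow the earlier sketches, and λ, γ, and the variant are placeholders to be tuned on the validation set.

```python
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, lam=1e-4, gamma=0.1, variant="direct"):
    model.train()
    for images, labels in loader:
        logits, fc_acts = model(images)                 # fc_acts: (batch, 128) penalized layer
        loss = F.cross_entropy(logits, labels)
        loss = loss + lam * within_layer_diversity(fc_acts, variant, gamma)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```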
Tables 2 and 3 report the test error rates of the different approaches for both datasets. Compared to the Vanilla network, our within-layer diversity strategies consistently improve the performance of the model. For CIFAR10, the direct variant yields more than 0.72% improvement for the ReLU and 2% improvement for the sigmoid activation. For the LeakyReLU case, the determinant variant achieves the lowest error rate. This is in accordance with the results on CIFAR100. Here, we note that our proposed approach outperforms both the Vanilla and the Decov models, especially in the sigmoid case. Compared to the Vanilla approach, we note that the model training time cost on CIFAR100 increases by 9% for the direct approach, by 36.1% for the determinant variant, and by 36.2% for the log-determinant variant.
6 CONCLUSIONS
In this paper, we proposed a new approach to encourage ‘diversification’ of the layer-wise feature map outputs in neural networks. The main motivation is that by promoting within-layer activation diversity, neurons within the same layer learn to capture mutually distinct patterns. We proposed an additional loss term that can be added on top of any fully-connected layer. This term complements
the traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback encouraging diversity of the activations. We theoretically proved that the proposed approach decreases the estimation error bound, and thus improves the generalization ability of neural networks. This analysis was further supported by experimental results showing that such a strategy can indeed improve the performance of neural networks in regression and classification tasks. Our future work includes extensive experimental analysis on the relationship between the distribution of the neurons output and generalization.
7 APPENDIX
In the following proofs, we use Lipschitz analysis. In particular, a function f : A → R, A ⊂ Rn, is said to be L-Lipschitz, if there exist a constant L ≥ 0, such that |f(a) − f(b)| ≤ L||a − b|| for every pair of points a, b ∈ A. Moreover:
• supx∈A f ≤ sup(L||x||+ f(0)). • if f is continuous and differentiable, L = sup |f ′(x)|.
7.1 PROOF OF LEMMA 3.5
Lemma 3.5. Under Assumptions 1, with a probability at least τQ, we have sup x,f |f(x)| ≤ √ J , (19)
where Q is equal to the number of neuron pairs defined by M neurons, i.e. Q = M(M−1)2 , and J = C24 ( MC25 +M(M − 1)(C25 − d2min/2) ) and C5 = LφC1C3 + φ(0).
Proof.
f2(x) = ( M∑ m=1 vmφm(x) )2 ≤ ( M∑ m=1 ||v||∞φm(x) )2 ≤ ||v||2∞ ( M∑ m=1 φm(x) )2 ≤ C24 ( M∑ m=1 φm(x) )2
= C24 (∑ m,n φm(x)φn(x) ) = C24 ∑ m φm(x) 2 + ∑ m 6=n φn(x)φm(x) (20) We have supw,x φ(x) < sup(Lφ|wTx| + φ(0)) because φ is Lφ-Lipschitz. Thus, ||φ||∞ < LφC1C3 + φ(0) = C5. For the first term in equation 20, we have ∑ m φm(x)
2 < M(LφC1C3 + φ(0))
2 = MC25 . The second term, using the identity φm(x)φn(x) = 1 2 ( φm(x) 2 + φn(x) 2 − (φm(x)− φn(x))2 ) , can be rewritten as∑
m 6=n
φm(x)φn(x) = 1
2 ∑ m 6=n φm(x) 2 + φn(x) 2 − ( φm(x)− φn(x) )2 . (21)
In addition, we have with a probability τ , ||φm(x) − φn(x)||2 ≥ dmin for m 6= n. Thus, we have with a probability at least τQ:∑
m 6=n
φm(x)φn(x) ≤ 1
2 ∑ m6=n (2C25 − d2min) =M(M − 1)(C25 − d2min/2). (22)
Here Q is equal to the number of neuron pairs defined by M neurons, i.e, Q = M(M−1)2 . By putting everything back to equation 20, we have with a probability τQ,
f2(x) ≤ C24 ( MC25 +M(M − 1)(C25 − d2min/2) ) = J . (23)
Thus, with a probability τQ,
sup x,f |f(x)| ≤ √ sup x,f f(x)2 ≤ √ J . (24)
7.2 PROOF OF LEMMA 3.6
Lemma 3.6. Under Assumptions 1, with a probability at least τQ, we have sup x,y,f |l(f(x), y)| ≤ ( √ J + C2)2 (25)
Proof. We have supx,y,f |f(x) − y| ≤ 2 supx,y,f (|f(x)| + |y|) = 2( √ J + C2). Thus supx,y,f |l(f(x), y)| ≤ ( √ J + C2)2.
7.3 PROOF OF THEOREM 3.7
Theorem 3.7. Under Assumptions 1, with probability at least τQ(1− δ), we have
L(f̂)−L(f∗) ≤ 8 (√ J +C2 )( 2LφC134+C4|φ(0)| )√M√ N +( √ J +C2)2 √ 2 log(2/δ) N (26)
where C134 = C1C3C4, J = C24 ( MC25 +M(M −1)(C25 −d2min/2) ) , and C5 = LφC1C3+φ(0).
Proof. Given that l(·) is K-Lipschitz with a constant K = supx,y,f |f(x)− y| ≤ 2( √ J +C2), and using Lemma 3.3, we can show that RN (A) ≤ KRN (F) ≤ 2( √ J + C2)RN (F). For RN (F), we use the bound found in Lemma 3.4. Using Lemmas 3.2 and 3.6 completes the proof.
7.4 PROOF OF THEOREM 3.10
Theorem 3.10. Under Assumptions 1, with probability of at least ∏P−1 p=0 (τ p)Q p (1− δ), we have
L(f̂)− L(f∗) ≤ 8( √ J + C2)
( (2Lφ)
PC1C 0 3√
N
P−1∏ p=0 √ MpCp3 + |φ(0)|√ N P−1∑ p=0 (2Lφ) P−1−p P−1∏ j=p √ M jCj3
)
+ (√ J + C2 )2√2 log(2/δ) N
(27)
where Qp is the number of neuron pairs in the pth layer, defined as Qp = M p(Mp−1)
2 , and J P is defined recursively using the following identities: J 0 = C03C1 and J p = MpCp2 ( Mp2(LφC p−1 3 J p−1 + φ(0))2 −M(M − 1)d2min/2) ) , for p = 1, . . . , P .
Proof. Lemma 5 in (Xie et al., 2015b) provides an upper-bound for the hypothesis class. We denote by vp denote the outputs of the pth hidden layer before applying the activation function:
v0 = [w0 T 1 x, ....,w 0T M0x] (28)
vp = [ Mp−1∑ j=1 wpj,1φ(v p−1 j ), ...., Mp−1∑ j=1 wpj,Mpφ(v p−1 j )] (29)
vp = [wp1 T φp, ...,wpMp T φp], (30)
where φp = [φ(vp−11 ), · · · , φ(v p−1 Mp−1)]. We have
||vp||22 = Mp∑ m=1 (wpm Tφp)2 (31)
and wpm Tφp ≤ Cp3 ∑ n φ p n. Thus,
||vp||22 ≤ Mp∑ m=1 (Cp3 ∑ n φpn) 2 =MpCp3 2 ( ∑ n φpn) 2 =MpCp3 2 ∑ mn φpmφ p n. (32)
We use the same decomposition trick of φpmφ p n as in the proof of Lemma 3.5. We need to bound supx φ p:
sup x φp < sup(Lφ|wp−1j T vp−1|+ φ(0)) < Lφ||W p−1||∞||vp−1||22 + φ(0). (33)
Thus, we have ||vp||22 ≤MpCp 2(M2(LφCp−13 ||vp−1||22 + φ(0))2 −M(M − 1)d2min/2)) = J P . (34) We found a recursive bound for ||vp||22, we note that for p = 0, we have ||v0||22 ≤ ||W 0||∞C1 ≤ C03C1 = J 0. Thus,
sup x,fP∈FP |f(x)| = sup x,fP∈FP
|vP | ≤ √ J P . (35)
7.5 PROOFS OF THEOREMS 3.11 AND 3.12
Theorem 3.11. For a multivariate regression trained with the squared error, we have with probability at least τQ(1− δ),
L(f̂)−L(f∗) ≤ 8D( √ J +C2) ( 2LφC134+C4|φ(0)| )√M√ N +D( √ J +C2)2 √ 2 log(2/δ) N (36)
where C134 = C1C3C4, J = C24 (MC25 +M(M − 1)(C25 − d2min/2) ) , and C5 = LφC1C3+φ(0).
Proof. The squared loss ||f(x) − y||2 can be decomposed into D terms (f(x)k − yk)2. Using Theorem 3.7, we can derive the bound for each term.
Theorem 3.12. For a multiclass classification task using the cross-entropy loss, we have with probability at least τQ(1− δ),
L(f̂)− L(f∗) ≤ D(D − 1) D − 1 + e−2 √ J
( 2LφC134 + C4|φ(0)| )√M√ N + log ( 1 + (D − 1)e2 √ J )√2 log(2/δ)
N (37) where C134 = C1C3C4, J = C24 (MC25 +M(M −1)(C25 −d2min/2) ) , and C5 = LφC1C3+φ(0).
Proof. Using Lemma 9 in (Xie et al., 2015b), we have supf,x,y l = log ( 1 + (D − 1)e2 √ J ) and l is D−1 D−1+e−2 √ J -Lipschitz. Thus, using the decomposition property of the Rademacher complexity, we have
Rn(A) ≤ D(D − 1)
D − 1 + e−2 √ J
( 2LφC134
√ M√
N + C4|φ(0)|
√ M√
N
) . (38) | 1. What are the strengths and weaknesses of the paper regarding its contributions to the field?
2. How does the reviewer assess the theoretical foundations of the paper, particularly in relation to prior works?
3. Are there any concerns about the experimental validation presented in the paper?
4. How does the reviewer evaluate the significance of the results, considering the limitations of shallow networks and lack of description of the optimization strategy?
5. Is there anything else the reviewer would like to know or discuss regarding the paper's content? | Review | Review
Strong point: the paper addresses an important problem.
Three main weaknesses, which justify the score:
* The theoretical developments presented in the paper build on the Rademacher complexity, but ignore the conclusions drawn by Zhang et al. in Section 2.2 of their ICLR 2017 paper (Understanding deep learning requires rethinking generalization).
* The theoretical developments build on the assumptions that (i) there exists a lower bound, valid for any input, on the distance between the outputs of each pair of neurons, and (ii) the proposed diversity loss increases this lower bound. These two assumptions are central to the theoretical developments, but are quite arguable. For example, a pair of neurons that is not activated by a sample, which is quite common, leads to a zero lower bound.
* The experimental validation is not convincing. Only shallow networks are considered (2 or 3 layers), and the optimization strategy, including the grid-search strategy for hyperparameter selection, is not described.
Minor issue: positioning with respect to related works is limited. For example, layer redundancy (which is the opposite of diversity) has been considered in the context of network pruning: https://openaccess.thecvf.com/content_CVPR_2019/papers/He_Filter_Pruning_via_Geometric_Median_for_Deep_Convolutional_Neural_Networks_CVPR_2019_paper.pdf |
ICLR | Title
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Abstract
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
1 INTRODUCTION
In recent years, deep latent variable models (Kingma & Welling, 2013; Rezende et al., 2014; Goodfellow et al., 2014) have become a popular toolbox in the machine learning community for a wide range of applications (Ledig et al., 2016; Reed et al., 2016; Isola et al., 2016). At the same time, the compact representation, sparsity and interpretability of the latent feature space have been identified as crucial elements of such models. In this context, multiple contributions have been made in the field of relevant feature extraction (Chalk et al., 2016; Alemi et al., 2016) and learning of disentangled representations of the latent space (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017).
In this paper, we consider latent space representation learning. We focus on disentangling features with the copula transformation and, building on that, on forcing a compact low-dimensional representation with a sparsity-inducing model formulation. To this end, we adopt the deep information bottleneck (DIB) model (Alemi et al., 2016) which combines the information bottleneck and variational autoencoder methods. The information bottleneck (IB) principle (Tishby et al., 2000) identifies relevant features with respect to a target variable. It takes two random vectors x and y and searches for a third random vector t which, while compressing x, preserves information contained in y. A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a generative model which learns a latent representation t of x by using the variational approach.
Although DIB produces good results in terms of image classification and adversarial attacks, it suffers from two major shortcomings. First, the IB solution only depends on the copula of x and y and is thus invariant to strictly monotone transformations of the marginal distributions. DIB does not preserve this invariance, which means that it is unnecessarily complex by also implicitly modelling the marginal distributions. We elaborate on the fundamental issues arising from this lack of invariance in Section 3. Second, the latent space of the IB is not sparse which results in the fact that a compact feature representation is not feasible.
Our contribution is two-fold: In the first step, we restore the invariance properties of the information bottleneck solution in the DIB. We achieve this by applying a transformation of x and y which makes the latent space only depend on the copula. This is a way to fully represent all the desirable features inherent to the IB formulation. The model is also simplified by ensuring robust and fully non-parametric treatment of the marginal distributions. In addition, the problems arising from the lack of invariance to monotone transformations of the marginals are solved. In the second step, once the invariance properties are restored, we exploit the sparse structure of the latent space of DIB. This is possible thanks to the copula transformation in conjunction with using the sparse parametrisation
∗These authors contributed equally.
of the information bottleneck, proposed by (Rey et al., 2014). It translates to a more compact latent space that results in a better interpretability of the model.
The remainder of this paper is structured as follows: In Section 2, we review publications on related models. Subsequently, in Section 3, we describe the proposed copula transformation and show how it fixes the shortcomings of DIB, as well as elaborate on the sparsity induced in the latent space. In Section 4, we present results of both synthetic and real data experiments. We conclude our paper in Section 5.
2 RELATED WORK
The IB principle was introduced by (Tishby et al., 2000). The idea is to compress the random vector x while retaining the information of the random vector y. This is achieved by solving the following variational problem: minp(t|x)I(x; t)−λI(t; y), with the assumption that y is conditionally independent of t given x, and where I stands for mutual information. In recent years, copula models were combined with the IB principle in (Rey & Roth, 2012) and extended to the sparse meta-Gaussian IB (Rey et al., 2014) to become invariant against strictly monotone transformations. Moreover, the IB method has been applied to the analysis of deep neural networks in (Tishby & Zaslavsky, 2015), by quantifying mutual information between the network layers and deriving an information theoretic limit on DNN efficiency.
The variational bound and reparametrisation trick for autoencoders were introduced in (Kingma & Welling, 2013; Rezende et al., 2014). The variational autoencoder aims to learn the posterior distribution of the latent space p(t|x) and the decoder p(x|t). The general idea of combining the two approaches is to identify the solution t of the information bottleneck with the latent space t of the variational autoencoder. Consequently, the terms I(x; t) and I(t; y) in the IB problem can be expressed in terms of the parametrised conditionals p(t|x), p(y|t). Variational lower bounds on the information bottleneck optimisation problem have been considered in (Chalk et al., 2016) and (Alemi et al., 2016). Both approaches, however, treat the differential entropy of the marginal distribution as a positive constant, which is not always justified (see Section 3). A related model is introduced in (Pereyra et al., 2017), where a penalty on the entropy of output distributions of neural networks is imposed. These approaches do not introduce the invariance against strictly monotone transformations and thus do not address the issues we identify in Section 3.
A sizeable amount of work on modelling the latent space of deep neural networks has been done. The authors of (Alvarez & Salzmann, 2016) propose the use of a group sparsity regulariser. Other techniques, e.g. in (Mozer & Smolensky, 1989) are based on removing neurons which have a limited impact on the output layer, but they frequently do not scale well with the overall network size. More recent approaches include training neural networks of smaller size to mimic a deep network (Hinton et al., 2015; Romero et al., 2014). In addition, multiple contributions have been proposed in the area of latent space disentanglement (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017; Denton & Birodkar, 2017). None of the approaches consider the influence of the copula on the modelled latent space.
Copula models have been proposed in the context of Bayesian variational methods in (Suh & Choi, 2016), (Tran et al., 2015) and (Han et al., 2016). The former approaches focus on treating the latent space variables as indicators of local approximations of the original space. None of the three approaches relate to the information bottleneck framework.
3 MODEL
3.1 FORMULATION
In order to specify our model, we start with a parametric formulation of the information bottleneck:
$$\max_{\phi,\theta}\; -I_\phi(t;x) + \lambda I_{\phi,\theta}(t;y), \quad (1)$$
where I stands for mutual information with its parameters in the subscript. A parametric form of the conditionals pφ(t|x) and pθ(y|t) as well as the information bottleneck Markov chain t − x − y are assumed. A graphical illustration of the proposed model is depicted in Figure 1.
The two terms in Eq. (1) have the following forms:
$$I_\phi(T;X) = D_{KL}\big(p(t|x)p(x)\,\|\,p(t)p(x)\big) = E_{p(x)}\,D_{KL}\big(p_\phi(t|x)\,\|\,p(t)\big), \quad (2)$$
and
$$I_{\phi,\theta}(T;Y) = D_{KL}\Big(\Big[\int p(t|y,x)\,p(y,x)\,dx\Big]\,\Big\|\;p(t)p(y)\Big) = E_{p(x,y)}E_{p_\phi(t|x)}\log p_\theta(y|t) + h(Y), \quad (3)$$
because of the Markov assumption in the information bottleneck model pφ(t|x, y) = pφ(t|x). We denote with h(y) = −Ep(y)[log p(y)] the entropy for discrete y and the differential entropy for continuous y. We then assume a conditional independence copula and Gaussian margins:
$$p_\phi(t|x) = c_{T|X}\big(u(t|x)\,\big|\,x\big)\prod_j p_{\phi_j}(t_j|x) = \prod_j N\big(t_j\,\big|\,\mu_j(x), \sigma_j^2(x)\big)$$
where tj is the jth marginal of t = (t1, . . . , td), ct|x is the copula density of t|x, u(t|x) := Ft|x(t|x) is the uniform density indexed by t|x, and the functions µj(x), σ2j (x) are implemented by deep networks. We make the same assumption about pθ(y|t).
3.2 MOTIVATION
As we stated in Section 1, the deep information bottleneck model derived in Section 3.1 is not invariant to strictly increasing transformations of the marginal distributions. The IB method is formulated in terms of mutual information I(x, y), which depends only on the copula and therefore does not depend on monotone transformations of the marginals: I(x, y) = MI(x, y) −MI(x) −MI(y), where MI(x), for x = (x1, . . . , xd), denotes the multi-information, which is equal to the negative copula entropy, as shown by Ma & Sun (2011):
$$MI(X) := D_{KL}\Big(p(x)\,\Big\|\,\prod_j p_j(x_j)\Big) = \int c_X(u(x))\log c_X(u(x))\,du = -h\big(c_X(u(x))\big). \quad (4)$$
Issues with lack of invariance to marginal transformations.
1. On the encoder side (Eq. (2)), the optimisation is performed over the parametric conditional margins pφ(tj |x) in Iφ(t;x) = Ep(x)DKL (pφ(t|x)‖p(t)). When a monotone transformation xj → x̃j is applied, the required invariance property can only be guaranteed if the model for φ (in our case a deep network) is flexible enough to compensate for this transformation, which can be a severe problem in practice (see example in Section 4.1).
2. On the decoder side, assuming Gaussian margins in pθ(yj |t) might be inappropriate for modelling y if the domain of y is not equal to the real numbers, e.g. when y is defined only on a bounded
interval. If used in a generative way, the model might produce samples outside the domain of y. Even if other distributions than Gaussian are considered, such as truncated Gaussian, one still needs to make assumptions concerning the marginals. According to the IB formulation, such assumptions are unnecessary.
3. Also on the decoder side, we have: Iφ(t; y) = Ep(x,y)Epφ(t|x) log pθ(y|t) + h(y). The authors of Alemi et al. (2016) argue that since h(y) is constant, it can be ignored in computing Iφ(t; y). This is true for a fixed or for a discrete y, but not for the class of monotone transformations of y, which should be the case for a model specified with mutual informations only. Since the left hand side of this equation (Iφ(t; y)) is invariant against monotone transformations, and h(y) in general depends on monotone transformations, the first term on the right hand side (Ep(x,y)Epφ(t|x) log pθ(y|t)) cannot be invariant to monotone transformations. In fact, under such transformations, the differential entropy h(y) can take any value from −∞ to +∞, which can be seen easily by decomposing the entropy into the copula entropy and the sum of marginal entropies (here, j stands for the jth dimension):
$$h(y) = h\big(c_y(u(y))\big) + \sum_j h(y_j) = -MI(y) + \sum_j h(y_j). \quad (5)$$
The first term (i.e. the copula entropy which is equal to the negative multi-information, as in Eq. (4)) is a non-positive number. The marginal entropies h(yj) can take any value when using strictly increasing transformations (for instance, the marginal entropy of a uniform distribution on [a, b] is log(b − a)). As a consequence, the entropy term h(y) in Eq. (3) can be treated as a constant only either for one specific y or for discrete y, but not for all elements of the equivalence class containing all monotone transformations of y. Moreover, every such transformation would lead to different (I(x, t), I(y, t)) pairs in the information curve, which basically makes this curve arbitrary. Thus, h(y) being constant is a property that needs to be restored.
3.3 PROPOSED SOLUTION
The issues described in Section 3.2 can be fixed by using transformed variables (for a d dimensional x = (x1, . . . , xd), xj stands for the jth dimension):
$$\tilde{x}_j = \Phi^{-1}\big(\hat{F}(x_j)\big), \qquad x_j = \hat{F}^{-1}\big(\Phi(\tilde{x}_j)\big), \quad (6)$$
where Φ is the Gaussian cdf and F̂ is the empirical cdf. The same transformation is applied to y. In the copula literature, these transformed variables are sometimes called normal scores. Note that the mapping is (approximately) invertible: xj = F̂−1(Φ(x̃j)), with F̂−1 being the empirical quantiles treated as a function (e.g. by linear interpolation). This transformation fixes the invariance problem on the encoding side (issue 1), as well as the problems on the decoding side: problem 2 disappeared because the transformed variables x̃j are standard normal distributed, and problem 3 disappeared because the decoder part (Eq. (3)) now has the form:
$$E_{p(\tilde{x},\tilde{y})}E_{p_\phi(t|\tilde{x})}\log p_\theta(\tilde{y}|t) = I_\phi(t;\tilde{y}) + MI(\tilde{y}) - \sum_j h(\tilde{y}_j) = I_\phi(t;\tilde{y}) - h\big(c_{inv}(u(\tilde{y}))\big) \quad (7)$$
where cinv(u(ỹ)) is indeed constant for all strictly increasing transformations applied to y.
Having solved the IB problem in the transformed space, we can go back to the original space by using the inverse transformation according to Eq. (6) xj = F̂−1(Φ(x̃j)). The resulting model is thus a variational autoencoder with x replaced by x̃ in the first term and y replaced by ỹ in the second term.
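For illustration, the normal-scores transformation of Eq. (6) can be implemented with an empirical CDF followed by the Gaussian quantile function. The sketch below is not the original implementation; the rescaling of ranks to the open interval (0, 1) is our own choice to keep the Gaussian quantiles finite.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Eq. (6): x_tilde_j = Phi^{-1}(F_hat(x_j)), applied column-wise."""
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    u = rankdata(x, axis=0) / (n + 1)     # empirical CDF evaluated at the samples
    return norm.ppf(u)                    # Gaussian quantiles -> standard-normal margins

def inverse_normal_scores(x_tilde, x_train):
    """Approximate inverse of Eq. (6): x_j = F_hat^{-1}(Phi(x_tilde_j)),
    using the empirical quantiles of the training data (linear interpolation)."""
    x_tilde, x_train = np.asarray(x_tilde), np.asarray(x_train)
    out = np.empty_like(x_tilde, dtype=float)
    for j in range(x_train.shape[1]):
        probs = np.clip(norm.cdf(x_tilde[:, j]), 0.0, 1.0)
        out[:, j] = np.quantile(x_train[:, j], probs)
    return out

X = np.random.gamma(shape=2.0, size=(1000, 5))   # skewed toy marginals
X_tilde = normal_scores(X)                        # approx. N(0, 1) margins, same copula
```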
Technical details. We assume a simple prior p(t) = N (t; 0, I). Therefore, the KL divergence DKL (pφ(t|x̃)‖p(t)) is a divergence between two Gaussian distributions and admits an analytical form. We then estimate
$$I(t;\tilde{x}) = E_{p(\tilde{x})}\,D_{KL}\big(p_\phi(t|\tilde{x})\,\|\,p(t)\big) \approx \frac{1}{n}\sum_i D_{KL}\big(p_\phi(t|\tilde{x}_i)\,\|\,p(t)\big) \quad (8)$$
and all the gradients on (mini-)batches.
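Concretely, with a Gaussian encoder $p_\phi(t|\tilde{x}) = N(\mu(\tilde{x}), \mathrm{diag}(\sigma^2(\tilde{x})))$ and the prior $N(0, I)$, the KL term in Eq. (8) has the standard closed form below; this is our own illustrative sketch, where mu and sigma are assumed to be the tensors produced by the encoder network for a mini-batch.

```python
import torch

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions
    and averaged over the mini-batch; this is the mini-batch estimator of Eq. (8)."""
    kl_per_sample = 0.5 * torch.sum(mu ** 2 + sigma ** 2 - 2.0 * torch.log(sigma) - 1.0, dim=1)
    return kl_per_sample.mean()
```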
For the decoder side, Ep(x̃,ỹ)Epφ(t|x̃) log pθ(ỹ|t) is needed. We train our model using the backpropagation algorithm. However, this algorithm can only handle deterministic nodes. In order to overcome
this problem, we make use of the reparametrisation trick (Kingma & Welling, 2013; Rezende et al., 2014):
$$I(t;\tilde{y}) = E_{p(\tilde{x},\tilde{y})}\,E_{\epsilon\sim N(0,I)}\sum_j \log p_\theta\big(\tilde{y}_j \,\big|\, t = \vec{\mu}(\tilde{x}) + \mathrm{diag}\big(\vec{\sigma}(\tilde{x})\big)\,\epsilon\big) + \text{const.}, \quad (9)$$
with $\tilde{y}_j = \Phi^{-1}\big(\hat{F}(y_j)\big)$.
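A minimal sketch of the reparametrised decoder term in Eq. (9), assuming a Gaussian decoder with unit variance (as used in the experiments below) and hypothetical encoder/decoder modules; it is an illustration of the estimator, not the authors' code.

```python
import torch

def decoder_term(x_tilde, y_tilde, encoder, decoder):
    """Monte-Carlo estimate of E log p_theta(y_tilde | t) with the reparametrisation trick.
    encoder(x_tilde) -> (mu, sigma); decoder(t) -> mean of a unit-variance Gaussian over y_tilde."""
    mu, sigma = encoder(x_tilde)
    eps = torch.randn_like(sigma)
    t = mu + sigma * eps                                         # t = mu(x~) + diag(sigma(x~)) eps
    y_hat = decoder(t)
    log_lik = -0.5 * torch.sum((y_tilde - y_hat) ** 2, dim=1)    # log N(y~ | y_hat, I), up to a constant
    return log_lik.mean()

# The full objective of Eq. (1) then combines this with the KL sketch above:
# loss = kl_to_standard_normal(mu, sigma) - lam * decoder_term(x_tilde, y_tilde, encoder, decoder)
```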
3.4 SPARSITY OF THE LATENT SPACE
In this section we explain how the sparsity constraint on the information bottleneck along with the copula transformation result in sparsity of the latent space t. We first introduce the Sparse Gaussian Information Bottleneck and subsequently show how augmenting it with the copula transformation leads to the sparse t.
Sparse Gaussian Information Bottleneck. Recall that the information bottleneck compresses x to a new variable t by minimising I(x; t)− λI(t; y). This ensures that some amount of information with respect to a second “relevance” variable y is preserved in the compression.
The assumption that x and y are jointly Gaussian-distributed leads to the Gaussian Information Bottleneck (Chechik et al., 2005) where the solution t can be proved to also be Gaussian distributed. In particular, if we denote the marginal distribution of x: x ∼ N (0,Σx), the optimal t is a noisy projection of x of the following form:
$$t = Ax + \xi,\quad \xi\sim N(0,I) \;\Rightarrow\; t|x \sim N(Ax, I),\quad t\sim N\big(0, A\Sigma_x A^\top + I\big).$$ The mutual information between $x$ and $t$ is then equal to $I(x;t) = \frac{1}{2}\log\big|A\Sigma_x A^\top + I\big|$. In the sparse Gaussian Information Bottleneck, we additionally assume that $A$ is diagonal, so that the compressed $t$ is a sparse version of $x$. Intuitively, sparsity follows from the observation that for a pair of random variables $x, x'$, any full-rank projection $Ax'$ of $x'$ would lead to the same mutual information, since $I(x;x') = I(x;Ax')$, and a reduction in mutual information can only be achieved by a rank-deficient matrix $A$. For diagonal projections, this immediately implies sparsity of $A$.
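The effect of a diagonal projection on $I(x;t)$ is easy to check numerically. The following sketch is our own illustration with a toy covariance; it is not taken from the paper.

```python
import numpy as np

def mutual_info_gaussian_ib(A, Sigma_x):
    """I(x; t) = 0.5 * log det(A Sigma_x A^T + I) for t = A x + xi, xi ~ N(0, I)."""
    d = A.shape[0]
    _, logdet = np.linalg.slogdet(A @ Sigma_x @ A.T + np.eye(d))
    return 0.5 * logdet

rng = np.random.default_rng(0)
Sigma_x = np.eye(4) + 0.5 * np.ones((4, 4))       # toy covariance of x
A_full = rng.normal(size=(4, 4))                   # generic full-rank projection
A_diag = np.diag([1.0, 1.0, 0.0, 0.0])             # diagonal A with two coordinates switched off

print(mutual_info_gaussian_ib(A_full, Sigma_x))    # keeps information about all of x
print(mutual_info_gaussian_ib(A_diag, Sigma_x))    # smaller: sparse t keeps only 2 of 4 coordinates
```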
Sparse latent space of the Deep Information Bottleneck. We now proceed to explain the sparsity induced in the latent space of the copula version of the DIB introduced in Section 3.3. We will assume a possibly general, abstract pre-transformation of x, fβ , which accounts for the encoder network along with the copula transformation of x. Then we will show how allowing for this abstract pre-transformation, in connection with the imposed sparsity constraint of the sparse information bottleneck described above, translates to the sparsity of the latent space of the copula DIB. By sparsity we understand the number of active neurons in the last layer of the encoder.
To this end, we use the Sparse Gaussian Information Bottleneck model described above. We analyse the encoder part of the DIB, described with I(x, t). Consider the general Gaussian Information Bottleneck (with x and y jointly Gaussian and a full matrixA) and the deterministic pre-transformation, fβ(x), performed on x. The pre-transformation is parametrised by a set of parameters β, which might be weights of neurons should fβ be implemented as a neural network. Denote by M a n× p matrix which contains n i.i.d. samples of Afβ(x), i.e. M = AZ with Z = (fβ(x1), . . . , fβ(xn))>. The optimisation of mutual information I(x, t) in min I(x; t)− λI(t; y) is then performed over M and β.
Given $f_\beta$ and the above notation, the estimator of $I(x;t) = \frac{1}{2}\log|A\Sigma_x A^\top + I|$ becomes:
$$\hat{I}(x;t) = \frac{1}{2}\log\left|\frac{1}{n}MM^\top + I\right|, \quad (10)$$
which would further simplify to $\hat{I}(x;t) = \frac{1}{2}\sum_i\log(D_{ii}+1)$ if the pre-transformation $f_\beta$ were indeed such that $D := \frac{1}{n}MM^\top$ were diagonal. This is equivalent to the Sparse Gaussian Information Bottleneck model described above. Note that this means that the sparsity constraint in the Sparse Gaussian IB does not cause any loss of generality of the IB solution as long as the abstract pre-transformation $f_\beta$ makes it possible to diagonalise $\frac{1}{n}MM^\top$ in Eq. (10). We can, however, approximate this case by forcing this diagonalisation in Eq. (10), i.e. by only considering the diagonal part of the matrix: $I'(x;t) = \frac{1}{2}\log\left|\mathrm{diag}\left(\frac{1}{n}MM^\top + I\right)\right|$. We now explain why this approximation (replacing $\hat{I}(x;t)$ with $I'(x;t)$) is justified and how it leads to $f_\beta$ finding a low-dimensional representation of the latent space. Note that for any positive definite matrix $B$, the determinant $|B|$ is always upper bounded by $\prod_i B_{ii} = |\mathrm{diag}(B)|$, which is a consequence of Hadamard's inequality. Thus, instead of minimising $\hat{I}(x;t)$, we minimise an upper bound $I'(x;t) \ge \hat{I}(x;t)$ in the Information Bottleneck cost function. Equality is obtained if the transformation $f_\beta$, which we assume to be part of an "end-to-end" optimisation procedure, indeed successfully diagonalised $D = \frac{1}{n}MM^\top$. Note that equality in Hadamard's inequality is equivalent to $D + I$ being orthogonal, thus $f_\beta$ is forced to find the "most orthogonal" representation of the inputs in the latent space. Using a highly flexible $f_\beta$ (for instance, modelled by a deep neural network), we might approximate this situation reasonably well. This explains how the copula transformation translates to a low-dimensional representation of the latent space.
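The Hadamard-inequality argument can also be checked numerically: the diagonal surrogate $I'(x;t)$ never falls below $\hat{I}(x;t)$, and the two coincide when the rows of $M$ are orthogonal. The sketch below is our own illustration; we arrange $M$ with one row per latent unit and one column per sample, so that $MM^\top/n$ is the empirical second-moment matrix of the code, which may differ from the row/column convention used above.

```python
import numpy as np

def I_hat(M):
    """0.5 * log|MM^T/n + I|, as in Eq. (10)."""
    n = M.shape[1]
    D = M @ M.T / n
    return 0.5 * np.linalg.slogdet(D + np.eye(D.shape[0]))[1]

def I_prime(M):
    """0.5 * log|diag(MM^T/n + I)|, the upper bound used in the cost function."""
    n = M.shape[1]
    D = M @ M.T / n
    return 0.5 * np.sum(np.log(np.diag(D) + 1.0))

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 200))                  # 5 latent units, 200 samples
assert I_prime(M) >= I_hat(M) - 1e-9           # Hadamard: the diagonal bound is never smaller

Q, _ = np.linalg.qr(rng.normal(size=(200, 5)))
M_orth = (Q * 3.0).T                           # orthogonal rows -> the bound is tight
print(I_hat(M_orth), I_prime(M_orth))          # the two values coincide (up to rounding)
```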
We indeed see disentanglement and sparse structure of the latent space learned by the copula DIB model by comparing it to the plain DIB without the copula transformation. We demonstrate it in Section 4.
4 EXPERIMENTS
We now proceed to experimentally verify the contributions of the copula Deep Information Bottleneck. The goal of the experiments is to test the impact of the copula transformation. To this end, we perform a series of pair-wise experiments, where DIB without and with (cDIB) the copula transformation are tested in the same set-up. We use two datasets (artificial and real-world) and devise multiple experimental set-ups.
4.1 ARTIFICIAL DATA
First, we construct an artificial dataset such that a high-dimensional latent space is needed for its reconstruction (the dataset is reconstructed when samples from the latent space spatially coincide with it in its high-dimensional space). We perform monotone transformations on this dataset and test the difference between DIB and cDIB on reconstruction capabilities as well as classification predictive score.
Dataset and set-up. The model used to generate the data consists of two input vectors $x_1$ and $x_2$ drawn from a uniform distribution on $[0, 2]$ and vectors $k_1$ and $k_2$ drawn uniformly from $[0, 1]$. Additional inputs are $x_{i=3,\dots,10} = a_i k_1 + (1-a_i) k_2 + 0.3\, b_i$ with $a_i, b_i$ drawn from a uniform distribution on $[0, 1]$. All input vectors $x_{1,\dots,10}$ form the input matrix $X$. Latent variables $z_1 = \sqrt{x_1^2 + x_2^2}$ and $z_2 = z_1 + x_4$ are defined and then normalised by dividing through their maximum value. Finally, random noise is added. Two target variables $y_1 = z_2\cos(1.75\,\pi\, z_1)$ and $y_2 = z_2\sin(1.75\,\pi\, z_1)$ are then calculated. $y_1$ and $y_2$ form a spiral if plotted in two dimensions. The angle and the radius of the spiral are highly correlated, so a one-dimensional latent space can only reconstruct the backbone of the spiral. In order to reconstruct the details of the radial function, one has to use a latent space of at least two dimensions. We generate 200k samples from $X$ and $y$. $X$ is further transformed to beta densities using strictly increasing transformations. We split the samples into test (20k samples) and training (180k samples) sets. The generated samples are then transformed with the copula transformation (Eq. (6)) to $\tilde{X}$ and $\tilde{y}$ and split in the same way into test and training sets. This gives us the four input sets $X_{train}, X_{test}, \tilde{X}_{train}, \tilde{X}_{test}$ and the four target sets $y_{train}, y_{test}, \tilde{y}_{train}, \tilde{y}_{test}$.
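A sketch of this data-generating process is given below. It reflects our reading of the description: whether $a_i, b_i$ are per-dimension scalars or per-sample vectors is ambiguous, so we treat $a_i$ as a scalar mixing weight and $b_i$ as a per-sample noise vector, and the noise level added to $z_1, z_2$ (0.01 here) is an assumption since it is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

x1, x2 = rng.uniform(0, 2, n), rng.uniform(0, 2, n)
k1, k2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

cols = [x1, x2]
for i in range(3, 11):                       # x_3 ... x_10
    a = rng.uniform(0, 1)                    # a_i: scalar mixing weight (our interpretation)
    b = rng.uniform(0, 1, n)                 # b_i: per-sample noise (our interpretation)
    cols.append(a * k1 + (1 - a) * k2 + 0.3 * b)
X = np.stack(cols, axis=1)

z1 = np.sqrt(x1 ** 2 + x2 ** 2)
z2 = z1 + X[:, 3]                            # x_4 is the fourth column of X
z1, z2 = z1 / z1.max(), z2 / z2.max()
z1 = z1 + 0.01 * rng.normal(size=n)          # noise level is an assumption
z2 = z2 + 0.01 * rng.normal(size=n)

y = np.stack([z2 * np.cos(1.75 * np.pi * z1),
              z2 * np.sin(1.75 * np.pi * z1)], axis=1)   # spiral targets y_1, y_2
# X is subsequently mapped to beta marginals by strictly increasing transformations (not shown).
```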
We use a latent layer with ten nodes that model the means of the ten-dimensional latent space t. The variance of the latent space is set to 1 for simplicity. The encoder as well as the decoder consist of a neural network with two fully-connected hidden layers with 50 nodes each. We use the softplus function as the activation function. Our model is trained using mini batches (size = 500) with the Adam optimiser (Kingma & Ba, 2014) for 70000 iterations using a learning rate of 0.0006.
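A PyTorch rendering of this architecture (two hidden layers of 50 softplus units in encoder and decoder, a 10-dimensional latent mean with the variance fixed to 1) is sketched below. This is our own minimal sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=10, latent_dim=10, hidden=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Softplus(),
                                 nn.Linear(hidden, hidden), nn.Softplus(),
                                 nn.Linear(hidden, latent_dim))
    def forward(self, x):
        mu = self.net(x)
        sigma = torch.ones_like(mu)          # latent variance fixed to 1, as stated in the text
        return mu, sigma

class Decoder(nn.Module):
    def __init__(self, latent_dim=10, out_dim=2, hidden=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Softplus(),
                                 nn.Linear(hidden, hidden), nn.Softplus(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, t):
        return self.net(t)                    # mean of a unit-variance Gaussian over y_tilde

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=6e-4)
# Training then iterates over mini-batches of size 500 for 70000 steps, as described above.
```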
Experiment 1. In the first experiment, we compare the information curves produced by the DIB and its copula augmentation (Figure 2(a)). To this end, we use the sets (Xtrain, ytrain) and (X̃train, ỹtrain) and record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.06 during training. One can observe an increase in the mutual information from approximately 6 in the DIB to approximately 11 in the copula DIB. At the same time, only two dimensions are used in the latent space t by the copula DIB. The version without copula does not provide competitive results despite using 10 out of 18 dimensions of the latent space t. In Appendix B, we extend this experiment to comparison of information curves for other pre-processing techniques as well as to subjecting the training data to monotonic transformations other than the beta transformation.
Experiment 2. Building on Experiment 1, we use the trained models for assessing their predictive quality on test data (Xtest, ytest) and (X̃test, ỹtest). We compute predictive scores of the latent space t with respect to the generated y in the form of mutual information I(t; y) for all values of the parameter λ. The resulting information curve shows an increased predictive capability of cDIB in Figure 2(b) and exhibits no difference to the information curve produced in Experiment 1. Thus, the increased mutual information reported in Experiment 1 cannot only be attributed to overfitting.
Experiment 3. In the third experiment, we qualitatively assess the reconstruction capability of cDIB compared to plain DIB (Figure 3). We choose the value of λ such that in both models two dimensions are active in the latent space. Figure 3(b) shows a detailed reconstruction of y. The reconstruction quality of plain DIB on test data results in a tight backbone which is not capable of reconstructing y (Figure 3(a)).
Experiment 4. We further inspect the information curves of DIB and cDIB by testing how the copula transformation makes the model more resilient against outliers and adversarial attacks in the training phase. To simulate an adversarial attack, we randomly choose 5% of all entries in the datasets $X_{train}$ and $\tilde{X}_{train}$ and turn them into outliers by adding uniformly sampled noise within the range [1, 5]. We again compute information curves for the training procedure and compare normal training with training on data subject to an attack, for both the copula and non-copula models. The results (Figure 4(a)) show that the copula model is more robust against outlier data than the plain one. We attribute this behaviour directly to the copula transformation, as ranks are less sensitive to outliers than raw data.
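This corruption step can be reproduced along the following lines; the sketch reflects our reading of the description (5% of all matrix entries, chosen uniformly at random, receive additive U[1, 5] noise).

```python
import numpy as np

def inject_outliers(X, frac=0.05, low=1.0, high=5.0, seed=0):
    """Return a copy of X in which `frac` of the entries are perturbed by additive U[low, high] noise."""
    rng = np.random.default_rng(seed)
    X_out = X.copy()
    idx = rng.choice(X.size, size=int(frac * X.size), replace=False)
    rows, cols = np.unravel_index(idx, X.shape)
    X_out[rows, cols] += rng.uniform(low, high, size=idx.size)
    return X_out
```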
Experiment 5. In this experiment, we investigate how the copula transformation affects convergence of the neural networks making up the DIB. We focus on the encoder and track the values
of the loss function. Figure 4(b) shows a sample comparison of convergence of DIB and cDIB for λ = 100. One can see that the cDIB starts to converge around iteration no. 1000, whereas the plain DIB takes longer. This can be explained by the fact that in the copula model the marginals are normalised to the same range of normal quantiles by the copula transformation. This translates to higher convergence rates.
4.2 REAL-WORLD DATA
We continue analysing the impact of the copula transformation on the latent space of the DIB with a real-world dataset. We first report information curves analogous to Experiment 1 (Section 4.1) and proceed to inspect the latent spaces of both models along with sensitivity analysis with respect to λ.
Dataset and Set-up. We consider the unnormalised Communities and Crime dataset Lyons et al. (1998) from the UCI repository1. The dataset consisted of 125 predictive, 4 non-predictive and 18 target variables with 2215 samples in total. In a preprocessing step, we removed all missing values from the dataset. In the end, we used 1901 observations with 102 predictive and 18 target variables in our analysis.
1http://archive.ics.uci.edu/ml/datasets/communities+and+crime+unnormalized
We use a latent layer with 18 nodes that models the means of the 18-dimensional latent space t. Again, the variance of the latent and the output space is set to 1. The stochastic encoder as well as the stochastic decoder consist of a neural network with two fully-connected hidden layers with 100 nodes each. Softplus is employed as the activation function. The decoder uses a Gaussian likelihood. Our model is trained for 150000 iterations using mini batches with a size of 1255. As before, we use Adam (Kingma & Ba, 2014) with a learning rate of 0.0005.
Experiment 6. Analogously to Experiment 1 (Section 4.1), information curves stemming from the DIB and cDIB models have been computed. We record the values of $I(x;t)$ and $I(y;t)$ while multiplying the λ parameter every 500 iterations by 1.01 during training. Again, the information curve for the copula model yields larger values of mutual information, which we attribute to the increased flexibility of the model, as we pointed out in Section 3.3. In addition, the application of the copula transformation leads to a much lower number of used dimensions in the latent space. For example, copula DIB uses only four dimensions in the latent space for the highest λ values. DIB, on the other hand, needs eight dimensions in the latent space and nonetheless results in lower mutual information scores. In order to show that our information curves are significantly different, we perform a Kruskal-Wallis rank test (p-value of $1.6 \times 10^{-16}$).
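The significance test corresponds to a standard Kruskal-Wallis H-test applied to the two sets of per-λ mutual-information values; a short sketch with scipy is shown below. The arrays are hypothetical stand-ins for the recorded curves, used only to illustrate the call.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical per-lambda I(y; t) values for the two models (stand-ins for the real curves).
info_curve_dib = np.array([2.1, 2.4, 2.8, 3.0, 3.1])
info_curve_cdib = np.array([2.9, 3.3, 3.6, 3.9, 4.1])

stat, p_value = kruskal(info_curve_dib, info_curve_cdib)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3g}")
```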
Experiment 7. This experiment illustrates the difference in the disentanglement of the latent spaces of the DIB model with and without the copula transformation. We select two variables which yielded highest correlation with the target variable arsons and plot them along with their densities. In order to obtain the corresponding class labels (rainbow colours in Figure 6), we separate the values of arsons in eight equally-sized bins. A sample comparison of latent spaces of DIB and cDIB for λ = 21.55 is depicted in Figure 6. A more in-depth analysis of sensitivity of the learned latent space to λ is presented in Appendix A. The latent space t of DIB appears consistently less structured than that of cDIB, which is also reflected in the densities of the two plotted variables. In contrast, we can identify a much clearer structure in the latent space with respect to our previously calculated class labels.
5 CONCLUSION
We have presented a novel approach to compact representation learning of deep latent variable models. To this end, we showed that restoring invariance properties of the Deep Information Bottleneck with a copula transformation leads to disentanglement of the features in the latent space. Subsequently, we analysed how the copula transformation translates to sparsity in the latent space of the considered model. The proposed model allows for a simplified and fully non-parametric treatment of marginal distributions which has the advantage that it can be applied to distributions with arbitrary marginals. We evaluated our method on both artificial and real data. We showed that in practice the copula transformation leads to latent spaces that are disentangled, have an increased prediction capability and are resilient to adversarial attacks. All these properties are not sensitive to the only hyperparameter of the model, λ.
In Section 3.2, we motivated the copula transformation for the Deep Information Bottleneck with the lack of invariance properties present in the original Information Bottleneck model, making the copula augmentation particularly suited for the DIB. The relevance of the copula transformation, however, reaches beyond the variational autoencoder, as evidenced by e.g. resilience to adversarial attacks or the positive influence on convergence rates presented in Section 4. These are advantages of our model that do not simply follow from restoring the Information Bottleneck properties to the DIB, but are additional benefits of the copula. The copula transformation thus promises to be a simple but powerful addition to the general deep learning toolbox.
ACKNOWLEDGMENTS
This work was partially supported by the Swiss National Science Foundation under grants CR32I2 159682 and 51MRP0 158328 (SystemsX.ch).
B EXTENSION OF EXPERIMENT 1
Building on Experiment 1 from Section 4, we again compare the information curves produced by the DIB and its copula augmentation. We compare the copula transformation with data normalisation (transformation to mean 0 and variance 1) in Figure 9(a). We also replace the beta transformation with gamma in the experimental set-up described in Section 4 and report the results in Figure 9(b). As in Experiment 1, one can see that the information curve for the copula version of DIB lies above the plain one. The latent space uses fewer dimensions as well.

1. What is the main contribution of the paper regarding the Deep Information Bottleneck approach?
2. What are the strengths and weaknesses of the proposed method, particularly in its ability to preserve information and converge faster than the baseline?
3. Do you have any concerns about the experimentation and the scope of the data used in the study?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as providing more details in the experimental section, comparing with standard preprocessing techniques, and targeting other common datasets for disentangling?

Review
This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.
(significance) This is a promising idea. This paper builds on the information theoretic perspective of representation learning, and makes progress towards characterizing what makes for a good representation. Invariance to transforms of the marginal distributions is clearly a useful property, and the proposed method seems effective in this regard.
Unfortunately, I do not believe the paper is ready for publication as it stands, as it suffers from lack of clarity and the experimentation is limited in scope.
(clarity) While Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the “implicit form” are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? There are also many missing details in the experimental section: how were the number of “active” components selected ? Which versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly. I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.
(quality) The experiments are interesting and seem well executed. Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. Similarly, the representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.
Pros:
* Theoretically well motivated
* Promising results on synthetic task
* Potential for impact
Cons:
* Paper suffers from lack of clarity (method and experimental section)
* Lack of ablative / introspective experiments
* Weak empirical results (small or toy datasets only).
ICLR | Title
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Abstract
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
1 INTRODUCTION
In recent years, deep latent variable models (Kingma & Welling, 2013; Rezende et al., 2014; Goodfellow et al., 2014) have become a popular toolbox in the machine learning community for a wide range of applications (Ledig et al., 2016; Reed et al., 2016; Isola et al., 2016). At the same time, the compact representation, sparsity and interpretability of the latent feature space have been identified as crucial elements of such models. In this context, multiple contributions have been made in the field of relevant feature extraction (Chalk et al., 2016; Alemi et al., 2016) and learning of disentangled representations of the latent space (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017).
In this paper, we consider latent space representation learning. We focus on disentangling features with the copula transformation and, building on that, on forcing a compact low-dimensional representation with a sparsity-inducing model formulation. To this end, we adopt the deep information bottleneck (DIB) model (Alemi et al., 2016) which combines the information bottleneck and variational autoencoder methods. The information bottleneck (IB) principle (Tishby et al., 2000) identifies relevant features with respect to a target variable. It takes two random vectors x and y and searches for a third random vector t which, while compressing x, preserves information contained in y. A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a generative model which learns a latent representation t of x by using the variational approach.
Although DIB produces good results in terms of image classification and adversarial attacks, it suffers from two major shortcomings. First, the IB solution only depends on the copula of x and y and is thus invariant to strictly monotone transformations of the marginal distributions. DIB does not preserve this invariance, which means that it is unnecessarily complex by also implicitly modelling the marginal distributions. We elaborate on the fundamental issues arising from this lack of invariance in Section 3. Second, the latent space of the IB is not sparse which results in the fact that a compact feature representation is not feasible.
Our contribution is two-fold: In the first step, we restore the invariance properties of the information bottleneck solution in the DIB. We achieve this by applying a transformation of x and y which makes the latent space only depend on the copula. This is a way to fully represent all the desirable features inherent to the IB formulation. The model is also simplified by ensuring robust and fully non-parametric treatment of the marginal distributions. In addition, the problems arising from the lack of invariance to monotone transformations of the marginals are solved. In the second step, once the invariance properties are restored, we exploit the sparse structure of the latent space of DIB. This is possible thanks to the copula transformation in conjunction with using the sparse parametrisation
∗These authors contributed equally.
of the information bottleneck, proposed by (Rey et al., 2014). It translates to a more compact latent space that results in a better interpretability of the model.
The remainder of this paper is structured as follows: In Section 2, we review publications on related models. Subsequently, in Section 3, we describe the proposed copula transformation and show how it fixes the shortcomings of DIB, as well as elaborate on the sparsity induced in the latent space. In Section 4, we present results of both synthetic and real data experiments. We conclude our paper in Section 5.
2 RELATED WORK
The IB principle was introduced by (Tishby et al., 2000). The idea is to compress the random vector x while retaining the information of the random vector y. This is achieved by solving the following variational problem: minp(t|x)I(x; t)−λI(t; y), with the assumption that y is conditionally independent of t given x, and where I stands for mutual information. In recent years, copula models were combined with the IB principle in (Rey & Roth, 2012) and extended to the sparse meta-Gaussian IB (Rey et al., 2014) to become invariant against strictly monotone transformations. Moreover, the IB method has been applied to the analysis of deep neural networks in (Tishby & Zaslavsky, 2015), by quantifying mutual information between the network layers and deriving an information theoretic limit on DNN efficiency.
The variational bound and reparametrisation trick for autoencoders were introduced in (Kingma & Welling, 2013; Rezende et al., 2014). The variational autoencoder aims to learn the posterior distribution of the latent space p(t|x) and the decoder p(x|t). The general idea of combining the two approaches is to identify the solution t of the information bottleneck with the latent space t of the variational autoencoder. Consequently, the terms I(x; t) and I(t; y) in the IB problem can be expressed in terms of the parametrised conditionals p(t|x), p(y|t). Variational lower bounds on the information bottleneck optimisation problem have been considered in (Chalk et al., 2016) and (Alemi et al., 2016). Both approaches, however, treat the differential entropy of the marginal distribution as a positive constant, which is not always justified (see Section 3). A related model is introduced in (Pereyra et al., 2017), where a penalty on the entropy of output distributions of neural networks is imposed. These approaches do not introduce the invariance against strictly monotone transformations and thus do not address the issues we identify in Section 3.
A sizeable amount of work on modelling the latent space of deep neural networks has been done. The authors of (Alvarez & Salzmann, 2016) propose the use of a group sparsity regulariser. Other techniques, e.g. in (Mozer & Smolensky, 1989) are based on removing neurons which have a limited impact on the output layer, but they frequently do not scale well with the overall network size. More recent approaches include training neural networks of smaller size to mimic a deep network (Hinton et al., 2015; Romero et al., 2014). In addition, multiple contributions have been proposed in the area of latent space disentanglement (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017; Denton & Birodkar, 2017). None of the approaches consider the influence of the copula on the modelled latent space.
Copula models have been proposed in the context of Bayesian variational methods in (Suh & Choi, 2016), (Tran et al., 2015) and (Han et al., 2016). The former approaches focus on treating the latent space variables as indicators of local approximations of the original space. None of the three approaches relate to the information bottleneck framework.
3 MODEL
3.1 FORMULATION
In order to specify our model, we start with a parametric formulation of the information bottleneck:
$$\max_{\phi,\theta}\; -I_\phi(t;x) + \lambda I_{\phi,\theta}(t;y), \quad (1)$$
where I stands for mutual information with its parameters in the subscript. A parametric form of the conditionals pφ(t|x) and pθ(y|t) as well as the information bottleneck Markov chain t − x − y are assumed. A graphical illustration of the proposed model is depicted in Figure 1.
The two terms in Eq. (1) have the following forms:
$$I_\phi(T;X) = D_{KL}\big(p(t|x)p(x)\,\|\,p(t)p(x)\big) = E_{p(x)}\,D_{KL}\big(p_\phi(t|x)\,\|\,p(t)\big), \quad (2)$$
and
$$I_{\phi,\theta}(T;Y) = D_{KL}\Big(\Big[\int p(t|y,x)\,p(y,x)\,dx\Big]\,\Big\|\;p(t)p(y)\Big) = E_{p(x,y)}E_{p_\phi(t|x)}\log p_\theta(y|t) + h(Y), \quad (3)$$
because of the Markov assumption in the information bottleneck model pφ(t|x, y) = pφ(t|x). We denote with h(y) = −Ep(y)[log p(y)] the entropy for discrete y and the differential entropy for continuous y. We then assume a conditional independence copula and Gaussian margins:
$$p_\phi(t|x) = c_{T|X}\big(u(t|x)\,\big|\,x\big)\prod_j p_{\phi_j}(t_j|x) = \prod_j N\big(t_j\,\big|\,\mu_j(x), \sigma_j^2(x)\big)$$
where tj is the jth marginal of t = (t1, . . . , td), ct|x is the copula density of t|x, u(t|x) := Ft|x(t|x) is the uniform density indexed by t|x, and the functions µj(x), σ2j (x) are implemented by deep networks. We make the same assumption about pθ(y|t).
3.2 MOTIVATION
As we stated in Section 1, the deep information bottleneck model derived in Section 3.1 is not invariant to strictly increasing transformations of the marginal distributions. The IB method is formulated in terms of mutual information I(x, y), which depends only on the copula and therefore does not depend on monotone transformations of the marginals: I(x, y) = MI(x, y) −MI(x) −MI(y), where MI(x), for x = (x1, . . . , xd), denotes the multi-information, which is equal to the negative copula entropy, as shown by Ma & Sun (2011):
$$MI(X) := D_{KL}\Big(p(x)\,\Big\|\,\prod_j p_j(x_j)\Big) = \int c_X(u(x))\log c_X(u(x))\,du = -h\big(c_X(u(x))\big). \quad (4)$$
Issues with lack of invariance to marginal transformations.
1. On the encoder side (Eq. (2)), the optimisation is performed over the parametric conditional margins pφ(tj |x) in Iφ(t;x) = Ep(x)DKL (pφ(t|x)‖p(t)). When a monotone transformation xj → x̃j is applied, the required invariance property can only be guaranteed if the model for φ (in our case a deep network) is flexible enough to compensate for this transformation, which can be a severe problem in practice (see example in Section 4.1).
2. On the decoder side, assuming Gaussian margins in pθ(yj |t) might be inappropriate for modelling y if the domain of y is not equal to the real numbers, e.g. when y is defined only on a bounded
interval. If used in a generative way, the model might produce samples outside the domain of y. Even if other distributions than Gaussian are considered, such as truncated Gaussian, one still needs to make assumptions concerning the marginals. According to the IB formulation, such assumptions are unnecessary.
3. Also on the decoder side, we have: Iφ(t; y) = Ep(x,y)Epφ(t|x) log pθ(y|t) + h(y). The authors of Alemi et al. (2016) argue that since h(y) is constant, it can be ignored in computing Iφ(t; y). This is true for a fixed or for a discrete y, but not for the class of monotone transformations of y, which should be the case for a model specified with mutual informations only. Since the left hand side of this equation (Iφ(t; y)) is invariant against monotone transformations, and h(y) in general depends on monotone transformations, the first term on the right hand side (Ep(x,y)Epφ(t|x) log pθ(y|t)) cannot be invariant to monotone transformations. In fact, under such transformations, the differential entropy h(y) can take any value from −∞ to +∞, which can be seen easily by decomposing the entropy into the copula entropy and the sum of marginal entropies (here, j stands for the jth dimension):
$$h(y) = h\big(c_y(u(y))\big) + \sum_j h(y_j) = -MI(y) + \sum_j h(y_j). \quad (5)$$
The first term (i.e. the copula entropy which is equal to the negative multi-information, as in Eq. (4)) is a non-positive number. The marginal entropies h(yj) can take any value when using strictly increasing transformations (for instance, the marginal entropy of a uniform distribution on [a, b] is log(b − a)). As a consequence, the entropy term h(y) in Eq. (3) can be treated as a constant only either for one specific y or for discrete y, but not for all elements of the equivalence class containing all monotone transformations of y. Moreover, every such transformation would lead to different (I(x, t), I(y, t)) pairs in the information curve, which basically makes this curve arbitrary. Thus, h(y) being constant is a property that needs to be restored.
3.3 PROPOSED SOLUTION
The issues described in Section 3.2 can be fixed by using transformed variables (for a d dimensional x = (x1, . . . , xd), xj stands for the jth dimension):
$$\tilde{x}_j = \Phi^{-1}\big(\hat{F}(x_j)\big), \qquad x_j = \hat{F}^{-1}\big(\Phi(\tilde{x}_j)\big), \quad (6)$$
where Φ is the Gaussian cdf and F̂ is the empirical cdf. The same transformation is applied to y. In the copula literature, these transformed variables are sometimes called normal scores. Note that the mapping is (approximately) invertible: xj = F̂−1(Φ(x̃j)), with F̂−1 being the empirical quantiles treated as a function (e.g. by linear interpolation). This transformation fixes the invariance problem on the encoding side (issue 1), as well as the problems on the decoding side: problem 2 disappeared because the transformed variables x̃j are standard normal distributed, and problem 3 disappeared because the decoder part (Eq. (3)) now has the form:
$$E_{p(\tilde{x},\tilde{y})}E_{p_\phi(t|\tilde{x})}\log p_\theta(\tilde{y}|t) = I_\phi(t;\tilde{y}) + MI(\tilde{y}) - \sum_j h(\tilde{y}_j) = I_\phi(t;\tilde{y}) - h\big(c_{inv}(u(\tilde{y}))\big) \quad (7)$$
where cinv(u(ỹ)) is indeed constant for all strictly increasing transformations applied to y.
Having solved the IB problem in the transformed space, we can go back to the original space by using the inverse transformation according to Eq. (6) xj = F̂−1(Φ(x̃j)). The resulting model is thus a variational autoencoder with x replaced by x̃ in the first term and y replaced by ỹ in the second term.
Technical details. We assume a simple prior p(t) = N (t; 0, I). Therefore, the KL divergence DKL (pφ(t|x̃)‖p(t)) is a divergence between two Gaussian distributions and admits an analytical form. We then estimate
$$I(t;\tilde{x}) = E_{p(\tilde{x})}\,D_{KL}\big(p_\phi(t|\tilde{x})\,\|\,p(t)\big) \approx \frac{1}{n}\sum_i D_{KL}\big(p_\phi(t|\tilde{x}_i)\,\|\,p(t)\big) \quad (8)$$
and all the gradients on (mini-)batches.
For the decoder side, Ep(x̃,ỹ)Epφ(t|x̃) log pθ(ỹ|t) is needed. We train our model using the backpropagation algorithm. However, this algorithm can only handle deterministic nodes. In order to overcome
this problem, we make use of the reparametrisation trick (Kingma & Welling, 2013; Rezende et al., 2014):
$$I(t;\tilde{y}) = E_{p(\tilde{x},\tilde{y})}\,E_{\epsilon\sim N(0,I)}\sum_j \log p_\theta\big(\tilde{y}_j \,\big|\, t = \vec{\mu}(\tilde{x}) + \mathrm{diag}\big(\vec{\sigma}(\tilde{x})\big)\,\epsilon\big) + \text{const.}, \quad (9)$$
with $\tilde{y}_j = \Phi^{-1}\big(\hat{F}(y_j)\big)$.
3.4 SPARSITY OF THE LATENT SPACE
In this section we explain how the sparsity constraint on the information bottleneck along with the copula transformation result in sparsity of the latent space t. We first introduce the Sparse Gaussian Information Bottleneck and subsequently show how augmenting it with the copula transformation leads to the sparse t.
Sparse Gaussian Information Bottleneck. Recall that the information bottleneck compresses x to a new variable t by minimising I(x; t)− λI(t; y). This ensures that some amount of information with respect to a second “relevance” variable y is preserved in the compression.
The assumption that x and y are jointly Gaussian-distributed leads to the Gaussian Information Bottleneck (Chechik et al., 2005) where the solution t can be proved to also be Gaussian distributed. In particular, if we denote the marginal distribution of x: x ∼ N (0,Σx), the optimal t is a noisy projection of x of the following form:
$$t = Ax + \xi,\quad \xi\sim N(0,I) \;\Rightarrow\; t|x \sim N(Ax, I),\quad t\sim N\big(0, A\Sigma_x A^\top + I\big).$$ The mutual information between $x$ and $t$ is then equal to $I(x;t) = \frac{1}{2}\log\big|A\Sigma_x A^\top + I\big|$. In the sparse Gaussian Information Bottleneck, we additionally assume that $A$ is diagonal, so that the compressed $t$ is a sparse version of $x$. Intuitively, sparsity follows from the observation that for a pair of random variables $x, x'$, any full-rank projection $Ax'$ of $x'$ would lead to the same mutual information, since $I(x;x') = I(x;Ax')$, and a reduction in mutual information can only be achieved by a rank-deficient matrix $A$. For diagonal projections, this immediately implies sparsity of $A$.
Sparse latent space of the Deep Information Bottleneck. We now proceed to explain the sparsity induced in the latent space of the copula version of the DIB introduced in Section 3.3. We will assume a possibly general, abstract pre-transformation of x, fβ , which accounts for the encoder network along with the copula transformation of x. Then we will show how allowing for this abstract pre-transformation, in connection with the imposed sparsity constraint of the sparse information bottleneck described above, translates to the sparsity of the latent space of the copula DIB. By sparsity we understand the number of active neurons in the last layer of the encoder.
To this end, we use the Sparse Gaussian Information Bottleneck model described above. We analyse the encoder part of the DIB, described with I(x, t). Consider the general Gaussian Information Bottleneck (with x and y jointly Gaussian and a full matrixA) and the deterministic pre-transformation, fβ(x), performed on x. The pre-transformation is parametrised by a set of parameters β, which might be weights of neurons should fβ be implemented as a neural network. Denote by M a n× p matrix which contains n i.i.d. samples of Afβ(x), i.e. M = AZ with Z = (fβ(x1), . . . , fβ(xn))>. The optimisation of mutual information I(x, t) in min I(x; t)− λI(t; y) is then performed over M and β.
Given $f_\beta$ and the above notation, the estimator of $I(x;t) = \frac{1}{2}\log|A\Sigma_x A^\top + I|$ becomes:
$$\hat{I}(x;t) = \frac{1}{2}\log\left|\frac{1}{n}MM^\top + I\right|, \quad (10)$$
which would further simplify to $\hat{I}(x;t) = \frac{1}{2}\sum_i\log(D_{ii}+1)$ if the pre-transformation $f_\beta$ were indeed such that $D := \frac{1}{n}MM^\top$ were diagonal. This is equivalent to the Sparse Gaussian Information Bottleneck model described above. Note that this means that the sparsity constraint in the Sparse Gaussian IB does not cause any loss of generality of the IB solution as long as the abstract pre-transformation $f_\beta$ makes it possible to diagonalise $\frac{1}{n}MM^\top$ in Eq. (10). We can, however, approximate this case by forcing this diagonalisation in Eq. (10), i.e. by only considering the diagonal part of the matrix: $I'(x;t) = \frac{1}{2}\log\left|\mathrm{diag}\left(\frac{1}{n}MM^\top + I\right)\right|$. We now explain why this approximation (replacing $\hat{I}(x;t)$ with $I'(x;t)$) is justified and how it leads to $f_\beta$ finding a low-dimensional representation of the latent space. Note that for any positive definite matrix $B$, the determinant $|B|$ is always upper bounded by $\prod_i B_{ii} = |\mathrm{diag}(B)|$, which is a consequence of Hadamard's inequality. Thus, instead of minimising $\hat{I}(x;t)$, we minimise an upper bound $I'(x;t) \ge \hat{I}(x;t)$ in the Information Bottleneck cost function. Equality is obtained if the transformation $f_\beta$, which we assume to be part of an "end-to-end" optimisation procedure, indeed successfully diagonalised $D = \frac{1}{n}MM^\top$. Note that equality in Hadamard's inequality is equivalent to $D + I$ being orthogonal, thus $f_\beta$ is forced to find the "most orthogonal" representation of the inputs in the latent space. Using a highly flexible $f_\beta$ (for instance, modelled by a deep neural network), we might approximate this situation reasonably well. This explains how the copula transformation translates to a low-dimensional representation of the latent space.
We indeed see disentanglement and sparse structure of the latent space learned by the copula DIB model by comparing it to the plain DIB without the copula transformation. We demonstrate it in Section 4.
4 EXPERIMENTS
We now proceed to experimentally verify the contributions of the copula Deep Information Bottleneck. The goal of the experiments is to test the impact of the copula transformation. To this end, we perform a series of pair-wise experiments, where DIB without and with (cDIB) the copula transformation are tested in the same set-up. We use two datasets (artificial and real-world) and devise multiple experimental set-ups.
4.1 ARTIFICIAL DATA
First, we construct an artificial dataset such that a high-dimensional latent space is needed for its reconstruction (the dataset is reconstructed when samples from the latent space spatially coincide with it in its high-dimensional space). We perform monotone transformations on this dataset and test the difference between DIB and cDIB on reconstruction capabilities as well as classification predictive score.
Dataset and set-up. The model used to generate the data consists of two input vectors $x_1$ and $x_2$ drawn from a uniform distribution on $[0, 2]$ and vectors $k_1$ and $k_2$ drawn uniformly from $[0, 1]$. Additional inputs are $x_{i=3,\dots,10} = a_i k_1 + (1-a_i) k_2 + 0.3\, b_i$ with $a_i, b_i$ drawn from a uniform distribution on $[0, 1]$. All input vectors $x_{1,\dots,10}$ form the input matrix $X$. Latent variables $z_1 = \sqrt{x_1^2 + x_2^2}$ and $z_2 = z_1 + x_4$ are defined and then normalised by dividing through their maximum value. Finally, random noise is added. Two target variables $y_1 = z_2\cos(1.75\,\pi\, z_1)$ and $y_2 = z_2\sin(1.75\,\pi\, z_1)$ are then calculated. $y_1$ and $y_2$ form a spiral if plotted in two dimensions. The angle and the radius of the spiral are highly correlated, so a one-dimensional latent space can only reconstruct the backbone of the spiral. In order to reconstruct the details of the radial function, one has to use a latent space of at least two dimensions. We generate 200k samples from $X$ and $y$. $X$ is further transformed to beta densities using strictly increasing transformations. We split the samples into test (20k samples) and training (180k samples) sets. The generated samples are then transformed with the copula transformation (Eq. (6)) to $\tilde{X}$ and $\tilde{y}$ and split in the same way into test and training sets. This gives us the four input sets $X_{train}, X_{test}, \tilde{X}_{train}, \tilde{X}_{test}$ and the four target sets $y_{train}, y_{test}, \tilde{y}_{train}, \tilde{y}_{test}$.
We use a latent layer with ten nodes that model the means of the ten-dimensional latent space t. The variance of the latent space is set to 1 for simplicity. The encoder as well as the decoder consist of a neural network with two fully-connected hidden layers with 50 nodes each. We use the softplus function as the activation function. Our model is trained using mini batches (size = 500) with the Adam optimiser (Kingma & Ba, 2014) for 70000 iterations using a learning rate of 0.0006.
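The paper does not state which framework was used; below is one possible PyTorch sketch of the stated architecture (ten latent means with fixed unit variance, two hidden layers of 50 softplus units in both encoder and decoder, and Adam with the quoted learning rate).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Two fully-connected hidden layers with 50 softplus units each; the output layer
    # gives the ten means of the latent space t. The latent variance is fixed to 1.
    def __init__(self, in_dim=10, latent_dim=10, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x):
        mu = self.net(x)
        return mu, torch.ones_like(mu)   # fixed unit variance

class Decoder(nn.Module):
    # Mirror architecture mapping t back to the two target variables y1, y2.
    def __init__(self, latent_dim=10, out_dim=2, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, t):
        return self.net(t)

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.0006)
```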
Experiment 1. In the first experiment, we compare the information curves produced by the DIB and its copula augmentation (Figure 2(a)). To this end, we use the sets (Xtrain, ytrain) and (X̃train, ỹtrain) and record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.06 during training. One can observe an increase in the mutual information from approximately 6 in the DIB to approximately 11 in the copula DIB. At the same time, only two dimensions are used in the latent space t by the copula DIB. The version without copula does not provide competitive results despite using 10 out of 18 dimensions of the latent space t. In Appendix B, we extend this experiment to comparison of information curves for other pre-processing techniques as well as to subjecting the training data to monotonic transformations other than the beta transformation.
Experiment 2. Building on Experiment 1, we use the trained models to assess their predictive quality on the test data $(X_{test}, y_{test})$ and $(\tilde X_{test}, \tilde y_{test})$. We compute predictive scores of the latent space $t$ with respect to the generated $y$ in the form of the mutual information $I(t;y)$ for all values of the parameter $\lambda$. The resulting information curve in Figure 2(b) shows an increased predictive capability of cDIB and shows no difference from the information curve produced in Experiment 1. Thus, the increased mutual information reported in Experiment 1 cannot be attributed to overfitting alone.
Experiment 3. In the third experiment, we qualitatively assess the reconstruction capability of cDIB compared to plain DIB (Figure 3). We choose the value of λ such that in both models two dimensions are active in the latent space. Figure 3(b) shows a detailed reconstruction of y. The reconstruction quality of plain DIB on test data results in a tight backbone which is not capable of reconstructing y (Figure 3(a)).
Experiment 4. We further inspect the information curves of DIB and cDIB by testing whether the copula transformation makes the model more resilient to outliers and adversarial attacks in the training phase. To simulate an adversarial attack, we randomly choose 5% of all entries in the datasets $X_{train}$ and $\tilde X_{train}$ and replace them with outliers by adding uniformly sampled noise within the range [1, 5]. We again compute information curves for the training procedure and compare normal training with training on data subject to an attack for the copula and non-copula models. The results (Figure 4(a)) show that the copula model is more robust against outlier data than the plain one. We attribute this behaviour directly to the copula transformation, as ranks are less sensitive to outliers than raw data.
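A short sketch of the attack simulation described above follows; whether the noise is added to the original entries or overwrites them is ambiguous in the text, so the additive reading is an assumption.

```python
import numpy as np

def inject_outliers(X, frac=0.05, low=1.0, high=5.0, seed=0):
    """Perturb a random 5% of all entries of X by adding uniform noise from [low, high]."""
    rng = np.random.default_rng(seed)
    X_attacked = X.copy()
    mask = rng.random(X.shape) < frac
    X_attacked[mask] += rng.uniform(low, high, size=int(mask.sum()))
    return X_attacked

# e.g. X_train_attacked = inject_outliers(X_train) for the training sets used above
```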
Experiment 5. In this experiment, we investigate how the copula transformation affects convergence of the neural networks making up the DIB. We focus on the encoder and track the values
of the loss function. Figure 4(b) shows a sample comparison of convergence of DIB and cDIB for λ = 100. One can see that the cDIB starts to converge around iteration no. 1000, whereas the plain DIB takes longer. This can be explained by the fact that in the copula model the marginals are normalised to the same range of normal quantiles by the copula transformation. This translates to higher convergence rates.
4.2 REAL-WORLD DATA
We continue analysing the impact of the copula transformation on the latent space of the DIB with a real-world dataset. We first report information curves analogous to Experiment 1 (Section 4.1) and proceed to inspect the latent spaces of both models along with sensitivity analysis with respect to λ.
Dataset and Set-up. We consider the unnormalised Communities and Crime dataset (Lyons et al., 1998) from the UCI repository.1 The dataset consists of 125 predictive, 4 non-predictive and 18 target variables, with 2215 samples in total. In a preprocessing step, we removed all missing values from the dataset. In the end, we used 1901 observations with 102 predictive and 18 target variables in our analysis.
1http://archive.ics.uci.edu/ml/datasets/communities+and+crime+unnormalized
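A possible preprocessing sketch is shown below. The file name, the '?' missing-value marker, the column index ranges, and the choice to drop predictive columns containing missing values before dropping incomplete rows are all assumptions; they are one way to arrive at 102 predictive variables and 1901 observations, but the paper does not spell the procedure out.

```python
import pandas as pd

# Unnormalised Communities and Crime data from the UCI repository; the file name and the
# '?' missing-value marker are assumptions about the distributed file format.
df = pd.read_csv("CommViolPredUnnormalizedData.txt", header=None, na_values="?")

# One possible reading of the preprocessing: drop predictive columns that contain missing
# entries, then drop the remaining rows with missing targets (index ranges are assumptions).
non_predictive, predictive, targets = df.iloc[:, :4], df.iloc[:, 4:129], df.iloc[:, 129:]
predictive = predictive.dropna(axis=1)
keep = targets.notna().all(axis=1)
X, Y = predictive[keep].to_numpy(), targets[keep].to_numpy()
```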
We use a latent layer with 18 nodes that models the means of the 18-dimensional latent space t. Again, the variance of the latent and the output space is set to 1. The stochastic encoder as well as the stochastic decoder consist of a neural network with two fully-connected hidden layers with 100 nodes each. Softplus is employed as the activation function. The decoder uses a Gaussian likelihood. Our model is trained for 150000 iterations using mini batches with a size of 1255. As before, we use Adam (Kingma & Ba, 2014) with a learning rate of 0.0005.
Experiment 6. Analogously to Experiment 1 (Section 4.1), information curves stemming from the DIB and cDIB models have been computed. We record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.01 during training. Again, the information curve for the copula model yields larger values of mutual information, which we attribute to the increased flexibility of the model, as we pointed out in Section 3.3. In addition, the application of the copula transformation leads to a much lower number of used dimensions in the latent space. For example, copula DIB uses only four dimensions in the latent space for the highest λ values. DIB, on the other hand, needs eight dimensions in the latent space and nonetheless results in lower mutual information scores. In order to show that our information curves are significantly different, we perform a Kruskal-Wallis rank test (p-value of $1.6 \cdot 10^{-16}$).
Experiment 7. This experiment illustrates the difference in the disentanglement of the latent spaces of the DIB model with and without the copula transformation. We select two variables which yielded highest correlation with the target variable arsons and plot them along with their densities. In order to obtain the corresponding class labels (rainbow colours in Figure 6), we separate the values of arsons in eight equally-sized bins. A sample comparison of latent spaces of DIB and cDIB for λ = 21.55 is depicted in Figure 6. A more in-depth analysis of sensitivity of the learned latent space to λ is presented in Appendix A. The latent space t of DIB appears consistently less structured than that of cDIB, which is also reflected in the densities of the two plotted variables. In contrast, we can identify a much clearer structure in the latent space with respect to our previously calculated class labels.
5 CONCLUSION
We have presented a novel approach to compact representation learning of deep latent variable models. To this end, we showed that restoring invariance properties of the Deep Information Bottleneck with a copula transformation leads to disentanglement of the features in the latent space. Subsequently, we analysed how the copula transformation translates to sparsity in the latent space of the considered model. The proposed model allows for a simplified and fully non-parametric treatment of marginal distributions which has the advantage that it can be applied to distributions with arbitrary marginals. We evaluated our method on both artificial and real data. We showed that in practice the copula transformation leads to latent spaces that are disentangled, have an increased prediction capability and are resilient to adversarial attacks. All these properties are not sensitive to the only hyperparameter of the model, λ.
In Section 3.2, we motivated the copula transformation for the Deep Information Bottleneck with the lack of invariance properties present in the original Information Bottleneck model, making the copula augmentation particularly suited for the DIB. The relevance of the copula transformation, however, reaches beyond the variational autoencoder, as evidenced by e.g. the resilience to adversarial attacks or the positive influence on convergence rates presented in Section 4. These are advantages of our model that do not simply follow from restoring the Information Bottleneck properties to the DIB, but are additional benefits of the copula. The copula transformation thus promises to be a simple but powerful addition to the general deep learning toolbox.
ACKNOWLEDGMENTS
This work was partially supported by the Swiss National Science Foundation under grants CR32I2 159682 and 51MRP0 158328 (SystemsX.ch).
B EXTENSION OF EXPERIMENT 1
Building on Experiment 1 from Section 4, we again compare the information curves produced by the DIB and its copula augmentation. We compare the copula transformation with data normalisation (transformation to mean 0 and variance 1) in Figure 9(a). We also replace the beta transformation with gamma in the experimental set-up described in Section 4 and report the results in Figure 9(b). As in Experiment 1, one can see that the information curve for the copula version of DIB lies above the plain one. The latent space uses fewer dimensions as well. | 1. What is the focus and contribution of the paper on sparse latent representation learning?
2. What are the strengths of the proposed algorithm, particularly in its formulation and solution?
3. Do you have any questions regarding the paper's unclear points, such as the use of explicit or implicit transforms, the form of f_beta, and the choice of lambda?
4. How does the reviewer assess the significance of the experimental results, including their demonstration of higher information curves, more compact representation, and better reconstruction quality?
5. Does the reviewer have any minor comments or suggestions for improving the paper's clarity and presentation? | Review | Review
This paper presents a sparse latent representation learning algorithm based on an information theoretic objective formulated through meta-Gaussian information bottleneck and solved via variational auto-encoder stochastic optimization. The authors suggest Gaussianify the data using copula transformation and further adopt a diagonal determinant approximation with justification of minimizing an upper bound of mutual information. Experiments include both artificial data and real data.
The paper is unclear in some places and the writing gets confusing. For example, it is unclear whether and when explicit or implicit transforms are used for x and y in the experiments, and the discussion at the end of Section 3.3 also sounds confusing. It would be more helpful if the authors could make those points clearer and offer some guidance about the choice between explicit and implicit transforms in practice. Moreover, what is the form of f_beta and how is beta optimized? In the first equation on page 5, is tilde y involved? How to choose lambda?
If MI is invariant to monotone transformations and information curves are determined by MIs, why “transformations basically makes information curve arbitrary”? Can you elaborate?
Although the experimental results demonstrate that the proposed approach with the copula transformation yields higher information curves, a more compact representation and better reconstruction quality, it would be more significant if the authors could show whether these necessarily lead to any improvements on other goals such as classification accuracy or robustness under adversarial attacks.
Minor comments:
- What is the meaning of the dashed lines and the solid lines respectively in Figure 1?
- Section 3.3 at the bottom of page 4: what is tilde t_j? and x in the second term? Is there a typo?
- typo, find the “most orthogonal” representation if the inputs -> of the inputs
Overall, the main idea of this paper is interesting and well motivated, but the technical contribution seems incremental. The paper suffers from a lack of clarity in several places and the experimental results are convincing but not strong enough.
***************
Updates:
***************
The authors have clarified some questions that I had and further demonstrated the benefits of copula transform with new experiments in the revised paper. The new results are quite informative and addressed some of the concerns raised by me and other reviewers. I have updated my score to 6 accordingly. |
ICLR | Title
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Abstract
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
1 INTRODUCTION
In recent years, deep latent variable models (Kingma & Welling, 2013; Rezende et al., 2014; Goodfellow et al., 2014) have become a popular toolbox in the machine learning community for a wide range of applications (Ledig et al., 2016; Reed et al., 2016; Isola et al., 2016). At the same time, the compact representation, sparsity and interpretability of the latent feature space have been identified as crucial elements of such models. In this context, multiple contributions have been made in the field of relevant feature extraction (Chalk et al., 2016; Alemi et al., 2016) and learning of disentangled representations of the latent space (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017).
In this paper, we consider latent space representation learning. We focus on disentangling features with the copula transformation and, building on that, on forcing a compact low-dimensional representation with a sparsity-inducing model formulation. To this end, we adopt the deep information bottleneck (DIB) model (Alemi et al., 2016) which combines the information bottleneck and variational autoencoder methods. The information bottleneck (IB) principle (Tishby et al., 2000) identifies relevant features with respect to a target variable. It takes two random vectors x and y and searches for a third random vector t which, while compressing x, preserves information contained in y. A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a generative model which learns a latent representation t of x by using the variational approach.
Although DIB produces good results in terms of image classification and adversarial attacks, it suffers from two major shortcomings. First, the IB solution only depends on the copula of x and y and is thus invariant to strictly monotone transformations of the marginal distributions. DIB does not preserve this invariance, which means that it is unnecessarily complex by also implicitly modelling the marginal distributions. We elaborate on the fundamental issues arising from this lack of invariance in Section 3. Second, the latent space of the IB is not sparse, which means that a compact feature representation is not feasible.
Our contribution is two-fold: In the first step, we restore the invariance properties of the information bottleneck solution in the DIB. We achieve this by applying a transformation of x and y which makes the latent space only depend on the copula. This is a way to fully represent all the desirable features inherent to the IB formulation. The model is also simplified by ensuring robust and fully non-parametric treatment of the marginal distributions. In addition, the problems arising from the lack of invariance to monotone transformations of the marginals are solved. In the second step, once the invariance properties are restored, we exploit the sparse structure of the latent space of DIB. This is possible thanks to the copula transformation in conjunction with the sparse parametrisation of the information bottleneck proposed by Rey et al. (2014). It translates to a more compact latent space that results in better interpretability of the model.
∗These authors contributed equally.
The remainder of this paper is structured as follows: In Section 2, we review publications on related models. Subsequently, in Section 3, we describe the proposed copula transformation and show how it fixes the shortcomings of DIB, as well as elaborate on the sparsity induced in the latent space. In Section 4, we present results of both synthetic and real data experiments. We conclude our paper in Section 5.
2 RELATED WORK
The IB principle was introduced by Tishby et al. (2000). The idea is to compress the random vector $x$ while retaining the information of the random vector $y$. This is achieved by solving the following variational problem: $\min_{p(t|x)} I(x;t) - \lambda I(t;y)$, with the assumption that $y$ is conditionally independent of $t$ given $x$, and where $I$ stands for mutual information. In recent years, copula models were combined with the IB principle in (Rey & Roth, 2012) and extended to the sparse meta-Gaussian IB (Rey et al., 2014) to become invariant against strictly monotone transformations. Moreover, the IB method has been applied to the analysis of deep neural networks in (Tishby & Zaslavsky, 2015), by quantifying mutual information between the network layers and deriving an information theoretic limit on DNN efficiency.
The variational bound and reparametrisation trick for autoencoders were introduced in (Kingma & Welling, 2013; Rezende et al., 2014). The variational autoencoder aims to learn the posterior distribution of the latent space p(t|x) and the decoder p(x|t). The general idea of combining the two approaches is to identify the solution t of the information bottleneck with the latent space t of the variational autoencoder. Consequently, the terms I(x; t) and I(t; y) in the IB problem can be expressed in terms of the parametrised conditionals p(t|x), p(y|t). Variational lower bounds on the information bottleneck optimisation problem have been considered in (Chalk et al., 2016) and (Alemi et al., 2016). Both approaches, however, treat the differential entropy of the marginal distribution as a positive constant, which is not always justified (see Section 3). A related model is introduced in (Pereyra et al., 2017), where a penalty on the entropy of output distributions of neural networks is imposed. These approaches do not introduce the invariance against strictly monotone transformations and thus do not address the issues we identify in Section 3.
A sizeable amount of work on modelling the latent space of deep neural networks has been done. The authors of (Alvarez & Salzmann, 2016) propose the use of a group sparsity regulariser. Other techniques, e.g. in (Mozer & Smolensky, 1989) are based on removing neurons which have a limited impact on the output layer, but they frequently do not scale well with the overall network size. More recent approaches include training neural networks of smaller size to mimic a deep network (Hinton et al., 2015; Romero et al., 2014). In addition, multiple contributions have been proposed in the area of latent space disentanglement (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017; Denton & Birodkar, 2017). None of the approaches consider the influence of the copula on the modelled latent space.
Copula models have been proposed in the context of Bayesian variational methods in (Suh & Choi, 2016), (Tran et al., 2015) and (Han et al., 2016). The former approaches focus on treating the latent space variables as indicators of local approximations of the original space. None of the three approaches relate to the information bottleneck framework.
3 MODEL
3.1 FORMULATION
In order to specify our model, we start with a parametric formulation of the information bottleneck:
$$\max_{\phi,\theta}\; -I_\phi(t;x) + \lambda\, I_{\phi,\theta}(t;y), \qquad (1)$$
where I stands for mutual information with its parameters in the subscript. A parametric form of the conditionals pφ(t|x) and pθ(y|t) as well as the information bottleneck Markov chain t − x − y are assumed. A graphical illustration of the proposed model is depicted in Figure 1.
The two terms in Eq. (1) have the following forms:
$$I_\phi(T;X) = D_{KL}\big(p(t|x)p(x)\,\|\,p(t)p(x)\big) = E_{p(x)}\, D_{KL}\big(p_\phi(t|x)\,\|\,p(t)\big), \qquad (2)$$
and
$$I_{\phi,\theta}(T;Y) = D_{KL}\Big(\Big[\int p(t|y,x)\,p(y,x)\,dx\Big] \,\Big\|\, p(t)p(y)\Big) = E_{p(x,y)}\, E_{p_\phi(t|x)} \log p_\theta(y|t) + h(Y), \qquad (3)$$
because of the Markov assumption in the information bottleneck model, $p_\phi(t|x,y) = p_\phi(t|x)$. We denote with $h(y) = -E_{p(y)}[\log p(y)]$ the entropy for discrete $y$ and the differential entropy for continuous $y$. We then assume a conditional independence copula and Gaussian margins:
$$p_\phi(t|x) = c_{T|X}\big(u(t|x)\,\big|\,x\big) \prod_j p_{\phi_j}(t_j|x) = \prod_j N\big(t_j \,\big|\, \mu_j(x), \sigma_j^2(x)\big),$$
where $t_j$ is the $j$th marginal of $t = (t_1, \ldots, t_d)$, $c_{t|x}$ is the copula density of $t|x$, $u(t|x) := F_{t|x}(t|x)$ is the uniform density indexed by $t|x$, and the functions $\mu_j(x), \sigma_j^2(x)$ are implemented by deep networks. We make the same assumption about $p_\theta(y|t)$.
3.2 MOTIVATION
As we stated in Section 1, the deep information bottleneck model derived in Section 3.1 is not invariant to strictly increasing transformations of the marginal distributions. The IB method is formulated in terms of mutual information I(x, y), which depends only on the copula and therefore does not depend on monotone transformations of the marginals: I(x, y) = MI(x, y) −MI(x) −MI(y), where MI(x), for x = (x1, . . . , xd), denotes the multi-information, which is equal to the negative copula entropy, as shown by Ma & Sun (2011):
$$MI(X) := D_{KL}\Big(p(x)\,\Big\|\,\prod_j p_j(x_j)\Big) = \int c_X(u(x)) \log c_X(u(x))\, du = -h\big(c_X(u(x))\big). \qquad (4)$$
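The invariance can also be checked numerically. The sketch below estimates the mutual information of a bivariate Gaussian pair from the normal scores of the data (a Gaussian copula assumption) and shows that the estimate does not change when strictly increasing transformations are applied to the marginals; the sample size and correlation are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)

def mi_from_normal_scores(a, b):
    # Estimate I(a; b) from the normal scores of a and b (Gaussian copula assumption)
    za = norm.ppf((rankdata(a) - 0.5) / len(a))
    zb = norm.ppf((rankdata(b) - 0.5) / len(b))
    r = np.corrcoef(za, zb)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2)

print(-0.5 * np.log(1.0 - rho ** 2))             # analytic I(x; y) for the Gaussian pair
print(mi_from_normal_scores(x, y))               # estimate on the raw data
print(mi_from_normal_scores(np.exp(x), y ** 3))  # unchanged: ranks ignore monotone marginal maps
```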
Issues with lack of invariance to marginal transformations.
1. On the encoder side (Eq. (2)), the optimisation is performed over the parametric conditional margins pφ(tj |x) in Iφ(t;x) = Ep(x)DKL (pφ(t|x)‖p(t)). When a monotone transformation xj → x̃j is applied, the required invariance property can only be guaranteed if the model for φ (in our case a deep network) is flexible enough to compensate for this transformation, which can be a severe problem in practice (see example in Section 4.1).
2. On the decoder side, assuming Gaussian margins in pθ(yj |t) might be inappropriate for modelling y if the domain of y is not equal to the real numbers, e.g. when y is defined only on a bounded
interval. If used in a generative way, the model might produce samples outside the domain of y. Even if other distributions than Gaussian are considered, such as truncated Gaussian, one still needs to make assumptions concerning the marginals. According to the IB formulation, such assumptions are unnecessary.
3. Also on the decoder side, we have: Iφ(t; y) = Ep(x,y)Epφ(t|x) log pθ(y|t) + h(y). The authors of Alemi et al. (2016) argue that since h(y) is constant, it can be ignored in computing Iφ(t; y). This is true for a fixed or for a discrete y, but not for the class of monotone transformations of y, which should be the case for a model specified with mutual informations only. Since the left hand side of this equation (Iφ(t; y)) is invariant against monotone transformations, and h(y) in general depends on monotone transformations, the first term on the right hand side (Ep(x,y)Epφ(t|x) log pθ(y|t)) cannot be invariant to monotone transformations. In fact, under such transformations, the differential entropy h(y) can take any value from −∞ to +∞, which can be seen easily by decomposing the entropy into the copula entropy and the sum of marginal entropies (here, j stands for the jth dimension):
$$h(y) = h\big(c_y(u(y))\big) + \sum_j h(y_j) = -MI(y) + \sum_j h(y_j). \qquad (5)$$
The first term (i.e. the copula entropy which is equal to the negative multi-information, as in Eq. (4)) is a non-positive number. The marginal entropies h(yj) can take any value when using strictly increasing transformations (for instance, the marginal entropy of a uniform distribution on [a, b] is log(b − a)). As a consequence, the entropy term h(y) in Eq. (3) can be treated as a constant only either for one specific y or for discrete y, but not for all elements of the equivalence class containing all monotone transformations of y. Moreover, every such transformation would lead to different (I(x, t), I(y, t)) pairs in the information curve, which basically makes this curve arbitrary. Thus, h(y) being constant is a property that needs to be restored.
3.3 PROPOSED SOLUTION
The issues described in Section 3.2 can be fixed by using transformed variables (for a d dimensional x = (x1, . . . , xd), xj stands for the jth dimension):
$$\tilde x_j = \Phi^{-1}\big(\hat F(x_j)\big), \qquad x_j = \hat F^{-1}\big(\Phi(\tilde x_j)\big), \qquad (6)$$
where Φ is the Gaussian cdf and F̂ is the empirical cdf. The same transformation is applied to y. In the copula literature, these transformed variables are sometimes called normal scores. Note that the mapping is (approximately) invertible: xj = F̂−1(Φ(x̃j)), with F̂−1 being the empirical quantiles treated as a function (e.g. by linear interpolation). This transformation fixes the invariance problem on the encoding side (issue 1), as well as the problems on the decoding side: problem 2 disappeared because the transformed variables x̃j are standard normal distributed, and problem 3 disappeared because the decoder part (Eq. (3)) now has the form:
$$E_{p(\tilde x,\tilde y)}\, E_{p_\phi(t|\tilde x)} \log p_\theta(\tilde y\,|\,t) = I_\phi(t;\tilde y) + MI(\tilde y) - \sum_j h(\tilde y_j) = I_\phi(t;\tilde y) - h\big(c_{\tilde y}(u(\tilde y))\big) - \sum_j h(\tilde y_j), \qquad (7)$$
where the copula entropy $h(c_{\tilde y}(u(\tilde y)))$ and the marginal entropies $h(\tilde y_j)$ of the standard normal scores are indeed constant for all strictly increasing transformations applied to $y$.
Having solved the IB problem in the transformed space, we can go back to the original space by using the inverse transformation according to Eq. (6) xj = F̂−1(Φ(x̃j)). The resulting model is thus a variational autoencoder with x replaced by x̃ in the first term and y replaced by ỹ in the second term.
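A minimal implementation of this normal-scores transformation and its approximate inverse might look as follows; pushing the empirical cdf values slightly inside (0, 1) with an (n+1) divisor is an implementation choice, not something the paper specifies.

```python
import numpy as np
from scipy.stats import norm

def normal_scores(x):
    # tilde_x_j = Phi^{-1}( F_hat(x_j) ), column-wise, using ranks as the empirical cdf;
    # dividing by (n + 1) keeps the cdf values strictly inside (0, 1).
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return norm.ppf(ranks / (n + 1))

def inverse_normal_scores(x_tilde, x_ref):
    # Approximate inverse x_j = F_hat^{-1}( Phi(tilde_x_j) ) via linearly interpolated
    # empirical quantiles of a reference sample x_ref (e.g. the training data).
    u = norm.cdf(x_tilde)
    out = np.empty_like(x_tilde)
    for j in range(x_ref.shape[1]):
        qs = np.sort(x_ref[:, j])
        out[:, j] = np.interp(u[:, j], np.linspace(0.0, 1.0, len(qs)), qs)
    return out
```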
Technical details. We assume a simple prior p(t) = N (t; 0, I). Therefore, the KL divergence DKL (pφ(t|x̃)‖p(t)) is a divergence between two Gaussian distributions and admits an analytical form. We then estimate
$$I(t;\tilde x) = E_{p(\tilde x)}\, D_{KL}\big(p_\phi(t|\tilde x)\,\|\,p(t)\big) \approx \frac{1}{n}\sum_i D_{KL}\big(p_\phi(t|\tilde x_i)\,\|\,p(t)\big) \qquad (8)$$
and all the gradients on (mini-)batches.
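In this diagonal-Gaussian case the per-sample KL term has the standard closed form, so the estimate in Eq. (8) reduces to a simple batch average; the sketch below assumes the encoder outputs per-dimension means and variances.

```python
import numpy as np

def kl_to_standard_normal(mu, sigma2):
    # D_KL( N(mu, diag(sigma2)) || N(0, I) ), row-wise, in closed form
    return 0.5 * np.sum(sigma2 + mu ** 2 - 1.0 - np.log(sigma2), axis=1)

def mi_xt_estimate(mu_batch, sigma2_batch):
    # Eq. (8): average the per-sample KL terms over a (mini-)batch of encoder outputs
    return float(np.mean(kl_to_standard_normal(mu_batch, sigma2_batch)))
```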
For the decoder side, Ep(x̃,ỹ)Epφ(t|x̃) log pθ(ỹ|t) is needed. We train our model using the backpropagation algorithm. However, this algorithm can only handle deterministic nodes. In order to overcome
this problem, we make use of the reparametrisation trick (Kingma & Welling, 2013; Rezende et al., 2014):
$$I(t;\tilde y) = E_{p(\tilde x,\tilde y)}\, E_{\epsilon \sim \mathcal N(0,I)} \sum_j \log p_\theta\big(\tilde y_j \,\big|\, t = \mu_j(\tilde x) + \mathrm{diag}(\sigma_j(\tilde x))\,\epsilon\big) + \mathrm{const.}, \qquad (9)$$
with $\tilde y_j = \Phi^{-1}(\hat F(y_j))$.
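One way to write the reparametrised decoder term of Eq. (9) with a unit-variance Gaussian decoder likelihood is sketched below; the single-sample estimate and the callable decoder mean function are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder_term(mu_t, sigma_t, decoder_mean, y_tilde):
    # Single-sample reparametrised estimate of E_{eps ~ N(0, I)} sum_j log p_theta(y~_j | t),
    # with t = mu(x~) + diag(sigma(x~)) eps and a unit-variance Gaussian decoder likelihood.
    eps = rng.standard_normal(mu_t.shape)
    t = mu_t + sigma_t * eps                # reparametrised latent sample
    m = decoder_mean(t)                     # decoder network evaluated at t (a callable here)
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y_tilde - m) ** 2, axis=1)
```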
3.4 SPARSITY OF THE LATENT SPACE
In this section we explain how the sparsity constraint on the information bottleneck along with the copula transformation result in sparsity of the latent space t. We first introduce the Sparse Gaussian Information Bottleneck and subsequently show how augmenting it with the copula transformation leads to the sparse t.
Sparse Gaussian Information Bottleneck. Recall that the information bottleneck compresses x to a new variable t by minimising I(x; t)− λI(t; y). This ensures that some amount of information with respect to a second “relevance” variable y is preserved in the compression.
The assumption that x and y are jointly Gaussian-distributed leads to the Gaussian Information Bottleneck (Chechik et al., 2005) where the solution t can be proved to also be Gaussian distributed. In particular, if we denote the marginal distribution of x: x ∼ N (0,Σx), the optimal t is a noisy projection of x of the following form:
$$t = Ax + \xi, \quad \xi \sim \mathcal N(0, I) \;\;\Rightarrow\;\; t|x \sim \mathcal N(Ax, I), \quad t \sim \mathcal N(0, A\Sigma_x A^\top + I).$$
The mutual information between $x$ and $t$ is then equal to $I(x;t) = \frac{1}{2}\log|A\Sigma_x A^\top + I|$. In the sparse Gaussian Information Bottleneck, we additionally assume that $A$ is diagonal, so that the compressed $t$ is a sparse version of $x$. Intuitively, sparsity follows from the observation that for a pair of random variables $x, x'$, any full-rank projection $Ax'$ of $x'$ would lead to the same mutual information, since $I(x; x') = I(x; Ax')$, and a reduction in mutual information can only be achieved by a rank-deficient matrix $A$. For diagonal projections, this immediately implies sparsity of $A$.
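These closed forms are easy to play with numerically. The snippet below evaluates $I(x;t) = \frac{1}{2}\log|A\Sigma_x A^\top + I|$, illustrating that an orthogonal full-rank $A$ leaves the mutual information unchanged while a diagonal $A$ with zeroed entries compresses it; the particular $\Sigma_x$ is an arbitrary example.

```python
import numpy as np

def mi_gaussian_ib(A, Sigma_x):
    # I(x; t) = 1/2 log | A Sigma_x A^T + I |  for t = A x + xi, xi ~ N(0, I)
    return 0.5 * np.linalg.slogdet(A @ Sigma_x @ A.T + np.eye(A.shape[0]))[1]

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
Sigma_x = B @ B.T + np.eye(5)                     # an arbitrary positive definite covariance

Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthogonal, full-rank
print(mi_gaussian_ib(np.eye(5), Sigma_x))         # identity projection
print(mi_gaussian_ib(Q, Sigma_x))                 # same value: a rotation does not compress
print(mi_gaussian_ib(np.diag([1., 1., 0., 0., 0.]), Sigma_x))  # smaller: sparse diagonal A compresses
```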
Sparse latent space of the Deep Information Bottleneck. We now proceed to explain the sparsity induced in the latent space of the copula version of the DIB introduced in Section 3.3. We will assume a possibly general, abstract pre-transformation of x, fβ , which accounts for the encoder network along with the copula transformation of x. Then we will show how allowing for this abstract pre-transformation, in connection with the imposed sparsity constraint of the sparse information bottleneck described above, translates to the sparsity of the latent space of the copula DIB. By sparsity we understand the number of active neurons in the last layer of the encoder.
To this end, we use the Sparse Gaussian Information Bottleneck model described above. We analyse the encoder part of the DIB, described with $I(x;t)$. Consider the general Gaussian Information Bottleneck (with $x$ and $y$ jointly Gaussian and a full matrix $A$) and the deterministic pre-transformation, $f_\beta(x)$, performed on $x$. The pre-transformation is parametrised by a set of parameters $\beta$, which might be weights of neurons should $f_\beta$ be implemented as a neural network. Denote by $M$ an $n \times p$ matrix which contains $n$ i.i.d. samples of $A f_\beta(x)$, i.e. $M = AZ$ with $Z = (f_\beta(x_1), \ldots, f_\beta(x_n))^\top$. The optimisation of the mutual information $I(x;t)$ in $\min I(x;t) - \lambda I(t;y)$ is then performed over $M$ and $\beta$.
Given $f_\beta$ and the above notation, the estimator of $I(x;t) = \frac{1}{2}\log|A\Sigma_x A^\top + I|$ becomes:
$$\hat I(x;t) = \frac{1}{2}\log\left|\frac{1}{n}MM^\top + I\right|, \qquad (10)$$
which would further simplify to $\hat I(x;t) = \frac{1}{2}\sum_i \log(D_{ii} + 1)$, if the pre-transformation $f_\beta$ were indeed such that $D := \frac{1}{n}MM^\top$ were diagonal. This is equivalent to the Sparse Gaussian Information Bottleneck model described above. Note that this means that the sparsity constraint in the Sparse Gaussian IB does not cause any loss of generality of the IB solution as long as the abstract pre-transformation $f_\beta$ makes it possible to diagonalise $\frac{1}{n}MM^\top$ in Eq. (10). We can, however, approximate this case by forcing this diagonalisation in Eq. (10), i.e. by only considering the diagonal part of the matrix: $I'(x;t) = \frac{1}{2}\log\left|\mathrm{diag}\left(\frac{1}{n}MM^\top + I\right)\right|$. We now explain why this approximation (replacing $\hat I(x;t)$ with $I'(x;t)$) is justified and how it leads to $f_\beta$ finding a low-dimensional representation of the latent space. Note that for any positive definite matrix $B$, the determinant $|B|$ is always upper bounded by $\prod_i B_{ii} = |\mathrm{diag}(B)|$, which is a consequence of Hadamard’s inequality. Thus, instead of minimising $\hat I(x;t)$, we minimise an upper bound $I'(x;t) \geq \hat I(x;t)$ in the Information Bottleneck cost function. Equality is obtained if the transformation $f_\beta$, which we assume to be part of an “end-to-end” optimisation procedure, indeed successfully diagonalised $D = \frac{1}{n}MM^\top$. Note that equality in Hadamard’s inequality is equivalent to $D + I$ being orthogonal, thus $f_\beta$ is forced to find the “most orthogonal” representation of the inputs in the latent space. Using a highly flexible $f_\beta$ (for instance, modelled by a deep neural network), we might approximate this situation reasonably well. This explains how the copula transformation translates to a low-dimensional representation of the latent space.
We indeed see disentanglement and sparse structure of the latent space learned by the copula DIB model by comparing it to the plain DIB without the copula transformation. We demonstrate it in Section 4.
4 EXPERIMENTS
We now proceed to experimentally verify the contributions of the copula Deep Information Bottleneck. The goal of the experiments is to test the impact of the copula transformation. To this end, we perform a series of pair-wise experiments, where DIB without and with (cDIB) the copula transformation are tested in the same set-up. We use two datasets (artificial and real-world) and devise multiple experimental set-ups.
4.1 ARTIFICIAL DATA
First, we construct an artificial dataset such that a high-dimensional latent space is needed for its reconstruction (the dataset is reconstructed when samples from the latent space spatially coincide with it in its high-dimensional space). We perform monotone transformations on this dataset and test the difference between DIB and cDIB on reconstruction capabilities as well as classification predictive score.
Dataset and set-up. The model used to generate the data consists of two input vectors $x_1$ and $x_2$ drawn from a uniform distribution defined on $[0, 2]$ and vectors $k_1$ and $k_2$ drawn uniformly from $[0, 1]$. Additional inputs are $x_{i=3,\ldots,10} = a_i k_1 + (1 - a_i) k_2 + 0.3\, b_i$ with $a_i, b_i$ drawn from a uniform distribution defined on $[0, 1]$. All input vectors $x_{1,\ldots,10}$ form the input matrix $X$. Latent variables $z_1 = \sqrt{x_1^2 + x_2^2}$ and $z_2 = z_1 + x_4$ are defined and then normalised by dividing through their maximum value. Finally, random noise is added. Two target variables $y_1 = z_2 \cos(1.75\pi z_1)$ and $y_2 = z_2 \sin(1.75\pi z_1)$ are then calculated. $y_1$ and $y_2$ form a spiral if plotted in two dimensions. The angle and the radius of the spiral are highly correlated, which means that a one-dimensional latent space can only reconstruct the backbone of the spiral. In order to reconstruct the details of the radial function, one has to use a latent space of at least two dimensions. We generate 200k samples from $X$ and $y$. $X$ is further transformed to beta densities using strictly increasing transformations. We split the samples into test (20k samples) and training (180k samples) sets. The generated samples are then transformed with the copula transformation (Eq. (6)) to $\tilde X$ and $\tilde y$ and split in the same way into test and training sets. This gives us the four input sets $X_{train}$, $X_{test}$, $\tilde X_{train}$, $\tilde X_{test}$ and the four target sets $y_{train}$, $y_{test}$, $\tilde y_{train}$, $\tilde y_{test}$.
We use a latent layer with ten nodes that model the means of the ten-dimensional latent space t. The variance of the latent space is set to 1 for simplicity. The encoder as well as the decoder consist of a neural network with two fully-connected hidden layers with 50 nodes each. We use the softplus function as the activation function. Our model is trained using mini batches (size = 500) with the Adam optimiser (Kingma & Ba, 2014) for 70000 iterations using a learning rate of 0.0006.
Experiment 1. In the first experiment, we compare the information curves produced by the DIB and its copula augmentation (Figure 2(a)). To this end, we use the sets (Xtrain, ytrain) and (X̃train, ỹtrain) and record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.06 during training. One can observe an increase in the mutual information from approximately 6 in the DIB to approximately 11 in the copula DIB. At the same time, only two dimensions are used in the latent space t by the copula DIB. The version without copula does not provide competitive results despite using 10 out of 18 dimensions of the latent space t. In Appendix B, we extend this experiment to comparison of information curves for other pre-processing techniques as well as to subjecting the training data to monotonic transformations other than the beta transformation.
Experiment 2. Building on Experiment 1, we use the trained models to assess their predictive quality on the test data $(X_{test}, y_{test})$ and $(\tilde X_{test}, \tilde y_{test})$. We compute predictive scores of the latent space $t$ with respect to the generated $y$ in the form of the mutual information $I(t;y)$ for all values of the parameter $\lambda$. The resulting information curve in Figure 2(b) shows an increased predictive capability of cDIB and shows no difference from the information curve produced in Experiment 1. Thus, the increased mutual information reported in Experiment 1 cannot be attributed to overfitting alone.
Experiment 3. In the third experiment, we qualitatively assess the reconstruction capability of cDIB compared to plain DIB (Figure 3). We choose the value of λ such that in both models two dimensions are active in the latent space. Figure 3(b) shows a detailed reconstruction of y. The reconstruction quality of plain DIB on test data results in a tight backbone which is not capable of reconstructing y (Figure 3(a)).
Experiment 4. We further inspect the information curves of DIB and cDIB by testing whether the copula transformation makes the model more resilient to outliers and adversarial attacks in the training phase. To simulate an adversarial attack, we randomly choose 5% of all entries in the datasets $X_{train}$ and $\tilde X_{train}$ and replace them with outliers by adding uniformly sampled noise within the range [1, 5]. We again compute information curves for the training procedure and compare normal training with training on data subject to an attack for the copula and non-copula models. The results (Figure 4(a)) show that the copula model is more robust against outlier data than the plain one. We attribute this behaviour directly to the copula transformation, as ranks are less sensitive to outliers than raw data.
Experiment 5. In this experiment, we investigate how the copula transformation affects convergence of the neural networks making up the DIB. We focus on the encoder and track the values
of the loss function. Figure 4(b) shows a sample comparison of convergence of DIB and cDIB for λ = 100. One can see that the cDIB starts to converge around iteration no. 1000, whereas the plain DIB takes longer. This can be explained by the fact that in the copula model the marginals are normalised to the same range of normal quantiles by the copula transformation. This translates to higher convergence rates.
4.2 REAL-WORLD DATA
We continue analysing the impact of the copula transformation on the latent space of the DIB with a real-world dataset. We first report information curves analogous to Experiment 1 (Section 4.1) and proceed to inspect the latent spaces of both models along with sensitivity analysis with respect to λ.
Dataset and Set-up. We consider the unnormalised Communities and Crime dataset (Lyons et al., 1998) from the UCI repository.1 The dataset consists of 125 predictive, 4 non-predictive and 18 target variables, with 2215 samples in total. In a preprocessing step, we removed all missing values from the dataset. In the end, we used 1901 observations with 102 predictive and 18 target variables in our analysis.
1http://archive.ics.uci.edu/ml/datasets/communities+and+crime+unnormalized
We use a latent layer with 18 nodes that models the means of the 18-dimensional latent space t. Again, the variance of the latent and the output space is set to 1. The stochastic encoder as well as the stochastic decoder consist of a neural network with two fully-connected hidden layers with 100 nodes each. Softplus is employed as the activation function. The decoder uses a Gaussian likelihood. Our model is trained for 150000 iterations using mini batches with a size of 1255. As before, we use Adam (Kingma & Ba, 2014) with a learning rate of 0.0005.
Experiment 6. Analogously to Experiment 1 (Section 4.1), information curves stemming from the DIB and cDIB models have been computed. We record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.01 during training. Again, the information curve for the copula model yields larger values of mutual information, which we attribute to the increased flexibility of the model, as we pointed out in Section 3.3. In addition, the application of the copula transformation leads to a much lower number of used dimensions in the latent space. For example, copula DIB uses only four dimensions in the latent space for the highest λ values. DIB, on the other hand, needs eight dimensions in the latent space and nonetheless results in lower mutual information scores. In order to show that our information curves are significantly different, we perform a Kruskal-Wallis rank test (p-value of $1.6 \cdot 10^{-16}$).
Experiment 7. This experiment illustrates the difference in the disentanglement of the latent spaces of the DIB model with and without the copula transformation. We select two variables which yielded highest correlation with the target variable arsons and plot them along with their densities. In order to obtain the corresponding class labels (rainbow colours in Figure 6), we separate the values of arsons in eight equally-sized bins. A sample comparison of latent spaces of DIB and cDIB for λ = 21.55 is depicted in Figure 6. A more in-depth analysis of sensitivity of the learned latent space to λ is presented in Appendix A. The latent space t of DIB appears consistently less structured than that of cDIB, which is also reflected in the densities of the two plotted variables. In contrast, we can identify a much clearer structure in the latent space with respect to our previously calculated class labels.
5 CONCLUSION
We have presented a novel approach to compact representation learning of deep latent variable models. To this end, we showed that restoring invariance properties of the Deep Information Bottleneck with a copula transformation leads to disentanglement of the features in the latent space. Subsequently, we analysed how the copula transformation translates to sparsity in the latent space of the considered model. The proposed model allows for a simplified and fully non-parametric treatment of marginal distributions which has the advantage that it can be applied to distributions with arbitrary marginals. We evaluated our method on both artificial and real data. We showed that in practice the copula transformation leads to latent spaces that are disentangled, have an increased prediction capability and are resilient to adversarial attacks. All these properties are not sensitive to the only hyperparameter of the model, λ.
In Section 3.2, we motivated the copula transformation for the Deep Information Bottleneck with the lack of invariance properties present in the original Information Bottleneck model, making the copula augmentation particularly suited for the DIB. The relevance of the copula transformation, however, reaches beyond the variational autoencoder, as evidenced by e.g. the resilience to adversarial attacks or the positive influence on convergence rates presented in Section 4. These are advantages of our model that do not simply follow from restoring the Information Bottleneck properties to the DIB, but are additional benefits of the copula. The copula transformation thus promises to be a simple but powerful addition to the general deep learning toolbox.
ACKNOWLEDGMENTS
This work was partially supported by the Swiss National Science Foundation under grants CR32I2 159682 and 51MRP0 158328 (SystemsX.ch).
B EXTENSION OF EXPERIMENT 1
Building on Experiment 1 from Section 4, we again compare the information curves produced by the DIB and its copula augmentation. We compare the copula transformation with data normalisation (transformation to mean 0 and variance 1) in Figure 9(a). We also replace the beta transformation with gamma in the experimental set-up described in Section 4 and report the results in Figure 9(b). As in Experiment 1, one can see that the information curve for the copula version of DIB lies above the plain one. The latent space uses fewer dimensions as well. | 1. What are the key contributions and novel aspects introduced by the paper in deep variational bottleneck approaches?
2. What are the weaknesses of the paper regarding its experimental results and comparisons with other works?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any minor comments or suggestions provided by the reviewer for improving the paper? | Review | Review
[====================================REVISION ======================================================]
Ok so the paper underwent a major remodel, which significantly improved the clarity. I do agree now on Figure 5, which tips the scale for me to a weak accept.
[====================================END OF REVISION ================================================]
This paper explores the problems of existing deep variational bottleneck approaches for compact representation learning. Namely, the authors adjust the deep variational bottleneck to conform to invariance properties (by making the latent variable space depend on the copula only) - they name this model a copula extension to DVIB. They then go on to explore the sparsity of the latent space.
My main issues with this paper are the experiments: the proposed approach is tested only on 2 datasets (one synthetic, one real but tiny - 2K instances) and some of the plots (like Figure 5) are not convincing to me. On top of that, it is not clear how the two methods compare computationally and how the introduction of the copula affects convergence (if it does).
Minor comments
Page 1: forcing an compact -> forcing a compact
“and and” =>and
Section 2: mention that I is mutual information, it is not obvious for everyone
Figure 3: circles/triangles are too small, hard to see
Figure 5: not really convincing. B does not appear much more structured than a, to me it looks like a simple transformation of a. |
ICLR | Title
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Abstract
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
1 INTRODUCTION
In recent years, deep latent variable models (Kingma & Welling, 2013; Rezende et al., 2014; Goodfellow et al., 2014) have become a popular toolbox in the machine learning community for a wide range of applications (Ledig et al., 2016; Reed et al., 2016; Isola et al., 2016). At the same time, the compact representation, sparsity and interpretability of the latent feature space have been identified as crucial elements of such models. In this context, multiple contributions have been made in the field of relevant feature extraction (Chalk et al., 2016; Alemi et al., 2016) and learning of disentangled representations of the latent space (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017).
In this paper, we consider latent space representation learning. We focus on disentangling features with the copula transformation and, building on that, on forcing a compact low-dimensional representation with a sparsity-inducing model formulation. To this end, we adopt the deep information bottleneck (DIB) model (Alemi et al., 2016) which combines the information bottleneck and variational autoencoder methods. The information bottleneck (IB) principle (Tishby et al., 2000) identifies relevant features with respect to a target variable. It takes two random vectors x and y and searches for a third random vector t which, while compressing x, preserves information contained in y. A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a generative model which learns a latent representation t of x by using the variational approach.
Although DIB produces good results in terms of image classification and adversarial attacks, it suffers from two major shortcomings. First, the IB solution only depends on the copula of x and y and is thus invariant to strictly monotone transformations of the marginal distributions. DIB does not preserve this invariance, which means that it is unnecessarily complex by also implicitly modelling the marginal distributions. We elaborate on the fundamental issues arising from this lack of invariance in Section 3. Second, the latent space of the IB is not sparse, which means that a compact feature representation is not feasible.
Our contribution is two-fold: In the first step, we restore the invariance properties of the information bottleneck solution in the DIB. We achieve this by applying a transformation of x and y which makes the latent space only depend on the copula. This is a way to fully represent all the desirable features inherent to the IB formulation. The model is also simplified by ensuring robust and fully non-parametric treatment of the marginal distributions. In addition, the problems arising from the lack of invariance to monotone transformations of the marginals are solved. In the second step, once the invariance properties are restored, we exploit the sparse structure of the latent space of DIB. This is possible thanks to the copula transformation in conjunction with the sparse parametrisation of the information bottleneck proposed by Rey et al. (2014). It translates to a more compact latent space that results in better interpretability of the model.
∗These authors contributed equally.
The remainder of this paper is structured as follows: In Section 2, we review publications on related models. Subsequently, in Section 3, we describe the proposed copula transformation and show how it fixes the shortcomings of DIB, as well as elaborate on the sparsity induced in the latent space. In Section 4, we present results of both synthetic and real data experiments. We conclude our paper in Section 5.
2 RELATED WORK
The IB principle was introduced by Tishby et al. (2000). The idea is to compress the random vector $x$ while retaining the information of the random vector $y$. This is achieved by solving the following variational problem: $\min_{p(t|x)} I(x;t) - \lambda I(t;y)$, with the assumption that $y$ is conditionally independent of $t$ given $x$, and where $I$ stands for mutual information. In recent years, copula models were combined with the IB principle in (Rey & Roth, 2012) and extended to the sparse meta-Gaussian IB (Rey et al., 2014) to become invariant against strictly monotone transformations. Moreover, the IB method has been applied to the analysis of deep neural networks in (Tishby & Zaslavsky, 2015), by quantifying mutual information between the network layers and deriving an information theoretic limit on DNN efficiency.
The variational bound and reparametrisation trick for autoencoders were introduced in (Kingma & Welling, 2013; Rezende et al., 2014). The variational autoencoder aims to learn the posterior distribution of the latent space p(t|x) and the decoder p(x|t). The general idea of combining the two approaches is to identify the solution t of the information bottleneck with the latent space t of the variational autoencoder. Consequently, the terms I(x; t) and I(t; y) in the IB problem can be expressed in terms of the parametrised conditionals p(t|x), p(y|t). Variational lower bounds on the information bottleneck optimisation problem have been considered in (Chalk et al., 2016) and (Alemi et al., 2016). Both approaches, however, treat the differential entropy of the marginal distribution as a positive constant, which is not always justified (see Section 3). A related model is introduced in (Pereyra et al., 2017), where a penalty on the entropy of output distributions of neural networks is imposed. These approaches do not introduce the invariance against strictly monotone transformations and thus do not address the issues we identify in Section 3.
A sizeable amount of work on modelling the latent space of deep neural networks has been done. The authors of (Alvarez & Salzmann, 2016) propose the use of a group sparsity regulariser. Other techniques, e.g. in (Mozer & Smolensky, 1989) are based on removing neurons which have a limited impact on the output layer, but they frequently do not scale well with the overall network size. More recent approaches include training neural networks of smaller size to mimic a deep network (Hinton et al., 2015; Romero et al., 2014). In addition, multiple contributions have been proposed in the area of latent space disentanglement (Chen et al., 2016; Bouchacourt et al., 2017; Higgins et al., 2017; Denton & Birodkar, 2017). None of the approaches consider the influence of the copula on the modelled latent space.
Copula models have been proposed in the context of Bayesian variational methods in (Suh & Choi, 2016), (Tran et al., 2015) and (Han et al., 2016). The former approaches focus on treating the latent space variables as indicators of local approximations of the original space. None of the three approaches relate to the information bottleneck framework.
3 MODEL
3.1 FORMULATION
In order to specify our model, we start with a parametric formulation of the information bottleneck:
$$\max_{\phi,\theta}\; -I_\phi(t;x) + \lambda\, I_{\phi,\theta}(t;y), \qquad (1)$$
where I stands for mutual information with its parameters in the subscript. A parametric form of the conditionals pφ(t|x) and pθ(y|t) as well as the information bottleneck Markov chain t − x − y are assumed. A graphical illustration of the proposed model is depicted in Figure 1.
The two terms in Eq. (1) have the following forms:
$$I_\phi(T;X) = D_{KL}\big(p(t|x)p(x)\,\|\,p(t)p(x)\big) = E_{p(x)}\, D_{KL}\big(p_\phi(t|x)\,\|\,p(t)\big), \qquad (2)$$
and
$$I_{\phi,\theta}(T;Y) = D_{KL}\Big(\Big[\int p(t|y,x)\,p(y,x)\,dx\Big] \,\Big\|\, p(t)p(y)\Big) = E_{p(x,y)}\, E_{p_\phi(t|x)} \log p_\theta(y|t) + h(Y), \qquad (3)$$
because of the Markov assumption in the information bottleneck model, $p_\phi(t|x,y) = p_\phi(t|x)$. We denote with $h(y) = -E_{p(y)}[\log p(y)]$ the entropy for discrete $y$ and the differential entropy for continuous $y$. We then assume a conditional independence copula and Gaussian margins:
$$p_\phi(t|x) = c_{T|X}\big(u(t|x)\,\big|\,x\big) \prod_j p_{\phi_j}(t_j|x) = \prod_j N\big(t_j \,\big|\, \mu_j(x), \sigma_j^2(x)\big),$$
where $t_j$ is the $j$th marginal of $t = (t_1, \ldots, t_d)$, $c_{t|x}$ is the copula density of $t|x$, $u(t|x) := F_{t|x}(t|x)$ is the uniform density indexed by $t|x$, and the functions $\mu_j(x), \sigma_j^2(x)$ are implemented by deep networks. We make the same assumption about $p_\theta(y|t)$.
3.2 MOTIVATION
As we stated in Section 1, the deep information bottleneck model derived in Section 3.1 is not invariant to strictly increasing transformations of the marginal distributions. The IB method is formulated in terms of mutual information I(x, y), which depends only on the copula and therefore does not depend on monotone transformations of the marginals: I(x, y) = MI(x, y) −MI(x) −MI(y), where MI(x), for x = (x1, . . . , xd), denotes the multi-information, which is equal to the negative copula entropy, as shown by Ma & Sun (2011):
$$MI(X) := D_{KL}\Big(p(x)\,\Big\|\,\prod_j p_j(x_j)\Big) = \int c_X(u(x)) \log c_X(u(x))\, du = -h\big(c_X(u(x))\big). \qquad (4)$$
Issues with lack of invariance to marginal transformations.
1. On the encoder side (Eq. (2)), the optimisation is performed over the parametric conditional margins pφ(tj |x) in Iφ(t;x) = Ep(x)DKL (pφ(t|x)‖p(t)). When a monotone transformation xj → x̃j is applied, the required invariance property can only be guaranteed if the model for φ (in our case a deep network) is flexible enough to compensate for this transformation, which can be a severe problem in practice (see example in Section 4.1).
2. On the decoder side, assuming Gaussian margins in pθ(yj |t) might be inappropriate for modelling y if the domain of y is not equal to the real numbers, e.g. when y is defined only on a bounded
interval. If used in a generative way, the model might produce samples outside the domain of y. Even if other distributions than Gaussian are considered, such as truncated Gaussian, one still needs to make assumptions concerning the marginals. According to the IB formulation, such assumptions are unnecessary.
3. Also on the decoder side, we have: Iφ(t; y) = Ep(x,y)Epφ(t|x) log pθ(y|t) + h(y). The authors of Alemi et al. (2016) argue that since h(y) is constant, it can be ignored in computing Iφ(t; y). This is true for a fixed or for a discrete y, but not for the class of monotone transformations of y, which should be the case for a model specified with mutual informations only. Since the left hand side of this equation (Iφ(t; y)) is invariant against monotone transformations, and h(y) in general depends on monotone transformations, the first term on the right hand side (Ep(x,y)Epφ(t|x) log pθ(y|t)) cannot be invariant to monotone transformations. In fact, under such transformations, the differential entropy h(y) can take any value from −∞ to +∞, which can be seen easily by decomposing the entropy into the copula entropy and the sum of marginal entropies (here, j stands for the jth dimension):
$$h(y) = h\big(c_y(u(y))\big) + \sum_j h(y_j) = -MI(y) + \sum_j h(y_j). \qquad (5)$$
The first term (i.e. the copula entropy which is equal to the negative multi-information, as in Eq. (4)) is a non-positive number. The marginal entropies h(yj) can take any value when using strictly increasing transformations (for instance, the marginal entropy of a uniform distribution on [a, b] is log(b − a)). As a consequence, the entropy term h(y) in Eq. (3) can be treated as a constant only either for one specific y or for discrete y, but not for all elements of the equivalence class containing all monotone transformations of y. Moreover, every such transformation would lead to different (I(x, t), I(y, t)) pairs in the information curve, which basically makes this curve arbitrary. Thus, h(y) being constant is a property that needs to be restored.
3.3 PROPOSED SOLUTION
The issues described in Section 3.2 can be fixed by using transformed variables (for a d dimensional x = (x1, . . . , xd), xj stands for the jth dimension):
x̃_j = Φ^{−1}(F̂(x_j)),   x_j = F̂^{−1}(Φ(x̃_j)),   (6)
where Φ is the Gaussian cdf and F̂ is the empirical cdf. The same transformation is applied to y. In the copula literature, these transformed variables are sometimes called normal scores. Note that the mapping is (approximately) invertible: x_j = F̂^{−1}(Φ(x̃_j)), with F̂^{−1} being the empirical quantiles treated as a function (e.g. by linear interpolation). This transformation fixes the invariance problem on the encoding side (issue 1), as well as the problems on the decoding side: problem 2 disappears because the transformed variables x̃_j are standard normal distributed, and problem 3 disappears because the decoder part (Eq. (3)) now takes the form:
E_{p(x̃,ỹ)} E_{p_φ(t|x̃)} log p_θ(ỹ|t) = I_φ(t; ỹ) + MI(ỹ) − ∑_j h(ỹ_j) = I_φ(t; ỹ) − h(c_inv(u(ỹ)))   (7)
where cinv(u(ỹ)) is indeed constant for all strictly increasing transformations applied to y.
Having solved the IB problem in the transformed space, we can go back to the original space by using the inverse transformation according to Eq. (6) xj = F̂−1(Φ(x̃j)). The resulting model is thus a variational autoencoder with x replaced by x̃ in the first term and y replaced by ỹ in the second term.
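To make the transformation in Eq. (6) concrete, the following is a minimal sketch of the normal-scores mapping and its approximate inverse based on an empirical cdf; the function names and the linear-interpolation choice are our own illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def to_normal_scores(x):
    """Map each column of x to approximately standard-normal scores: x_tilde = Phi^{-1}(F_hat(x))."""
    n = x.shape[0]
    # Rank-based empirical CDF, rescaled to (0, 1) so that Phi^{-1} stays finite.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    u = ranks / (n + 1.0)
    return norm.ppf(u)

def from_normal_scores(x_tilde, x_ref):
    """Approximate inverse: map normal scores back through the empirical quantiles of reference data x_ref."""
    u = norm.cdf(x_tilde)
    out = np.empty_like(x_tilde)
    for j in range(x_tilde.shape[1]):
        out[:, j] = np.quantile(x_ref[:, j], u[:, j])  # linear interpolation between order statistics
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.beta(2.0, 5.0, size=(1000, 3))    # non-Gaussian marginals
    x_tilde = to_normal_scores(x)             # approximately N(0, 1) marginals
    x_back = from_normal_scores(x_tilde, x)   # close to the original samples
    print(x_tilde.mean(axis=0), x_tilde.std(axis=0))
```

Because the mapping only uses ranks, any strictly increasing transformation of a column of x leaves x̃ unchanged, which is exactly the invariance property the model requires.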
Technical details. We assume a simple prior p(t) = N (t; 0, I). Therefore, the KL divergence DKL (pφ(t|x̃)‖p(t)) is a divergence between two Gaussian distributions and admits an analytical form. We then estimate
I(t; x̃) = E_{p(x̃)} D_KL(p_φ(t|x̃) ‖ p(t)) ≈ (1/n) ∑_i D_KL(p_φ(t|x̃_i) ‖ p(t))   (8)
and all the gradients on (mini-)batches.
For the decoder side, Ep(x̃,ỹ)Epφ(t|x̃) log pθ(ỹ|t) is needed. We train our model using the backpropagation algorithm. However, this algorithm can only handle deterministic nodes. In order to overcome
this problem, we make use of the reparametrisation trick (Kingma & Welling, 2013; Rezende et al., 2014):
I(t; ỹ) = Ep(x̃,ỹ)E ∼N (0,I) ∑ j log pθ(ỹj |t = ~µj(x̃) + diag(σj(x̃)) ) + const., (9)
with ỹj = Φ−1(F̂ (yj)).
3.4 SPARSITY OF THE LATENT SPACE
In this section we explain how the sparsity constraint on the information bottleneck along with the copula transformation result in sparsity of the latent space t. We first introduce the Sparse Gaussian Information Bottleneck and subsequently show how augmenting it with the copula transformation leads to the sparse t.
Sparse Gaussian Information Bottleneck. Recall that the information bottleneck compresses x to a new variable t by minimising I(x; t)− λI(t; y). This ensures that some amount of information with respect to a second “relevance” variable y is preserved in the compression.
The assumption that x and y are jointly Gaussian-distributed leads to the Gaussian Information Bottleneck (Chechik et al., 2005) where the solution t can be proved to also be Gaussian distributed. In particular, if we denote the marginal distribution of x: x ∼ N (0,Σx), the optimal t is a noisy projection of x of the following form:
t = Ax + ξ,   ξ ∼ N(0, I)   ⇒   t|x ∼ N(Ax, I),   t ∼ N(0, AΣ_x A^⊤ + I). The mutual information between x and t is then equal to I(x; t) = (1/2) log |AΣ_x A^⊤ + I|. In the sparse Gaussian Information Bottleneck, we additionally assume that A is diagonal, so that the compressed t is a sparse version of x. Intuitively, sparsity follows from the observation that for a pair of random variables x, x′, any full-rank projection Ax′ of x′ would lead to the same mutual information since I(x, x′) = I(x; Ax′), and a reduction in mutual information can only be achieved by a rank-deficient matrix A. For diagonal projections, this immediately implies sparsity of A.
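For intuition, the small numerical sketch below evaluates the Gaussian expression I(x; t) = (1/2) log|AΣ_x A^⊤ + I| with a diagonal projection, showing that zeroing a diagonal entry of A removes that coordinate's contribution to the mutual information. This is illustrative code of our own, not taken from the paper.

```python
import numpy as np

def gaussian_ib_mi(A, sigma_x):
    """I(x; t) = 0.5 * log det(A Sigma_x A^T + I) for t = A x + xi, xi ~ N(0, I)."""
    _, logdet = np.linalg.slogdet(A @ sigma_x @ A.T + np.eye(A.shape[0]))
    return 0.5 * logdet

sigma_x = np.diag([4.0, 1.0, 0.25])
A_full = np.diag([1.0, 1.0, 1.0])
A_sparse = np.diag([1.0, 1.0, 0.0])   # compresses away the third coordinate
print(gaussian_ib_mi(A_full, sigma_x), gaussian_ib_mi(A_sparse, sigma_x))
```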
Sparse latent space of the Deep Information Bottleneck. We now proceed to explain the sparsity induced in the latent space of the copula version of the DIB introduced in Section 3.3. We will assume a possibly general, abstract pre-transformation of x, fβ , which accounts for the encoder network along with the copula transformation of x. Then we will show how allowing for this abstract pre-transformation, in connection with the imposed sparsity constraint of the sparse information bottleneck described above, translates to the sparsity of the latent space of the copula DIB. By sparsity we understand the number of active neurons in the last layer of the encoder.
To this end, we use the Sparse Gaussian Information Bottleneck model described above. We analyse the encoder part of the DIB, described by I(x, t). Consider the general Gaussian Information Bottleneck (with x and y jointly Gaussian and a full matrix A) and a deterministic pre-transformation, f_β(x), performed on x. The pre-transformation is parametrised by a set of parameters β, which might be the weights of neurons should f_β be implemented as a neural network. Denote by M an n × p matrix which contains n i.i.d. samples of A f_β(x), i.e. M = AZ with Z = (f_β(x_1), . . . , f_β(x_n))^⊤. The optimisation of the mutual information I(x, t) in min I(x; t) − λI(t; y) is then performed over M and β.
Given f_β and the above notation, the estimator of I(x; t) = (1/2) log |AΣ_x A^⊤ + I| becomes:
Î(x; t) = (1/2) log | (1/n) M M^⊤ + I |,   (10)
which would further simplify to Î(x; t) = (1/2) ∑_i log(D_ii + 1) if the pre-transformation f_β were indeed such that D := (1/n) M M^⊤ were diagonal. This is equivalent to the Sparse Gaussian Information Bottleneck model described above. Note that this means that the sparsity constraint in the Sparse Gaussian IB does not cause any loss of generality of the IB solution as long as the abstract
pre-transformation f_β makes it possible to diagonalise (1/n) M M^⊤ in Eq. (10). We can, however, approximate this case by forcing this diagonalisation in Eq. (10), i.e. by only considering the diagonal part of the matrix: I′(x; t) = (1/2) log |diag((1/n) M M^⊤ + I)|. We now explain why this approximation (replacing Î(x; t) with I′(x; t)) is justified and how it leads to f_β finding a low-dimensional representation of the latent space. Note that for any positive definite matrix B, the determinant |B| is always upper bounded by ∏_i B_ii = |diag(B)|, which is a consequence of Hadamard's inequality. Thus, instead of minimising Î(x; t), we minimise an upper bound I′(x; t) ≥ Î(x; t) in the Information Bottleneck cost function. Equality is obtained if the transformation f_β, which we assume to be part of an "end-to-end" optimisation procedure, indeed successfully diagonalised D = (1/n) M M^⊤. Note that equality in Hadamard's inequality is equivalent to D + I being orthogonal, thus f_β is forced to find the "most orthogonal" representation of the inputs in the latent space. Using a highly flexible f_β (for instance, modelled by a deep neural network), we might approximate this situation reasonably well. This explains how the copula transformation translates to a low-dimensional representation of the latent space.
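The following snippet illustrates the Hadamard-inequality bound used above: for a positive-definite matrix, the log-determinant of its diagonal part upper-bounds its log-determinant, with equality when the matrix is already diagonal. This is again an illustrative sketch of our own (the shape conventions for M are an assumption), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
M = rng.normal(size=(p, n))                  # columns play the role of the n samples of A f_beta(x)
D = M @ M.T / n + np.eye(p)                  # (1/n) M M^T + I as in Eq. (10)

i_hat = 0.5 * np.linalg.slogdet(D)[1]        # Eq. (10)
i_upper = 0.5 * np.log(np.diag(D)).sum()     # diagonal approximation I'(x; t)
assert i_upper >= i_hat                      # Hadamard's inequality for positive-definite D
print(i_hat, i_upper)
```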
We indeed observe disentanglement and a sparse structure in the latent space learned by the copula DIB model when comparing it to the plain DIB without the copula transformation. We demonstrate this in Section 4.
4 EXPERIMENTS
We now proceed to experimentally verify the contributions of the copula Deep Information Bottleneck. The goal of the experiments is to test the impact of the copula transformation. To this end, we perform a series of pair-wise experiments, where DIB without and with (cDIB) the copula transformation are tested in the same set-up. We use two datasets (artificial and real-world) and devise multiple experimental set-ups.
4.1 ARTIFICIAL DATA
First, we construct an artificial dataset such that a high-dimensional latent space is needed for its reconstruction (the dataset is reconstructed when samples from the latent space spatially coincide with it in its high-dimensional space). We perform monotone transformations on this dataset and test the difference between DIB and cDIB on reconstruction capabilities as well as classification predictive score.
Dataset and set-up. The model used to generate the data consists of two input vectors x_1 and x_2 drawn from a uniform distribution defined on [0, 2] and vectors k_1 and k_2 drawn uniformly from [0, 1]. Additional inputs are x_i = a_i·k_1 + (1 − a_i)·k_2 + 0.3·b_i for i = 3, . . . , 10, with a_i, b_i drawn from a uniform distribution defined on [0, 1]. All input vectors x_1, . . . , x_10 form the input matrix X. Latent variables z_1 = √(x_1² + x_2²) and z_2 = z_1 + x_4 are defined and then normalised by dividing by their maximum value. Finally, random noise is added. Two target variables y_1 = z_2 · cos(1.75 · π · z_1) and y_2 = z_2 · sin(1.75 · π · z_1) are then calculated. y_1 and y_2 form a spiral if plotted in two dimensions. The angle and the radius of the spiral are highly correlated, so a one-dimensional latent space can only reconstruct the backbone of the spiral. In order to reconstruct the details of the radial function, one has to use a latent space of at least two dimensions. We generate 200k samples from X and y. X is further transformed to beta densities using strictly increasing transformations. We split the samples into test (20k samples) and training (180k samples) sets. The generated samples are then transformed with the copula transformation (Eq. (6)) to X̃ and ỹ and split in the same way into test and training sets. This gives us the four input sets X_train, X_test, X̃_train, X̃_test and the four target sets y_train, y_test, ỹ_train, ỹ_test.
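A sketch of this data-generating process is given below; it follows our reading of the text, and several details it does not pin down (the noise level, and whether a_i, b_i are per-dimension scalars or per-sample draws) are filled in as assumptions.

```python
import numpy as np

def make_spiral_dataset(n=200_000, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x1, x2 = rng.uniform(0, 2, n), rng.uniform(0, 2, n)
    k1, k2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    extra = []
    for _ in range(8):                                   # inputs x_3 ... x_10
        a, b = rng.uniform(0, 1, n), rng.uniform(0, 1, n)   # assumption: sampled per sample
        extra.append(a * k1 + (1 - a) * k2 + 0.3 * b)
    X = np.column_stack([x1, x2] + extra)
    z1 = np.sqrt(x1 ** 2 + x2 ** 2)
    z2 = z1 + X[:, 3]                                    # x_4 is the fourth column
    z1, z2 = z1 / z1.max(), z2 / z2.max()                # normalise by the maximum value
    z1 = z1 + noise * rng.normal(size=n)                 # "random noise is added" (level assumed)
    z2 = z2 + noise * rng.normal(size=n)
    y = np.column_stack([z2 * np.cos(1.75 * np.pi * z1),
                         z2 * np.sin(1.75 * np.pi * z1)])
    return X, y

X, y = make_spiral_dataset()
print(X.shape, y.shape)   # (200000, 10) (200000, 2)
```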
We use a latent layer with ten nodes that model the means of the ten-dimensional latent space t. The variance of the latent space is set to 1 for simplicity. The encoder as well as the decoder consist of a neural network with two fully-connected hidden layers with 50 nodes each. We use the softplus function as the activation function. Our model is trained using mini batches (size = 500) with the Adam optimiser (Kingma & Ba, 2014) for 70000 iterations using a learning rate of 0.0006.
Experiment 1. In the first experiment, we compare the information curves produced by the DIB and its copula augmentation (Figure 2(a)). To this end, we use the sets (Xtrain, ytrain) and (X̃train, ỹtrain) and record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.06 during training. One can observe an increase in the mutual information from approximately 6 in the DIB to approximately 11 in the copula DIB. At the same time, only two dimensions are used in the latent space t by the copula DIB. The version without copula does not provide competitive results despite using 10 out of 18 dimensions of the latent space t. In Appendix B, we extend this experiment to comparison of information curves for other pre-processing techniques as well as to subjecting the training data to monotonic transformations other than the beta transformation.
Experiment 2. Building on Experiment 1, we use the trained models for assessing their predictive quality on test data (Xtest, ytest) and (X̃test, ỹtest). We compute predictive scores of the latent space t with respect to the generated y in the form of mutual information I(t; y) for all values of the parameter λ. The resulting information curve shows an increased predictive capability of cDIB in Figure 2(b) and exhibits no difference to the information curve produced in Experiment 1. Thus, the increased mutual information reported in Experiment 1 cannot only be attributed to overfitting.
Experiment 3. In the third experiment, we qualitatively assess the reconstruction capability of cDIB compared to plain DIB (Figure 3). We choose the value of λ such that in both models two dimensions are active in the latent space. Figure 3(b) shows a detailed reconstruction of y. The reconstruction quality of plain DIB on test data results in a tight backbone which is not capable of reconstructing y (Figure 3(a)).
Experiment 4. We further inspect the information curves of DIB and cDIB by testing how the copula transformation adds resilience against outliers and adversarial attacks in the training phase. To simulate an adversarial attack, we randomly choose 5% of all entries in the datasets X_train and X̃_train and replace them with outliers by adding uniformly sampled noise within the range [1, 5]. We again compute information curves for the training procedure and compare normal training with training on attacked data for the copula and non-copula models. The results (Figure 4(a)) show that the copula model is more robust against outlier data than the plain one. We attribute this behaviour directly to the copula transformation, as ranks are less sensitive to outliers than raw data.
Experiment 5. In this experiment, we investigate how the copula transformation affects convergence of the neural networks making up the DIB. We focus on the encoder and track the values
of the loss function. Figure 4(b) shows a sample comparison of convergence of DIB and cDIB for λ = 100. One can see that the cDIB starts to converge around iteration no. 1000, whereas the plain DIB takes longer. This can be explained by the fact that in the copula model the marginals are normalised to the same range of normal quantiles by the copula transformation. This translates to higher convergence rates.
4.2 REAL-WORLD DATA
We continue analysing the impact of the copula transformation on the latent space of the DIB with a real-world dataset. We first report information curves analogous to Experiment 1 (Section 4.1) and proceed to inspect the latent spaces of both models along with sensitivity analysis with respect to λ.
Dataset and Set-up. We consider the unnormalised Communities and Crime dataset (Lyons et al., 1998) from the UCI repository¹. The dataset consists of 125 predictive, 4 non-predictive and 18 target variables with 2215 samples in total. In a preprocessing step, we removed all missing values from the dataset. In the end, we used 1901 observations with 102 predictive and 18 target variables in our analysis.
¹ http://archive.ics.uci.edu/ml/datasets/communities+and+crime+unnormalized
We use a latent layer with 18 nodes that models the means of the 18-dimensional latent space t. Again, the variance of the latent and the output space is set to 1. The stochastic encoder as well as the stochastic decoder consist of a neural network with two fully-connected hidden layers with 100 nodes each. Softplus is employed as the activation function. The decoder uses a Gaussian likelihood. Our model is trained for 150000 iterations using mini batches with a size of 1255. As before, we use Adam (Kingma & Ba, 2014) with a learning rate of 0.0005.
Experiment 6. Analogously to Experiment 1 (Section 4.1), information curves stemming from the DIB and cDIB models have been computed. We record the values of I(x; t) and I(y; t) while multiplying the λ parameter every 500 iterations by 1.01 during training. Again, the information curve for the copula model yields larger values of mutual information, which we attribute to the increased flexibility of the model, as we pointed out in Section 3.3. In addition, the application of the copula transformation leads to a much lower number of used dimensions in the latent space. For example, copula DIB uses only four dimensions in the latent space for the highest λ values. DIB, on the other hand, needs eight dimensions in the latent space and nonetheless results in lower mutual information scores. In order to show that our information curves are significantly different, we perform a Kruskal-Wallis rank test (p-value of 1.6 × 10⁻¹⁶).
Experiment 7. This experiment illustrates the difference in the disentanglement of the latent spaces of the DIB model with and without the copula transformation. We select two variables which yielded highest correlation with the target variable arsons and plot them along with their densities. In order to obtain the corresponding class labels (rainbow colours in Figure 6), we separate the values of arsons in eight equally-sized bins. A sample comparison of latent spaces of DIB and cDIB for λ = 21.55 is depicted in Figure 6. A more in-depth analysis of sensitivity of the learned latent space to λ is presented in Appendix A. The latent space t of DIB appears consistently less structured than that of cDIB, which is also reflected in the densities of the two plotted variables. In contrast, we can identify a much clearer structure in the latent space with respect to our previously calculated class labels.
5 CONCLUSION
We have presented a novel approach to compact representation learning of deep latent variable models. To this end, we showed that restoring invariance properties of the Deep Information Bottleneck with a copula transformation leads to disentanglement of the features in the latent space. Subsequently, we analysed how the copula transformation translates to sparsity in the latent space of the considered model. The proposed model allows for a simplified and fully non-parametric treatment of marginal distributions which has the advantage that it can be applied to distributions with arbitrary marginals. We evaluated our method on both artificial and real data. We showed that in practice the copula transformation leads to latent spaces that are disentangled, have an increased prediction capability and are resilient to adversarial attacks. All these properties are not sensitive to the only hyperparameter of the model, λ.
In Section 3.2, we motivated the copula transformation for the Deep Information Bottleneck with the lack of invariance properties present in the original Information Bottleneck model, making the copula augmentation particularly suited for the DIB. The relevance of the copula transformation, however, reaches beyond the variational autoencoder, as evidenced by e.g. resilience to adversarial attacks or the positive influence on convergence rates presented in Section 4. These advantages of our model do not simply follow from restoring the Information Bottleneck properties to the DIB, but are additional benefits of the copula. The copula transformation thus promises to be a simple but powerful addition to the general deep learning toolbox.
ACKNOWLEDGMENTS
This work was partially supported by the Swiss National Science Foundation under grants CR32I2 159682 and 51MRP0 158328 (SystemsX.ch).
B EXTENSION OF EXPERIMENT 1
Building on Experiment 1 from Section 4, we again compare the information curves produced by the DIB and its copula augmentation. We compare the copula transformation with data normalisation (transformation to mean 0 and variance 1) in Figure 9(a). We also replace the beta transformation with a gamma transformation in the experimental set-up described in Section 4 and report the results in Figure 9(b). As in Experiment 1, one can see that the information curve for the copula version of DIB lies above the plain one. The latent space uses fewer dimensions as well. | 1. What is the focus of the paper regarding deep variational information bottleneck models?
2. What are the strengths and weaknesses of the proposed copula-based modification?
3. Do you have any concerns or suggestions regarding the experimental results and their interpretation?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What is the originality and significance of the paper's contribution in the context of related works? | Review | Review
The paper proposed a copula-based modification to an existing deep variational information bottleneck model, such that the marginals of the variables of interest (x, y) are decoupled from the DVIB latent variable model, allowing the latent space to be more compact when compared to the non-modified version. The experiments verified the relative compactness of the latent space, and also qualitatively shows that the learned latent features are more 'disentangled'. However, I wonder how sensitive are the learned latent features to the hyper-parameters and optimizations?
Quality: Ok. The claims appear to be sufficiently verified in the experiments. However, it would have been great to have an experiment that actually makes use of the learned features to make predictions. I struggle a little to see the relevance of the proposed method without a good motivating example.
Clarity: Below average. Section 3 is a little hard to understand. Is q(t|x) in Fig 1 a typo? How about t_j in equation (5)? There is a reference that appeared twice in the bibliography (1st and 2nd).
Originality and Significance: Average. The paper (if I understood it correctly) appears to be mainly about borrowing the key ideas from Rey et. al. 2014 and applying it to the existing DVIB model. |
ICLR | Title
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
Abstract
Real world multi-agent tasks often involve varying types and quantities of agents and non-agent entities; however, agents within these tasks rarely need to consider all others at all times in order to act effectively. Factored value function approaches have historically leveraged such independences to improve learning efficiency, but these approaches typically rely on domain knowledge to select fixed subsets of state features to include in each factor. We propose to utilize value function factoring with random subsets of entities in each factor as an auxiliary objective in order to disentangle value predictions from irrelevant entities. This factoring approach is instantiated through a simple attention mechanism masking procedure. We hypothesize that such an approach helps agents learn more effectively in multi-agent settings by discovering common trajectories across episodes within sub-groups of agents/entities. Our approach, Randomized Entity-wise Factorization for Imagined Learning (REFIL), outperforms all strong baselines by a significant margin in challenging StarCraft micromanagement tasks.
N/A
1 INTRODUCTION
Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents, antagonist enemies or other entities. Agents, however, can often select their optimal actions while ignoring a subset of agents/entities. For example, in the sport of soccer, a “breakaway” occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score (see Figure 1). In this situation, only the opposing goalkeeper is immediately relevant to the attacker’s success, so the attacker can safely ignore players other than the goalkeeper for the time being. By ignoring irrelevant context, the attacker can generalize this experience better to its next breakaway. Furthermore, soccer takes many forms, from casual 5 vs. 5 to full scale 11 vs. 11 matches, and breakaways occur in all. If agents can identify independent patterns of
behavior such as breakaways, they should be able to learn more efficiently as well as share their experiences across all forms of soccer.
Value function factoring approaches attempt to leverage independences between agents, such as those in our soccer example, by learning value functions as a combination of independent factors that depend on disjunct subsets of the state and action spaces (Koller & Parr, 1999). These subsets are typically fixed in advance using domain knowledge about the problem at hand, and thus are not scalable to complex domains where dependencies are unknown and may shift over time. Recent approaches in cooperative deep multi-agent reinforcement learning (MARL) factor value functions into separate components for each agent’s action and observation space in order to enable decentralized execution (e.g., VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018)). These approaches learn a utility function for each agent that only depends on the agent’s own action and its observations. The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the
global Q. These approaches are able to effectively leverage independence between agents’ local actions and observations, however, we note that observable entities are provided by the environment and are not all necessarily relevant to an agent’s value function.
We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities. Unlike classic works which factor the state or observation spaces, our work does not depend on fixed subsets of features designated through domain knowledge. Instead, we propose to randomly select sub-groups of observed entities and “imagine” the predicted utilities within these groups for each agent. These terms will not account for potential interactions outside of the groups, so we include additional factors that estimate the effect of the entities outside of each sub-group on each agent’s utility. In order to estimate the true returns, we combine all factors using a mixing network (as in QMIX, Rashid et al., 2018), which allows our model to weight factors based on the full state context. We hypothesize this approach is beneficial for two reasons: 1) randomly partitioning entities and predicting returns from disjunct factors allows our model to explore all possible independence relationships among agents and entities, teaching agents to ignore irrelevant context when possible and 2) by teaching our models when they can ignore irrelevant context, they will learn more efficiently across varied settings that share common patterns of behavior, such as breakaways in soccer. The loss for training randomized factorization is added to the QMIX loss (i.e., using full observations) as an auxiliary objective. Our reasoning is again twofold: 1) we must learn the true returns to use as a target prediction for a Q-learning loss. 2) we do not know a priori which entities are unnecessary and thus need to learn policies that act on full observations.
Our entity-wise factoring procedure can be implemented easily in practice by using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize their behavior across settings (5 vs. 5, 11 vs. 11, etc.) if they ignore irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding it attains improved performance over state-of-the-art methods.
2 BACKGROUND AND PRELIMINARIES
In this work, we consider the decentralized partially observable Markov decision process (DecPOMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019).
Dec-POMDPs with Entities are described as tuples: (S,U,O,P , r , E ,A,Φ, µ). E is the set of entities in the environment. Each entity e has a state representation se, and the global state is the set s = {se|e ∈ E} ∈ S. Some entities can be agents a ∈ A ⊆ E . Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity comprise two parts: se = [fe, φe] where fe represents the description of an entity's current state (e.g., position, orientation, velocity, etc.) while φe ∈ Φ represents the entity's type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity's type affects the state dynamics as well as the reward function and, importantly, it remains fixed for the duration of the entity's existence. Not all entities may be visible to each agent, so we define a binary observability mask: µ(sa, se) ∈ {1, 0}, where agents can always observe themselves µ(sa, sa) = 1,∀a ∈ A. Thus, an agent's observation is defined as oa = {se|µ(sa, se) = 1, e ∈ E} ∈ O. Each agent a can execute actions ua, and the joint action of all agents is denoted as u = {ua|a ∈ A} ∈ U. P is the state transition function which defines the probability P(s′|s,u). r(s,u) is the reward function which maps the global state and joint actions to a single scalar reward.
We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft) in which case they no longer affect transitions and rewards. Since s and u are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present {φe|e ∈ E} is fixed across episodes. We are particularly interested in cases where quantity and types of entities are varied between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases.
Learning for Dec-POMDPs We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP.Q-learning is specifically concerned with learning an accurate actionvalue function Qtot (defined below), and using this function to select the actions that maximize expected returns. The optimal Q-function for the Dec-POMDP setting is defined as:
Q^tot(s, u) := E[ ∑_{t=0}^∞ γ^t r(s_t, u_t) | s_0 = s, u_0 = u, s_{t+1} ∼ P(·|s_t, u_t), u_{t+1} = arg max Q^tot(s_{t+1}, ·) ] = r(s, u) + γ E[ max Q^tot(s′, ·) | s′ ∼ P(·|s, u) ].   (1)
Partial observability is typically handled by using the history of actions and observations as a proxy for state, processed by a recurrent neural network (RNN, Hausknecht & Stone, 2015): Q^tot_θ(τ_t, u_t) ≈ Q^tot(s_t, u_t), where the trajectory (i.e., action-observation history) is τ^a_t := (o^a_0, u^a_0, . . . , o^a_t) and τ_t := {τ^a_t}_{a∈A}.
Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions that are trained by minimizing the loss function:
L(θ) := E[ ( r_t + γ Q^tot_θ̄(τ_{t+1}, arg max Q^tot_θ(τ_{t+1}, ·)) − Q^tot_θ(τ_t, u_t) )² | (τ_t, u_t, r_t, τ_{t+1}) ∼ D ],   (2)
where r_t + γ Q^tot_θ̄(τ_{t+1}, arg max Q^tot_θ(τ_{t+1}, ·)) is the target y^tot_t, θ̄ are the parameters of a target network that is copied from θ periodically to improve stability (Mnih et al., 2015), and D is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically ε-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize Q^tot_θ for the target network Q^tot_θ̄.
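A compact sketch of the loss in Eq. (2) with double Q-learning targets is shown below. It is written for a generic Q-network over observations as a single-network simplification (the paper applies the same loss to Q^tot over joint trajectories); batch keys and shapes are our own placeholders.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """batch: dict with obs/next_obs [B, obs_dim], actions [B], rewards [B], dones [B]."""
    q_taken = q_net(batch["obs"]).gather(1, batch["actions"].unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN: argmax under the online network, evaluated by the target network.
        next_actions = q_net(batch["next_obs"]).argmax(dim=1, keepdim=True)
        next_q = target_net(batch["next_obs"]).gather(1, next_actions).squeeze(1)
        y = batch["rewards"] + gamma * (1.0 - batch["dones"]) * next_q
    return F.mse_loss(q_taken, y)
```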
Value Function Factorization Centralized training for decentralized execution (CTDE) has been a major focus in recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, with each depending only on a single agent's history of actions and observations, Q^a(τ^a, u^a). This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint action value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: Q^tot(τ, u) := ∑_a Q^a(τ^a, u^a). QMIX (Rashid et al., 2018) extends this approach to use a more expressive factorization. We describe QMIX and how we build our randomized factorization approach on top of it in Section 3.1.
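As a minimal illustration of the VDN decomposition (the point where QMIX later swaps the sum for a learned monotonic mixer), assuming per-agent utility tensors of shape [batch, n_agents, n_actions]; this is our own sketch, not the released implementation of either method.

```python
import torch

def vdn_qtot(agent_qs, actions):
    """Q_tot(tau, u) = sum_a Q^a(tau^a, u^a); agent_qs: [B, n_agents, n_actions], actions: [B, n_agents]."""
    chosen = agent_qs.gather(2, actions.unsqueeze(2)).squeeze(2)  # [B, n_agents]
    return chosen.sum(dim=1)                                      # [B]

def greedy_joint_action(agent_qs):
    """Decentralized greedy selection: each agent maximizes its own utility independently."""
    return agent_qs.argmax(dim=2)                                 # [B, n_agents]
```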
Attention Mechanisms for MARL Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable sized sets of fixed length vectors (in our case entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context.
We define X as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state s can be represented in matrix form as X^E where X_{e,∗} = s^e. Our models consist of entity-wise feedforward layers (denoted as eFF(X)) and multi-head attention layers (denoted as MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities. Multi-head attention layers serve as a mechanism to integrate information across entities. These take in three arguments: the set of agents A for which to compute an output vector, the matrix X ∈ R^{|E|×d} where d is the dimensionality of the input representations, and a mask M ∈ R^{|A|×|E|}. The layer outputs a matrix H ∈ R^{|A|×h} where h is the hidden dimension of the layer. The row H_{a,∗} corresponds to a weighted sum of linearly transformed representations from all entities selected by agent a. Importantly, if the entry of the mask M_{a,e} = 0, then entity e's representation cannot be included in H_{a,∗}. Masking serves two important purposes for us: 1) it enables decentralized execution by providing the mask M^µ_{a,e} = µ(s^a, s^e), such that agents can only see entities observable by them in the environment, and 2) it enables us to "imagine" the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-
head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX are described in detail in the Appendix.
3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING
We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent's choice of optimal actions is often independent of a subset of its observed entities (cf. the soccer breakaway example from Section 1), in addition to the choice of other agents' actions. Furthermore, we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents, as they are better able to identify shared patterns of behavior (e.g., breakaways exist in all forms of soccer). We do not know a priori which entities an agent can disregard, so we must consider all possible sub-groups of entities. As such, we propose to factor value functions by imagining returns in random sub-groups.
3.1 METHOD
QMIX (Rashid et al., 2018) relaxes the representational constraints of VDN (Sunehag et al., 2018) by allowing the joint value function Q^tot to be a non-linear monotonic function with respect to the agent-specific utilities Q^a: Q^tot = g(Q^1(τ^1, u^1; θ_Q), . . . , Q^{|A|}(τ^{|A|}, u^{|A|}; θ_Q); θ_g). The parameters of the mixing function θ_g are generated by a hyper-network (Ha et al., 2017) conditioning on the global state s: θ_g = h(s; θ_h). Every state can therefore have a different mixing function, but the mixing's monotonicity maintains decentralizability, as agents can maximize Q^tot without communication. All parameters θ = {θ_Q, θ_h} are trained with the DQN loss of Equation 2. We extend QMIX with attention layers both to encode the variable-sized set of entities observed by each per-agent utility Q^a and to mix the utilities of all agents a ∈ A. Partial observability is implemented by a mask M^µ_{a,e} = µ(s^a, s^e), ∀a ∈ A, ∀e ∈ E, that is provided to the attention layers as described in Section 2. Building on QMIX, for each agent we generate a separate utility that only observes the state features of agents within its randomly selected sub-group, Q^a_I(τ^a_I, u^a), as well as a term that accounts for interactions outside of its group, Q^a_O(τ^a_O, u^a), then mix these 2n (2 for each agent) utilities to form Q^tot. Importantly, since the mixing network is generated from the full state context, our model can weight factors contextually. For example, if agent a's sampled sub-group contains all relevant information to compute its utility such that Q^a_I ≈ Q^a, then the mixing network can weight Q^a_I more heavily than Q^a_O. Otherwise, the network learns to balance Q^a_I and Q^a_O for each agent, using the full state as context, in order to estimate Q^tot. We train with these random factorizations in addition to the original QMIX objective. Treating factorization as an auxiliary task, rather than as a representational constraint, allows our model to retain the expressivity of QMIX value functions (without sub-group partitions) while exploiting the potential independence between agents and other entities. We note that our auxiliary objective is only used during training; execution in the environment does not use random factorization.
3.2 IMPLEMENTATION
The mechanism behind our entity-wise factorization relies on a simple attention masking procedure. In order to compute in-group utilities Q^a_I(τ^a_I, u^a) and out-group utilities Q^a_O(τ^a_O, u^a), we first randomly partition all entities in E into two disjunct groups (held fixed for an episode), indicated by a random binary¹ vector m ∈ {0, 1}^{|E|}. The entry m_e determines whether entity e is in the first group, and we can take the negation ¬m_e to represent whether e is in the second group. The sub-vector corresponding to agents is denoted as m_A := [m_a]_{a∈A}. From these vectors, we can construct attention masks M ∈ R^{|A|×|E|}. For example, using the mask M_1 = m_A m^⊤ would prevent agents in the first group from "seeing" outside their group, since M_1[a, e] = 1 only if agent a and entity e are in the same group. This can be added to a similarly produced mask M_2 = ¬m_A ¬m^⊤ to create M_I, a mask that only allows all agents to see the entities within their distinct groups. We construct masks for agents to see within (M_I) and out of (M_O) their groups, then combine them with the observability masks M^µ as such:
M^µ_I := M^µ ∧ M_I,   M^µ_O := M^µ ∧ M_O,   with M_I := m_A m^⊤ ∨ ¬m_A ¬m^⊤,   M_O := ¬M_I.   (3)
¹ We first draw p ∈ (0, 1) uniformly, followed by |E| independent draws from a Bernoulli(p) distribution.
The entry M^µ_I[a, e] determines both whether agent a can see entity e and whether entity e is in agent a's group; the entry M^µ_O[a, e] is the same but for entities out of a's group. We can use these masks in our attention mechanisms to compute Q^a_I(τ^a_I, u^a), which represents the predicted utility of agent a within its group, and Q^a_O(τ^a_O, u^a), a residual term that accounts for the utility of interactions that a would have with the other group.
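A sketch of the mask construction in Eq. (3) and its combination with the observability mask is given below. The partition sampling follows the footnote; the tensor layout and the assumption that agents occupy the first |A| entity slots are ours, not the authors' code.

```python
import torch

def sample_factorization_masks(obs_mask):
    """obs_mask M^mu: [n_agents, n_entities] binary observability matrix.
    Returns (M^mu_I, M^mu_O) restricting attention to in-group / out-of-group entities."""
    n_agents, n_entities = obs_mask.shape
    p = torch.rand(1)
    m = (torch.rand(n_entities) < p).float()   # random binary partition over all entities
    m_agents = m[:n_agents]                    # assumption: agents are the first n_agents entity slots
    # M_I = m_A m^T  OR  (neg m_A)(neg m)^T; the two outer products are disjoint, so the sum stays binary.
    within = torch.outer(m_agents, m) + torch.outer(1 - m_agents, 1 - m)
    m_in = obs_mask * within                   # M^mu_I = M^mu AND M_I
    m_out = obs_mask * (1 - within)            # M^mu_O = M^mu AND M_O
    return m_in, m_out
```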
Given each agent's predicted utility factors for both in-group and out-of-group, we combine these into a Q^tot such that we can use the target from the full scenario (y^tot_t in (2)) using a mixing network as in QMIX. This network's first layer typically takes n inputs, one for each agent. Since we have 2n factors, we simply concatenate two generated versions of the input layer (using M_I and M_O). We then apply the network to the concatenated utilities Q^a_I(τ^a_I, u^a) and Q^a_O(τ^a_O, u^a) of all agents a to compute the predicted value Q^tot_aux. This procedure is visualized in Figure 2 and described in more detail in the Appendix.
Our novel approach REFIL uses Q^tot_aux in place of Q^tot in the DQN loss of (2) to get the auxiliary loss L_aux. Our total loss combines both real and auxiliary losses: L := (1 − λ)L_Q + λL_aux, where λ is a hyper-parameter. In practice, this procedure requires two additional passes through the network (with M^µ_O and M^µ_I as masks instead of M^µ) per training step. These additional passes can be parallelized by computing all necessary quantities in one batch on GPU. It is feasible to split entities into an arbitrary number i of random sub-groups without using more computation by sampling several disjunct vectors m_i and combining them in the same way as we combine m and ¬m in Equation 3 to form M_I and M_O. Doing so could potentially bias agents towards considering smaller subsets of entities.
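Putting the pieces together, the following is a hedged sketch of how the total loss might be assembled per training step. Here `compute_qtot` stands in for the attention-based utility and mixing forward pass and is a placeholder of ours, not the authors' API; it reuses `sample_factorization_masks` from the mask sketch above.

```python
import torch
import torch.nn.functional as F

def refil_loss(compute_qtot, batch, obs_mask, target_qtot, lam=0.5):
    """compute_qtot(batch, mask, mask_extra=None) -> Q_tot estimate [B]; target_qtot: y^tot from Eq. (2)."""
    q_full = compute_qtot(batch, obs_mask)                    # standard QMIX pass with M^mu
    m_in, m_out = sample_factorization_masks(obs_mask)        # random sub-group masks (Eq. (3))
    q_aux = compute_qtot(batch, m_in, mask_extra=m_out)       # imagined pass mixing the 2n factors
    loss_q = F.mse_loss(q_full, target_qtot)
    loss_aux = F.mse_loss(q_aux, target_qtot)                 # same target from the full scenario
    return (1 - lam) * loss_q + lam * loss_aux
```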
4 EXPERIMENTAL RESULTS
In our experiments, we aim to justify the main components of REFIL: 1) randomized sub-group factorization and 2) training as an auxiliary objective. We begin with experiments in a simple domain we construct such that agents’ decisions rely only on a subset of all entities, and that subset is known, so we can compare our approach to approaches that use this domain knowledge. Then, we move
on to testing on complex StarCraft micromanagement tasks to demonstrate our method’s ability to scale to complex domains.
4.1 GROUP MATCHING GAME
We construct a group matching game, pictured in Figure 3a, where each agent only needs to consider a subset of other agents to act effectively and we know that subset as ground truth (unlike in more complex domains such as StarCraft). As such, the task can be described as follows: agents (of which there are n_a) are randomly placed in one of n_c cells and assigned to one of n_g groups (represented by the different colors) at the start of each episode. They can choose from three actions: move clockwise, stay, and move counter-clockwise. Their ultimate goal is to be located in the same cell as the rest of their group members, at which point an episode ends. There is no restriction on which cell agents form a group in (e.g., both groups can form in the same cell). All agents share a reward of 2.5 when any group is completed (and an equivalent penalty for a formed group breaking) as well as a penalty of -0.1 for each time step in order to encourage agents to solve the task as quickly as possible. Agents' entity-state descriptions s^e include the cell that the agent is currently occupying as well as the group it belongs to (both one-hot encoded), and the task is fully observable. Notably, agents can act optimally while ignoring agents outside of their group.
Ground-truth knowledge of relevant entities enables us to disentangle two aspects of our approach: the use of entity-wise factorization in general and specifically using randomly selected factors. We construct two approaches that use this knowledge to build factoring masks MI and MO which are used in place of randomly sampled groups (otherwise the methods are identical to REFIL). REFIL (Fixed Oracle) directly uses the ground truth group assignments (different at each episode) to build masks. REFIL (Randomized Oracle) randomly samples sub-groups from the ground truth groups only, rather than from all possible entities. We additionally train REFIL and QMIX (Attention) (i.e., REFIL with no auxiliary loss).
Figure 3b shows that using domain knowledge alone does not significantly improve performance in this domain (QMIX (Attention) vs. REFIL (Fixed Oracle)). In fact our randomized factorization approach is able to surpass the use of domain knowledge. The randomization in REFIL appears therefore to be crucial. One hypothesis for this phenomenon is that randomization of sub-group factors enables better knowledge sharing across diverse settings (in this case unique group assignments). For example, the situation of two agents from the same group being located in adjacent cells occurs within all possible group assignments. If sampling randomly, our approach will occasionally sample these two agents alone in their own group. Even if the rest of the context in a given episode has never been seen by the model before, as long as this sub-scenario has been seen, the model has some indication of the value associated with each action. Even when restricting the set of entities to form sub-groups with to those that we know can be relevant to each agent (REFIL (Randomized Oracle)) we find that performance does not significantly improve. These results suggest that randomized sub-group formation for REFIL is a viable strategy (vs attempting to learn which entities are relevant and selecting sub-groups from there), and the main benefit of our approach is to promote generalization across scenarios by breaking value function predictions into reusable components.
4.2 STARCRAFT
We next test on the StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). The tasks in SMAC involve micromanagement of units in order to defeat a set of enemy units in battle. Specifically, we extend SMAC to settings with variable types and quantities of agents. We hypothesize that our approach is especially beneficial in this setting, as it should encourage of models to identify independence between entities and generalize to more diverse settings as a result. The dynamic setting requires some small modifications to SMAC, though we aim to change the environment as little as possible to maintain the challenging nature of the tasks. In the standard version of SMAC, both state and action spaces depend on a fixed number of agents and enemies, so our modifications, discussed in detail in the appendix, alleviate these problems.
In our tests we evaluate on three settings we call 3-8sz, 3-8csz, and 3-8MMM. 3-8sz pits symmetrical teams of between 3 and 8 agents against each other where the agents are a combination of Zealots and Stalkers (similar to the 2s3z and 3s5z tasks in the original SMAC). 3-8csz pits symmetrical teams of between 0 and 2 Colossi and 3 to 6 Stalkers/Zealots against each other (similar to 1c3s5z). 3-8MMM pits symmetrical teams of between 0 and 2 Medics and 3 to 6 Marines/Marauders against each other (similar to MMM and MMM2). As a sanity check, we additionally modify our approach to work with non-attention models such that we can test on the original SMAC tasks against existing methods. These results (located in the appendix) show that we can significantly improve on QMIX (previously state-of-the-art) in 2 of 3 settings tested.
Ablations and Baselines We introduce several ablations of our method, as well as adaptations of existing methods to handle variable sized inputs. These comparisons are summarized in Table 1. QMIX (Attention) is our method without the auxiliary loss. REFIL (VDN) is our approach using summation to combine all factors (a la Value Decomposition Networks (Sunehag et al., 2018)) rather than a non-linear monotonic mixing network. VDN (Attention) does not include the auxiliary loss and uses summation as factor mixing. QMIX (Mean Pooling) is QMIX (Attention) with attention layers replaced by mean pooling. We also test max
pooling but find the performance to be marginally worse than mean pooling. Importantly, for pooling layers we add entity-wise linear transformations prior to the pooling operations such that the total number of parameters is comparable to attention layers.
For baselines we consider some follow-up works to QMIX that attempt to improve the mixing network to be more expressive: QTRAN (Son et al., 2019) and Qatten (Yang et al., 2020). We additionally consider an alternative mechanism for aggregating information across variable sets of entities, known as Entity Message Passing (EMP) (Agarwal et al., 2019). We specifically use the restricted communication setting where agents can only communicate with agents they observe, and we set the number of message passing steps to 3. Finally, we compare to a method that builds on QMIX by attempting to learn dynamic roles that depend on the context each agent observes: ROMA (Wang et al., 2020a). For all approaches designed for the standard SMAC setting, we extend them with the same multi-head attention architecture that our approach uses.
Results and Discussion Our results on challenges in dynamic STARCRAFT settings can be found in Figure 4. We find that REFIL outperforms all ablations consistently in these settings. REFIL (VDN) performs much worse than our approach and VDN (Attention), highlighting the importance of the mixing network to handle contextual dependencies between entity partitions. Since the trajectory of a subset of entities can play out differently based on the surrounding context, it’s important for our factorization approach to recognize and adjust for these situations. The mixing network handles these dependencies by a) incorporating global state information into the mixing procedure, and b) mixing utilities in a non-linear monotonic fashion, rather than summing as in VDN. As such, the increased representative capacity of the QMIX mixing network, relative to VDN, is crucial. The use of mean-pooling in place of attention also performs poorly, indicating that attention is valuable for aggregating information from variable length sets of entities.
With respect to the baselines, we also find that REFIL consistently outperforms other methods, highlighting the unique challenge of learning in such dynamic settings where entity types vary at each episode. The improvements that ROMA, Qatten, and QTRAN show in other settings over QMIX do not appear to manifest themselves in this setting. Moreover, the entity aggregation method of EMP does not improve performance over the standard MHA module that we use, likely because EMP is most effective in settings where partial observability is a major hindrance to successful task completion. In this way, the goals of EMP and REFIL are
opposite, as the goal of REFIL is to ignore extraneous information when possible during training to improve knowledge transfer.
In order to understand the role of training as an auxiliary objective (rather than entirely replacing the objective) we vary the value of λ to interpolate between two modes: λ = 0 is simply QMIX (Attention), while λ = 1 trains exclusively with random factorization. Our results (Figure 5) show that, similar to regularization methods such as Dropout (Srivastava et al., 2014), there is a sweet spot where performance is maximized before collapsing catastrophically. Training exclusively with random factorization does not learn anything significant. This failure is likely due to the fact that we use the full context in our targets for learning with imagined scenarios as well as when executing our policies, so we still need to learn with it in training.
Finally, we consider a qualitative experiment to highlight the sort of common patterns that REFIL is able to leverage (Figure 6). Zealots (the only melee unit present) are weak to Colossi, so they learn to hang back and let other units engage first. Then, they jump in and intercept the enemy Zealots while all other enemy units are preoccupied, leading to a common pattern of a Zealot vs. Zealot skirmish (highlighted at t=15). REFIL enables behaviors learned in these types of sub-groups to be applied more effectively across all unique unit type configurations. By sampling groups from all entities randomly, we will occasionally end up with sub-groups that include only Zealots, and the value function predictions learned in these sub-groups can be applied not only to the episode at hand, but to any episode where a similar pattern emerges.
5 RELATED WORK
Multi-agent reinforcement learning (MARL) is a broad field encompassing cooperative (Foerster et al., 2018; Rashid et al., 2018; Sunehag et al., 2018), competitive (Bansal et al., 2018; Lanctot
et al., 2017), and mixed (Lowe et al., 2017; Iqbal & Sha, 2019) settings. This paper focuses on cooperative MARL with centralized training and decentralized execution (CTDE; Oliehoek et al., 2016). Our approach utilizes value function factorization, an approach aiming to simultaneously overcome the limitations of both joint (Hausknecht, 2016) and independent (Claus & Boutilier, 1998) learning paradigms. Early attempts at value function factorisation require a priori knowledge of suitable per-agent team reward decompositions or interaction dependencies. These include optimising over local compositions of individual Q-value functions learnt from individual reward functions (Schneider et al., 1999), as well as summing individual Q-functions with individual rewards before greedy joint action selection (Russell & Zimdars, 2003). Guestrin et al. (2002) and Kok & Vlassis (2006) factorise the total Q-value function using coordination graphs based on interaction dependencies for the task at hand, similarly to max-plus approaches (Kuyer et al., 2008; Pol & Oliehoek, 2016). Recent approaches from cooperative deep multi-agent RL allow for value factorisations to be learnt from experience with a single team reward function and no prior knowledge of interaction dependencies. Value-Decomposition Networks (VDN) (Sunehag et al., 2018) decompose the joint Q-value function into a sum of local utility functions used for greedy action selection. QMIX (Rashid et al., 2018) extends such additive decompositions to general monotonic functions. Several works extend QMIX to improve the expressivity of mixing functions (Son et al., 2019; Yang et al., 2020), learn latent embeddings to help exploration (Mahajan et al., 2019) or learn dynamic roles (Wang et al., 2020a), and encode knowledge of action semantics into network architectures (Wang et al., 2020b).
Several recent works have addressed the topic of generalization and transfer across environments with varying agent quantities, though the learning paradigms considered and assumptions made differ from our approach. Carion et al. (2019) devise an approach for assigning agents to tasks, assuming the existence of low-level controllers to carry out the tasks, and show that it can scale to much larger scenarios than those seen in training. Burden (2020) propose a transfer learning approach using convolutional neural networks and grid-based state representations to scale to scenarios of arbitrary size. Agarwal et al. (2019) introduce an entity message passing framework to enable agents to attend to specific entities, of which there may be a variable amount, based on their local context, similar to the multi-head attention module we use in our approach. Several approaches devise attention or graph-neural-network based models for handling variable sized inputs and focus on learning curricula to progress on increasingly large/challenging settings (Long et al., 2020; Baker et al., 2019; Wang et al., 2020c). In contrast to these curriculum learning approaches, we focus on training simultaneously on scenarios of varying sizes and specifically focus on developing a training paradigm for improving knowledge sharing across such settings to accelerate learning.
6 CONCLUSION
In this paper we consider a MARL setting where we aim to learn policies to control teams of agents in scenarios with varying types and quantities of entities. We propose REFIL, an approach that regularizes value functions to identify independence relationships between entities, in turn promoting generalization and knowledge transfer within and across multi-agent settings with varying quantities of agents. Our results show that our contributions yield performance improvements in complex cooperative tasks. In future work, we hope to explore alternative methods for learning independence relationships between entities beyond randomized partitions.
A ATTENTION LAYERS AND MODELS
Attention models have recently generated intense interest due to their ability to incorporate information across large contexts. Importantly for our purposes, they are able to process variable sized sets of inputs.
We now formally define the building blocks of our attention models. Given the input X , a matrix where the rows correspond to entities, we define an entity-wise feedforward layer as a standard fully connected layer that operates independently and identically over entities:
eFF(X; W, b) = XW + b^⊤,   X ∈ R^{n_x×d}, W ∈ R^{d×h}, b ∈ R^h   (4)
Now, we specify the operation that defines an attention head, given the additional inputs of S ⊆ Z_{[1,n_x]}, a set of indices that selects which rows of the input X are used to compute queries such that X_S ∈ R^{|S|×d}, and M, a binary observability mask specifying which entities each query entity can observe (i.e., M_{i,j} = 1 when i ∈ S can incorporate information from j ∈ Z_{[1,n_x]} into its local context):
Atten(S, X, M; W^Q, W^K, W^V) = softmax( mask( QK^⊤ / √h, M ) ) V ∈ R^{|S|×h}   (5)
Q = X_{S,∗} W^Q,   K = X W^K,   V = X W^V,   M ∈ {0, 1}^{|S|×n_x},   W^Q, W^K, W^V ∈ R^{d×h}   (6)
The mask(Y, M) operation takes two equal-sized matrices and fills the entries of Y with −∞ at the indices where M is equal to 0. After the softmax, these entries become zero, thus preventing the attention mechanism from attending to specific entities. This masking procedure is used in our case to uphold partial observability. Only one attention layer is permitted in the decentralized execution setting; otherwise information from unseen agents can be propagated through agents that are seen. W^Q, W^K, and W^V are all learnable parameters of this layer. Queries, Q, can be thought of as vectors specifying the type of information that an entity would like to select from others, while keys, K, can be thought of as specifying the type of information that an entity possesses, and finally, values, V, hold the information that is actually shared with other entities.
We define multi-head attention as the parallel computation of attention heads:
MHA(S, X, M) = concat( Atten(S, X, M; W^Q_j, W^K_j, W^V_j), j ∈ (1 . . . n_h) ).   (7)
The size of the parameters of an attention layer does not depend on the number of input entities. Furthermore, we receive an output vector for each query vector.
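To make the layer definitions above concrete, the following is a minimal PyTorch sketch of one masked attention head as in Eqs. (5)-(6); the class name, tensor shapes, and the use of PyTorch are our own illustrative choices rather than the paper's implementation.

```python
import math
import torch
import torch.nn as nn

# eFF(X; W, b) from Eq. (4) is simply a fully connected layer applied row-wise,
# e.g., nn.Linear(d, h) evaluated on every entity's row of X.

class MaskedAttentionHead(nn.Module):
    """One masked attention head over a variable-sized set of entities (Eqs. 5-6)."""
    def __init__(self, d, h):
        super().__init__()
        self.Wq = nn.Linear(d, h, bias=False)   # W^Q
        self.Wk = nn.Linear(d, h, bias=False)   # W^K
        self.Wv = nn.Linear(d, h, bias=False)   # W^V
        self.h = h

    def forward(self, X, S, M):
        # X: (n_x, d) entity representations; S: indices of query entities; M: (|S|, n_x) binary mask
        Q = self.Wq(X[S])                                    # (|S|, h)
        K, V = self.Wk(X), self.Wv(X)                        # (n_x, h)
        logits = Q @ K.t() / math.sqrt(self.h)               # (|S|, n_x)
        logits = logits.masked_fill(M == 0, float('-inf'))   # entries with M = 0 cannot be attended to
        attn = torch.softmax(logits, dim=-1)                 # masked entries become exactly zero
        return attn @ V                                      # (|S|, h): one output per query entity
```

Multi-head attention (Eq. 7) then amounts to running several such heads in parallel and concatenating their outputs.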
B AUGMENTING QMIX WITH ATTENTION
The standard QMIX algorithm relies on a fixed number of entities in three places: inputs of the agent-specific utility functions Q^a, inputs of the hypernetwork, and the number of utilities entering the mixing network, that is, the output of the hypernetwork. QMIX uses multi-layer perceptrons for which all these quantities have to be of fixed size. In order to adapt QMIX to the variable agent quantity setting, such that we can apply a single model across all episodes, we require components that accept variable sized sets of entities as inputs. By utilizing attention mechanisms, we can design components that are no longer dependent on a fixed number of entities taken as input. We define the following inputs: X^E_{e,i} := s^e_i, 1 ≤ i ≤ d, e ∈ E; M^µ_{a,e} := µ(s^a, s^e), a ∈ A, e ∈ E. The matrix X^E is the global state s reshaped into a matrix with a row for each entity, and M^µ is a binary observability matrix which enables decentralized execution, determining which entities are visible to each agent.
B.1 UTILITY NETWORKS
While the standard agent utility functions map a flat observation, whose size depends on the number of entities in the environment, to a utility for each action, our attention-utility functions can take in a variable sized set of entities and return a utility for each action. The attention layer output for agent a is computed as MHA({a}, X, M^µ), where X is a row-wise transformation of X^E (e.g., an entity-wise feedforward layer). If agents share parameters, the layer can be computed in parallel for all agents by providing A instead of {a}, which we do in practice.
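A hedged sketch of how such an attention-based utility network could be assembled is shown below; it reuses the MaskedAttentionHead sketch from the Appendix A example, omits the recurrent trajectory encoding for brevity, and all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class AttentionUtility(nn.Module):
    """Per-agent utilities Q^a(., u^a) over a variable-sized entity set (illustrative sketch)."""
    def __init__(self, d, h, n_actions):
        super().__init__()
        self.embed = nn.Linear(d, h)            # entity-wise transform of X^E before attention
        self.attn = MaskedAttentionHead(h, h)   # masked attention layer from the earlier sketch
        self.out = nn.Linear(h, n_actions)      # one utility per action

    def forward(self, X_E, agent_ids, M_mu):
        X = torch.relu(self.embed(X_E))         # applied row-wise; shape (n_entities, h)
        # all agents computed in parallel: queries are the agent rows, mask is M^mu
        H = self.attn(X, agent_ids, M_mu)       # (n_agents, h)
        return self.out(H)                      # (n_agents, n_actions)
```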
B.2 GENERATING DYNAMIC SIZED MIXING NETWORKS
Another challenge in devising a QMIX algorithm for variable agent quantities is to adapt the hypernetworks that generate weights for the mixing network. Since the mixing network takes in utilities from each agent, we must generate feedforward mixing network parameters that change in size depending on the number of agents present, while incorporating global state information. Conveniently, the number of output vectors of a MHA layer depends on the cardinality of input set S and we can therefore generate mixing parameters of the correct size by using S = A and concatenating the vectors to form a matrix with one dimension size depending on the number of agents and the other depending on the number of hidden dimensions. Attention-based QMIX (QMIX (Attention)) trains these models using the standard DQN loss in Equation 2.
Our two-layer mixing network requires the following parameters to be generated: W_1 ∈ R_+^(|A| × h^m), b_1 ∈ R^(h^m), w_2 ∈ R_+^(h^m), b_2 ∈ R, where h^m is the hidden dimension of the mixing network and |A| is the number of agents. Note from Eq. (5) that the output size of the layer depends on the size of the query set. As such, using attention layers, we can generate a matrix of size |A| × h^m by specifying the set of agents, A, as the set of queries S from Eq. (5). We do not need observability masking since hypernetworks are only used during training and can be fully centralized. For each of the four components of the mixing network (W_1, b_1, w_2, b_2), we introduce a hypernetwork that generates parameters of the correct size. Thus, for the parameters that are vectors (b_1 and w_2), we average the matrix generated by the attention layer across the |A|-sized dimension, and for b_2, we average all elements. This procedure enables the dynamic generation of mixing networks whose input size varies with the number of agents. Assuming q = [Q^1(τ^1, u^1), . . . , Q^n(τ^n, u^n)], Q^tot is computed as:

Q^tot(s, τ, u) = σ( (q^T W_1) + b_1^T ) w_2 + b_2,   (8)
where σ is an ELU nonlinearity (Clevert et al., 2015).
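The following sketch illustrates one way the attention hypernetworks and the dynamically sized mixing computation of Eq. (8) could be wired together. It builds on the MaskedAttentionHead sketch from the Appendix A example; enforcing non-negativity of W_1 and w_2 with an absolute value and the averaging choices follow the text, but all names and details are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHyperMixer(nn.Module):
    """Generates a mixing network sized to the current number of agents (sketch of Eq. 8)."""
    def __init__(self, d, hm):
        super().__init__()
        # one attention hypernetwork per mixing-net component; each outputs an (|A| x hm) matrix
        self.hyper_W1 = MaskedAttentionHead(d, hm)
        self.hyper_b1 = MaskedAttentionHead(d, hm)
        self.hyper_w2 = MaskedAttentionHead(d, hm)
        self.hyper_b2 = MaskedAttentionHead(d, hm)

    def forward(self, q, X_E, agent_ids):
        # q: (|A|,) per-agent utilities; X_E: (n_entities, d) global state; fully centralized, so no mask
        full = torch.ones(len(agent_ids), X_E.shape[0])
        W1 = torch.abs(self.hyper_W1(X_E, agent_ids, full))              # (|A|, hm), non-negative
        b1 = self.hyper_b1(X_E, agent_ids, full).mean(dim=0)             # (hm,), averaged over |A|
        w2 = torch.abs(self.hyper_w2(X_E, agent_ids, full)).mean(dim=0)  # (hm,), non-negative
        b2 = self.hyper_b2(X_E, agent_ids, full).mean()                  # scalar
        hidden = F.elu(q @ W1 + b1)                                      # (hm,)
        return hidden @ w2 + b2                                          # scalar Q_tot
```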
C ENVIRONMENT DETAILS
C.1 STARCRAFT WITH VARIABLE AGENTS AND ENEMIES
The standard version of SMAC loads map files with pre-defined and fixed unit types, where the global state and observations are flat vectors with segments corresponding to each agent and enemy. Partial observability is implemented by zeroing out segments of the observations corresponding to unobserved agents. The size of these vectors changes depending on the number of agents placed in the map file. Furthermore, the action space consists of movement actions as well as separate actions to attack each enemy unit. As such the action space also changes as the number of agents changes.
Our version loads empty map files and programmatically generates agents, allowing greater flexibility in terms of the units present to begin each episode. The global state is split into a list of equal-sized entity descriptor vectors (for both agents and enemies), and partial observability is handled by generating a matrix that shows what entities are visible to each agent. The variable-sized action space is handled by randomly assigning each enemy a tag at the beginning of each episode and designating an action to attack each possible tag, of which there are a maximum number (i.e. the maximum possible number of enemies across all initializations). Agents are able to see the tag of the enemies they observe and can select the appropriate action that matches this tag in order to attack a specific enemy.
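A small illustrative sketch of the tag-based attack-action scheme described above; the function names and the exact mapping from actions to tags are our assumptions.

```python
import random

def assign_enemy_tags(enemy_ids, max_enemies):
    """Randomly assign each enemy a distinct tag in [0, max_enemies) at episode start (sketch)."""
    tags = random.sample(range(max_enemies), len(enemy_ids))
    tag_of = dict(zip(enemy_ids, tags))          # enemy -> tag (visible to observing agents)
    enemy_of = {t: e for e, t in tag_of.items()} # tag -> enemy (used to resolve attack actions)
    return tag_of, enemy_of

# During an episode, attack action k targets the enemy carrying tag k; the action is only
# available (unmasked) if that tag belongs to a living enemy visible to the acting agent.
```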
D EXPERIMENTAL DETAILS
Our experiments were performed on a desktop machine with a 6-core Intel Core i7-6800K CPU and 3 NVIDIA Titan Xp GPUs, and a server with 2 16-core Intel Xeon Gold 6154 CPUs and 10 NVIDIA Titan Xp GPUs. Each experiment is run with 8 parallel environments for data collection and a single GPU. REFIL takes about 24 hours to run for 10M steps on STARCRAFT. QMIX (Attention) takes
about 16 hours for the same number of steps on STARCRAFT. Reported times are on the desktop machine and the server runs approximately 15% faster due to more cores being available for running the environments in parallel.
E HYPERPARAMETERS
Hyperparameters were based on the PyMARL (Samvelyan et al., 2019) implementation of QMIX and are listed in Table 2. All hyperparameters are the same in all STARCRAFT settings. Since we train for 10 million timesteps (as opposed to the typical 2 million in standard SMAC), we extend the epsilon annealing period (for epsilon-greedy exploration) from 50,000 steps to 500,000 steps. For hyperparameters new to our approach (hidden dimensions of attention layers, number of attention heads, λ weighting of imagined loss), the specified values in Table 2 were the first values tried, and we found them to work well. The robustness of our approach to hyperparameter settings, as well as the fact that we do not tune hyperparameters per environment, is a strong indicator of the general applicability of our method.
F ADDITIONAL RESULTS
We test a modified non-attention version of our approach along with state of the art methods on the standard version of SMAC, where entity types are constant at the start of each episode. Since the number and type of agents and enemies is constant at each episode, observations and states can be represented as fixed-size vectors. We can thus use MLPs as models (as is standard in the literature) for these tasks and adapt our approach to suit this setting while comparing to unmodified versions of existing approaches. Rather than masking an attention mechanism, we simply zero out the features in the observations and states that correspond to entities we would like to mask out. These experiments are performed in order to compare our approach to results validated in the literature.
We compare against QMIX (Rashid et al., 2018) and VDN (Sunehag et al., 2018), as well as an ablation of our approach that uses additive mixing (a la VDN) of entity partition factors instead of a mixing network which we call REFIL (VDN). We use the architectures and hyperparameters from the QMIX (Rashid et al., 2018) paper in these settings.
Results can be found in Figure 7. While we expect REFIL to be most effective in the setting of varying types and quantities of agents, we still find that it improves on QMIX in 2 of the 3 scenarios tested. In the standard SMAC benchmarks, we find our approach is able to match or outperform the best baseline across all settings. Specifically, our factorization method (which builds on QMIX) improves the performance of QMIX in 2 of 3 settings tested. As far as we are aware, REFIL outperforms all reported results in the literature on “2c vs 64 zg” (classified as a “hard” task in SMAC). The relative improvement over QMIX, combined with the fact that it does not ever appear to hurt performance, indicates that the benefits of our method are not limited to settings with varying types and quantities of agents, though the positive effects are more pronounced there.
1. What is the focus of the paper regarding multi-agent reinforcement learning?
2. What are the strengths of the proposed approach, particularly in handling varying types and numbers of agents?
3. What are the concerns regarding the reliance on Q_O and its implications for disentangling value predictions?
4. How effective is the random sampling scheme in real-world scenarios like football, where some agents have specific roles or objectives?
5. Are there any additional experiments or modifications that could enhance the effectiveness and practicality of the proposed method?
Review
Summary: This paper proposes to incorporate a masked attention mechanism in QMIX for value function factorization, to disentangle value predictions from irrelevant agents/entities. The masking is based on randomly sampling from the whole set of agents to form random subsets, based on which it can compute within-group and out-of-group Q-functions. The method is able to handle varying types and numbers of agents. The paper conducts experiments on a simple game to understand the effect, and then tests on 3 SMAC games, which show the effectiveness of the proposed REFIL method.
Strong points:
The paper is well-written and clear.
The paper studies an important topic in MARL, i.e., how to deal with varying types and number of agents, and propose a simple yet effective approach, which incorporates attention mechanism in QMIX with random masking.
Experiments on several games illustrate the effectiveness of the method, with proper ablation study to understand the importance of each component (attention, mixing network, random masking).
Concerns: The main focus of the paper is to “disentangle value predictions from irrelevant entities”. However, REFIL relies not only on Q_I (the within-group Q-function) but also on Q_O (the out-of-group Q-function). If the agent successfully learns to neglect irrelevant entities, focusing on Q_I would be enough, without triggering the additional computation of an unnecessary Q_O. Could the authors better explain this? In addition, consider the breakaway example: since the attacker only needs to focus on the goalkeeper, is the random sampling scheme over all agents effective compared with counterparts that only need to focus on the goalkeeper? Could the authors conduct additional experiments on football to better support the claim?
ICLR | Title
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
Abstract
Real world multi-agent tasks often involve varying types and quantities of agents and non-agent entities; however, agents within these tasks rarely need to consider all others at all times in order to act effectively. Factored value function approaches have historically leveraged such independences to improve learning efficiency, but these approaches typically rely on domain knowledge to select fixed subsets of state features to include in each factor. We propose to utilize value function factoring with random subsets of entities in each factor as an auxiliary objective in order to disentangle value predictions from irrelevant entities. This factoring approach is instantiated through a simple attention mechanism masking procedure. We hypothesize that such an approach helps agents learn more effectively in multi-agent settings by discovering common trajectories across episodes within sub-groups of agents/entities. Our approach, Randomized Entity-wise Factorization for Imagined Learning (REFIL), outperforms all strong baselines by a significant margin in challenging StarCraft micromanagement tasks.
N/A
1 INTRODUCTION
Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents, antagonist enemies or other entities. Agents, however, can often select their optimal actions while ignoring a subset of agents/entities. For example, in the sport of soccer, a “breakaway” occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score (see Figure 1). In this situation, only the opposing goalkeeper is immediately relevant to the attacker’s success, so the attacker can safely ignore players other than the goalkeeper for the time being. By ignoring irrelevant context, the attacker can generalize this experience better to its next breakaway. Furthermore, soccer takes many forms, from casual 5 vs. 5 to full scale 11 vs. 11 matches, and breakaways occur in all. If agents can identify independent patterns of
behavior such as breakaways, they should be able to learn more efficiently as well as share their experiences across all forms of soccer.
Value function factoring approaches attempt to leverage independences between agents, such as those in our soccer example, by learning value functions as a combination of independent factors that depend on disjunct subsets of the state and action spaces (Koller & Parr, 1999). These subsets are typically fixed in advance using domain knowledge about the problem at hand, and thus are not scalable to complex domains where dependencies are unknown and may shift over time. Recent approaches in cooperative deep multi-agent reinforcement learning (MARL) factor value functions into separate components for each agent’s action and observation space in order to enable decentralized execution (e.g., VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018)). These approaches learn a utility function for each agent that only depends on the agent’s own action and its observations. The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the
global Q. These approaches are able to effectively leverage independence between agents’ local actions and observations, however, we note that observable entities are provided by the environment and are not all necessarily relevant to an agent’s value function.
We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities. Unlike classic works which factor the state or observation spaces, our work does not depend on fixed subsets of features designated through domain knowledge. Instead, we propose to randomly select sub-groups of observed entities and “imagine” the predicted utilities within these groups for each agent. These terms will not account for potential interactions outside of the groups, so we include additional factors that estimate the effect of the entities outside of each sub-group on each agent’s utility. In order to estimate the true returns, we combine all factors using a mixing network (as in QMIX, Rashid et al., 2018), which allows our model to weight factors based on the full state context. We hypothesize this approach is beneficial for two reasons: 1) randomly partitioning entities and predicting returns from disjunct factors allows our model to explore all possible independence relationships among agents and entities, teaching agents to ignore irrelevant context when possible and 2) by teaching our models when they can ignore irrelevant context, they will learn more efficiently across varied settings that share common patterns of behavior, such as breakaways in soccer. The loss for training randomized factorization is added to the QMIX loss (i.e., using full observations) as an auxiliary objective. Our reasoning is again twofold: 1) we must learn the true returns to use as a target prediction for a Q-learning loss. 2) we do not know a priori which entities are unnecessary and thus need to learn policies that act on full observations.
Our entity-wise factoring procedure can be implemented easily in practice by using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize their behavior across settings (5 vs. 5, 11 vs. 11, etc.) if they ignore irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding it attains improved performance over state-of-the-art methods.
2 BACKGROUND AND PRELIMINARIES
In this work, we consider the decentralized partially observable Markov decision process (DecPOMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019).
Dec-POMDPs with Entities are described as tuples (S, U, O, P, r, E, A, Φ, µ). E is the set of entities in the environment. Each entity e has a state representation s^e, and the global state is the set s = {s^e | e ∈ E} ∈ S. Some entities can be agents a ∈ A ⊆ E. Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity consist of two parts: s^e = [f^e, φ^e], where f^e represents the description of an entity's current state (e.g., position, orientation, velocity, etc.) while φ^e ∈ Φ represents the entity's type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity's type affects the state dynamics as well as the reward function and, importantly, it remains fixed for the duration of the entity's existence. Not all entities may be visible to each agent, so we define a binary observability mask µ(s^a, s^e) ∈ {1, 0}, where agents can always observe themselves: µ(s^a, s^a) = 1, ∀a ∈ A. Thus, an agent's observation is defined as o^a = {s^e | µ(s^a, s^e) = 1, e ∈ E} ∈ O. Each agent a can execute actions u^a, and the joint action of all agents is denoted as u = {u^a | a ∈ A} ∈ U. P is the state transition function which defines the probability P(s′ | s, u). r(s, u) is the reward function which maps the global state and joint actions to a single scalar reward.
We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft) in which case they no longer affect transitions and rewards. Since s and u are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present {φe|e ∈ E} is fixed across episodes. We are particularly interested in cases where quantity and types of entities are varied between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases.
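For illustration, a possible data-structure view of entity states and the observability mask µ is given below; the field names and the range-based visibility rule are assumptions for illustration, not part of the formalism.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Entity:
    features: np.ndarray     # f^e: e.g., position, orientation, velocity
    type_onehot: np.ndarray  # phi^e: fixed for the entity's lifetime
    is_agent: bool

    @property
    def state(self) -> np.ndarray:
        return np.concatenate([self.features, self.type_onehot])  # s^e = [f^e, phi^e]

def mu(agent: Entity, other: Entity, sight_range: float = 9.0) -> int:
    """Binary observability mask; here visibility is range-based (an assumed rule)."""
    if agent is other:
        return 1  # agents always observe themselves
    dist = np.linalg.norm(agent.features[:2] - other.features[:2])
    return int(dist <= sight_range)
```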
Learning for Dec-POMDPs We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP. Q-learning is specifically concerned with learning an accurate action-value function Q^tot (defined below), and using this function to select the actions that maximize expected returns. The optimal Q-function for the Dec-POMDP setting is defined as:

Q^tot(s, u) := E[ Σ_{t=0}^{∞} γ^t r(s_t, u_t) | s_0 = s, u_0 = u, s_{t+1} ∼ P(·|s_t, u_t), u_{t+1} = arg max Q^tot(s_{t+1}, ·) ]
            = r(s, u) + γ E[ max Q^tot(s′, ·) | s′ ∼ P(·|s, u) ].   (1)
Partial observability is typically handled by using the history of actions and observations as a proxy for state, typically processed by a recurrent neural network (RNN; Hausknecht & Stone, 2015): Q^tot_θ(τ_t, u_t) ≈ Q^tot(s_t, u_t), where the trajectory (i.e., action-observation history) is τ^a_t := (o^a_0, u^a_0, . . . , o^a_t) and τ_t := {τ^a_t}_{a∈A}.
Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions that are trained by minimizing the loss function:

L(θ) := E[ ( y^tot_t − Q^tot_θ(τ_t, u_t) )^2 | (τ_t, u_t, r_t, τ_{t+1}) ∼ D ],  with  y^tot_t := r_t + γ Q^tot_θ̄( τ_{t+1}, arg max Q^tot_θ(τ_{t+1}, ·) ),   (2)

where θ̄ are the parameters of a target network that is copied from θ periodically to improve stability (Mnih et al., 2015) and D is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically ε-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize Q^tot_θ for the target network Q^tot_θ̄.
Value Function Factorization Centralized training for decentralized execution (CTDE) has been a major focus in recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, each depending only on a single agent's history of actions and observations, Q^a(τ^a, u^a). This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint action-value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: Q^tot(τ, u) := Σ_a Q^a(τ^a, u^a). QMIX (Rashid et al., 2018) extends this approach to use a more expressive factorization. We describe QMIX and how we build our randomized factorization approach on top of it in Section 3.1.
Attention Mechanisms for MARL Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable sized sets of fixed length vectors (in our case entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context.
We define X as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state s can be represented in matrix form as X^E, where X^E_{e,∗} = s^e. Our models consist of entity-wise feedforward layers (denoted as eFF(X)) and multi-head attention layers (denoted as MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities. Multi-head attention layers serve as a mechanism to integrate information across entities. These take in three arguments: the set of agents A for which to compute an output vector, the matrix X ∈ R^(|E| × d), where d is the dimensionality of the input representations, and a mask M ∈ R^(|A| × |E|). The layer outputs a matrix H ∈ R^(|A| × h), where h is the hidden dimension of the layer. The row H_{a,∗} corresponds to a weighted sum of linearly transformed representations from all entities selected by agent a. Importantly, if the entry of the mask M_{a,e} = 0, then entity e's representation cannot be included in H_{a,∗}. Masking serves two important purposes for us: 1) it enables decentralized execution by providing the mask M^µ_{a,e} = µ(s^a, s^e), such that agents can only see entities observable by them in the environment, and 2) it enables us to "imagine" the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable, and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX, are described in detail in the Appendix.
3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING
We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent's choice of optimal actions is often independent of a subset of its observed entities (cf. soccer breakaway example from Section 1), in addition to the choice of other agents' actions. Furthermore, we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents, as they are better able to identify shared patterns of behavior (e.g., breakaways exist in all forms of soccer). We do not know a priori which entities an agent can disregard, so we must consider all possible sub-groups of entities. As such, we propose to factor value functions by imagining returns in random sub-groups.
3.1 METHOD
QMIX (Rashid et al., 2018) relaxes the representational constraints of VDN (Sunehag et al., 2018) by allowing the joint value function Q^tot to be a non-linear monotonic function with respect to the agent-specific utilities Q^a: Q^tot = g(Q^1(τ^1, u^1; θ_Q), . . . , Q^|A|(τ^|A|, u^|A|; θ_Q); θ_g). The parameters of the mixing function θ_g are generated by a hyper-network (Ha et al., 2017) conditioning on the global state s: θ_g = h(s; θ_h). Every state can therefore have a different mixing function, but the mixing's monotonicity maintains decentralizability, as agents can maximize Q^tot without communication. All parameters θ = {θ_Q, θ_h} are trained with the DQN loss of Equation 2. We extend QMIX with attention layers both to encode the variable-sized sets of entities observed by each per-agent utility Q^a and to mix the utilities of all agents a ∈ A. Partial observability is implemented by a mask M^µ_{a,e} = µ(s^a, s^e), ∀a ∈ A, ∀e ∈ E, that is provided to attention layers as described in Section 2. Building on QMIX, for each agent we generate a separate utility that only observes the state features of agents within its randomly selected sub-group, Q^a_I(τ^a_I, u^a), as well as a term that accounts for interactions outside of its group, Q^a_O(τ^a_O, u^a), then mix these 2n utilities (2 for each agent) to form Q^tot. Importantly, since the mixing network is generated from the full state context, our model can weight factors contextually. For example, if agent a's sampled sub-group contains all relevant information to compute its utility such that Q^a_I ≈ Q^a, then the mixing network can weight Q^a_I more heavily than Q^a_O. Otherwise, the network learns to balance Q^a_I and Q^a_O for each agent, using the full state as context, in order to estimate Q^tot. We train with these random factorizations in addition to the original QMIX objective. Treating factorization as an auxiliary task, rather than as a representational constraint, allows our model to retain the expressivity of QMIX value functions (without sub-group partitions) while exploiting the potential independence between agents and other entities. We note that our auxiliary objective is only used in training; execution in the environment does not use random factorization.
3.2 IMPLEMENTATION
The mechanism behind our entity-wise factorization relies on a simple attention masking procedure. In order to compute in-group utilities Q^a_I(τ^a_I, u^a) and out-of-group utilities Q^a_O(τ^a_O, u^a), we first randomly partition all entities in E into two disjunct groups (held fixed for an episode), indicated by a random binary vector m ∈ {0, 1}^|E| (we first draw p ∈ (0, 1) uniformly, followed by |E| independent draws from a Bernoulli(p) distribution). The entry m_e determines whether entity e is in the first group, and we can take the negation ¬m_e to represent whether e is in the second group. The subset corresponding to all agents is denoted as m_A := [m_a]_{a∈A}. From these vectors, we can construct attention masks M ∈ R^(|A| × |E|). For example, using the mask M_1 = m_A m^T would prevent agents in the first group from "seeing" outside their group, since M_1[a, e] = 1 only if agent a and entity e are in the same group. This can be added to a similarly produced mask M_2 = ¬m_A ¬m^T to create M_I, a mask that only allows all agents to see the entities within their distinct groups. We construct masks for agents to see within (M_I) and out of (M_O) their groups, then combine these with the observability mask M^µ as:

M^µ_I := M^µ ∧ M_I,   M^µ_O := M^µ ∧ M_O,   with M_I := m_A m^T ∨ ¬m_A ¬m^T,   M_O := ¬M_I.   (3)
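A minimal sketch of the random partition and the mask construction in Eq. (3) is given below, assuming agents occupy the first |A| rows of the entity matrix; names and tensor conventions are ours.

```python
import torch

def random_partition_masks(n_agents, n_entities, M_mu):
    """Build M^mu_I and M^mu_O from a random entity partition (Eq. 3)."""
    p = torch.rand(1)
    m = (torch.rand(n_entities) < p).float()   # m_e ~ Bernoulli(p), one entry per entity
    m_A = m[:n_agents].unsqueeze(1)            # (|A|, 1): group membership of the agents
    m_E = m.unsqueeze(0)                       # (1, |E|)
    M_I = m_A @ m_E + (1 - m_A) @ (1 - m_E)    # 1 iff agent and entity share a group
    M_O = 1 - M_I
    # elementwise product of binary masks implements the logical AND with M^mu
    return M_mu * M_I, M_mu * M_O
```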
The entry M^µ_I[a, e] determines both whether agent a can see entity e and whether entity e is in agent a's group; the entry M^µ_O[a, e] is the same but for entities out of a's group. We can use these masks in our attention mechanisms to compute Q^a_I(τ^a_I, u^a), which represents the predicted utility of agent a within its group, and Q^a_O(τ^a_O, u^a), a residual term that accounts for the utility of interactions that a would have with the other group.

Given each agent's predicted utility factors for both in-group and out-of-group, we combine these into a Q^tot such that we can use the target from the full scenario (y^tot_t in (2)) using a mixing network as in QMIX. This network's first layer typically takes n inputs, one for each agent. Since we have 2n factors, we simply concatenate two generated versions of the input layer (using M_I and M_O). We then apply the network to the concatenated utilities Q^a_I(τ^a_I, u^a) and Q^a_O(τ^a_O, u^a) of all agents a to compute the predicted value Q^tot_aux. This procedure is visualized in Figure 2 and described in more detail in the Appendix.
Our novel approach REFIL uses Q^tot_aux in place of Q^tot in the DQN loss of (2) to get the auxiliary loss L_aux. Our total loss combines both real and auxiliary losses: L := (1 − λ)L_Q + λL_aux, where λ is a hyper-parameter. In practice, this procedure requires two additional passes through the network (with M^µ_O and M^µ_I as masks instead of M^µ) per training step. These additional passes can be parallelized by computing all necessary quantities in one batch on the GPU. It is feasible to split entities into an arbitrary number i of random sub-groups without using more computation by sampling several disjunct vectors m_i and combining them in the same way as we combine m and ¬m in Equation 3 to form M_I and M_O. Doing so could potentially bias agents towards considering smaller subsets of entities.
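A high-level sketch of how the combined objective could be computed per training step is shown below; the model interfaces (td_target, qtot, qtot_aux) and batch fields are assumed here purely for illustration.

```python
def refil_loss(model, batch, lam=0.5, gamma=0.99):
    """L = (1 - lam) * L_Q + lam * L_aux (sketch; tensors and model interfaces assumed)."""
    y = model.td_target(batch, gamma)                  # y^tot_t from the full-observation pass

    # standard QMIX pass with the ordinary observability mask M^mu
    q_full = model.qtot(batch, mask=batch.M_mu)
    loss_q = ((q_full - y) ** 2).mean()

    # "imagined" pass: in-group and out-of-group utilities under M^mu_I / M^mu_O,
    # concatenated (2n factors) and mixed into Q_tot_aux, regressed onto the same target
    q_aux = model.qtot_aux(batch, mask_in=batch.M_mu_I, mask_out=batch.M_mu_O)
    loss_aux = ((q_aux - y) ** 2).mean()

    return (1 - lam) * loss_q + lam * loss_aux
```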
4 EXPERIMENTAL RESULTS
In our experiments, we aim to justify the main components of REFIL: 1) randomized sub-group factorization and 2) training as an auxiliary objective. We begin with experiments in a simple domain we construct such that agents’ decisions rely only on a subset of all entities, and that subset is known, so we can compare our approach to approaches that use this domain knowledge. Then, we move
on to testing on complex StarCraft micromanagement tasks to demonstrate our method’s ability to scale to complex domains.
4.1 GROUP MATCHING GAME
We construct a group matching game, pictured in Figure 3a, where each agent only needs to consider a subset of other agents to act effectively and we know that subset as ground-truth (unlike in more complex domains such as StarCraft). As such, the task can be described as follows: Agents (of which there are na) are randomly placed in one of nc cells and assigned to one of ng groups (represented by the different colors) at the start of each episode. They can choose from three actions: move clockwise, stay, and move counter-clockwise. Their ultimate goal is to be located in the same cell as the rest of their group members, at which point an episode ends. There is no restriction on which cell agents form a group in (e.g., both groups can form in the same cell). All agents share a reward of 2.5 when any group is completed (and an equivalent penalty for a formed group breaking) as well as a penalty of -0.1 for each time step in order to encourage agents to solve the task as quickly as possible. Agents’ entity-state descriptions se include the cell that the agent is currently occupying as well as the group it belongs to (both one-hot encoded), and the task is fully-observable. Notably, agents can act optimally while ignoring agents outside of their group.
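A compact sketch of the game's dynamics and reward is given below, under our interpretation that an episode ends once every group is complete; all names and defaults are assumptions.

```python
import numpy as np

class GroupMatchingGame:
    """Agents on a ring of n_c cells must gather with their own group (illustrative sketch)."""
    def __init__(self, n_agents=6, n_cells=4, n_groups=2, rng=None):
        self.n_agents, self.n_cells, self.n_groups = n_agents, n_cells, n_groups
        self.rng = rng or np.random.default_rng()

    def reset(self):
        self.cell = self.rng.integers(self.n_cells, size=self.n_agents)
        self.group = self.rng.integers(self.n_groups, size=self.n_agents)
        self.formed = self._formed_groups()
        return self.cell.copy(), self.group.copy()

    def _formed_groups(self):
        # a group is "complete" when all of its members occupy the same cell
        return {g for g in range(self.n_groups)
                if len(set(self.cell[self.group == g])) == 1}

    def step(self, actions):
        # actions in {-1, 0, +1}: counter-clockwise / stay / clockwise
        self.cell = (self.cell + np.asarray(actions)) % self.n_cells
        formed = self._formed_groups()
        # shared reward: +2.5 per newly completed group, -2.5 per broken group, -0.1 per step
        reward = -0.1 + 2.5 * len(formed - self.formed) - 2.5 * len(self.formed - formed)
        self.formed = formed
        done = len(formed) == self.n_groups
        return reward, done
```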
Ground-truth knowledge of relevant entities enables us to disentangle two aspects of our approach: the use of entity-wise factorization in general and specifically using randomly selected factors. We construct two approaches that use this knowledge to build factoring masks MI and MO which are used in place of randomly sampled groups (otherwise the methods are identical to REFIL). REFIL (Fixed Oracle) directly uses the ground truth group assignments (different at each episode) to build masks. REFIL (Randomized Oracle) randomly samples sub-groups from the ground truth groups only, rather than from all possible entities. We additionally train REFIL and QMIX (Attention) (i.e., REFIL with no auxiliary loss).
Figure 3b shows that using domain knowledge alone does not significantly improve performance in this domain (QMIX (Attention) vs. REFIL (Fixed Oracle)). In fact our randomized factorization approach is able to surpass the use of domain knowledge. The randomization in REFIL appears therefore to be crucial. One hypothesis for this phenomenon is that randomization of sub-group factors enables better knowledge sharing across diverse settings (in this case unique group assignments). For example, the situation of two agents from the same group being located in adjacent cells occurs within all possible group assignments. If sampling randomly, our approach will occasionally sample these two agents alone in their own group. Even if the rest of the context in a given episode has never been seen by the model before, as long as this sub-scenario has been seen, the model has some indication of the value associated with each action. Even when restricting the set of entities to form sub-groups with to those that we know can be relevant to each agent (REFIL (Randomized Oracle)) we find that performance does not significantly improve. These results suggest that randomized sub-group formation for REFIL is a viable strategy (vs attempting to learn which entities are relevant and selecting sub-groups from there), and the main benefit of our approach is to promote generalization across scenarios by breaking value function predictions into reusable components.
4.2 STARCRAFT
We next test on the StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). The tasks in SMAC involve micromanagement of units in order to defeat a set of enemy units in battle. Specifically, we extend SMAC to settings with variable types and quantities of agents. We hypothesize that our approach is especially beneficial in this setting, as it should encourage our models to identify independence between entities and generalize to more diverse settings as a result. The dynamic setting requires some small modifications to SMAC, though we aim to change the environment as little as possible to maintain the challenging nature of the tasks. In the standard version of SMAC, both state and action spaces depend on a fixed number of agents and enemies, so our modifications, discussed in detail in the appendix, alleviate these problems.
In our tests we evaluate on three settings we call 3-8sz, 3-8csz, and 3-8MMM. 3-8sz pits symmetrical teams of between 3 and 8 agents against each other where the agents are a combination of Zealots and Stalkers (similar to the 2s3z and 3s5z tasks in the original SMAC). 3-8csz pits symmetrical teams of between 0 and 2 Colossi and 3 to 6 Stalkers/Zealots against each other (similar to 1c3s5z). 3-8MMM pits symmetrical teams of between 0 and 2 Medics and 3 to 6 Marines/Marauders against each other (similar to MMM and MMM2). As a sanity check, we additionally modify our approach to work with non-attention models such that we can test on the original SMAC tasks against existing methods. These results (located in the appendix) show that we can significantly improve on QMIX (previously state-of-the-art) in 2 of 3 settings tested.
Ablations and Baselines We introduce several ablations of our method, as well as adaptations of existing methods to handle variable sized inputs. These comparisons are summarized in Table 1. QMIX (Attention) is our method without the auxiliary loss. REFIL (VDN) is our approach using summation to combine all factors (a la Value Decomposition Networks (Sunehag et al., 2018)) rather than a non-linear monotonic mixing network. VDN (Attention) does not include the auxiliary loss and uses summation as factor mixing. QMIX (Mean Pooling) is QMIX (Attention) with attention layers replaced by mean pooling. We also test max
pooling but find the performance to be marginally worse than mean pooling. Importantly, for pooling layers we add entity-wise linear transformations prior to the pooling operations such that the total number of parameters is comparable to attention layers.
For baselines we consider some follow-up works to QMIX that attempt to improve the mixing network to be more expressive: QTRAN (Son et al., 2019) and Qatten (Yang et al., 2020). We additionally consider an alternative mechanism for aggregating information across variable sets of entities, known as Entity Message Passing (EMP) (Agarwal et al., 2019). We specifically use the restricted communication setting where agents can only communicate with agents they observe, and we set the number of message passing steps to 3. Finally, we compare to a method that builds on QMIX by attempting to learn dynamic roles that depend on the context each agent observes: ROMA (Wang et al., 2020a). For all approaches designed for the standard SMAC setting, we extend them with the same multi-head attention architecture that our approach uses.
Results and Discussion Our results on challenges in dynamic STARCRAFT settings can be found in Figure 4. We find that REFIL outperforms all ablations consistently in these settings. REFIL (VDN) performs much worse than our approach and VDN (Attention), highlighting the importance of the mixing network to handle contextual dependencies between entity partitions. Since the trajectory of a subset of entities can play out differently based on the surrounding context, it’s important for our factorization approach to recognize and adjust for these situations. The mixing network handles these dependencies by a) incorporating global state information into the mixing procedure, and b) mixing utilities in a non-linear monotonic fashion, rather than summing as in VDN. As such, the increased representative capacity of the QMIX mixing network, relative to VDN, is crucial. The use of mean-pooling in place of attention also performs poorly, indicating that attention is valuable for aggregating information from variable length sets of entities.
With respect to the baselines, we also find that REFIL consistently outperforms other methods, highlighting the unique challenge of learning in such dynamic settings where entity types are variable at each episode. The improvements that ROMA, Qatten, and QTRAN show over QMIX in other settings do not appear to manifest themselves in this setting. Moreover, the entity aggregation method of EMP does not improve performance over the standard MHA module that we use, likely due to the fact that EMP is most effective in settings where partial observability is a major hindrance to successful task completion. In this way, the aims of EMP and REFIL are opposite, as the goal of REFIL is to ignore extraneous information when possible during training to improve knowledge transfer.
In order to understand the role of training as an auxiliary objective (rather than entirely replacing the objective) we vary the value of λ to interpolate between two modes: λ = 0 is simply QMIX (Attention), while λ = 1 trains exclusively with random factorization. Our results (Figure 5) show that, similar to regularization methods such as Dropout (Srivastava et al., 2014), there is a sweet spot where performance is maximized before collapsing catastrophically. Training exclusively with random factorization does not learn anything significant. This failure is likely due to the fact that we use the full context in our targets for learning with imagined scenarios as well as when executing our policies, so we still need to learn with it in training.
Finally, we consider a qualitative experiment to highlight the sort of common patterns that REFIL is able to leverage (Figure 6). Zealots (the only melee unit present) are weak to Colossi, so they learn to hang back and let other units engage first. Then, they jump in and intercept the enemy Zealots while all other enemy units are preoccupied, leading to a common pattern of a Zealot vs. Zealot skirmish (highlighted at t=15). REFIL enables behaviors learned in these types of sub-groups to be applied more effectively across all unique unit type configurations. By sampling groups from all entities randomly, we will occasionally end up with sub-groups that include only Zealots, and the value function predictions learned in these sub-groups can be applied not only to the episode at hand, but to any episode where a similar pattern emerges.
5 RELATED WORK
Multi-agent reinforcement learning (MARL) is a broad field encompassing cooperative (Foerster et al., 2018; Rashid et al., 2018; Sunehag et al., 2018), competitive (Bansal et al., 2018; Lanctot
et al., 2017), and mixed (Lowe et al., 2017; Iqbal & Sha, 2019) settings. This paper focuses on cooperative MARL with centralized training and decentralized execution (Oliehoek et al., 2016, CTDE). Our approach utilizes value function factorization, an approach aiming to simultaneously overcome limitations of both joint (Hausknecht, 2016) and independent (Claus & Boutilier, 1998) learning paradigms. Early attempts at value function factorization require a priori knowledge of suitable per-agent team reward decompositions or interaction dependencies. These include optimizing over local compositions of individual Q-value functions learnt from individual reward functions (Schneider et al., 1999), as well as summing individual Q-functions with individual rewards before greedy joint action selection (Russell & Zimdars, 2003). Guestrin et al. (2002) and Kok & Vlassis (2006) factorize the total Q-value function using coordination graphs based on interaction dependencies for the task at hand, similarly to max-plus approaches (Kuyer et al., 2008; Pol & Oliehoek, 2016). Recent approaches from cooperative deep multi-agent RL allow value factorizations to be learnt from experience with a single team reward function and no prior knowledge of interaction dependencies. Value-Decomposition Networks (VDN) (Sunehag et al., 2018) decompose the joint Q-value function into a sum of local utility functions used for greedy action selection. QMIX (Rashid et al., 2018) extends such additive decompositions to general monotonic functions. Several works extend QMIX to improve the expressivity of mixing functions (Son et al., 2019; Yang et al., 2020), learn latent embeddings to help exploration (Mahajan et al., 2019) or learn dynamic roles (Wang et al., 2020a), and encode knowledge of action semantics into network architectures (Wang et al., 2020b).
Several recent works have addressed the topic of generalization and transfer across environments with varying agent quantities, though the learning paradigms considered and assumptions made differ from our approach. Carion et al. (2019) devise an approach for assigning agents to tasks, assuming the existence of low-level controllers to carry out the tasks, and show that it can scale to much larger scenarios than those seen in training. Burden (2020) propose a transfer learning approach using convolutional neural networks and grid-based state representations to scale to scenarios of arbitrary size. Agarwal et al. (2019) introduce an entity message passing framework to enable agents to attend to specific entities, of which there may be a variable amount, based on their local context, similar to the multi-head attention module we use in our approach. Several approaches devise attention or graph-neural-network based models for handling variable sized inputs and focus on learning curricula to progress on increasingly large/challenging settings (Long et al., 2020; Baker et al., 2019; Wang et al., 2020c). In contrast to these curriculum learning approaches, we focus on training simultaneously on scenarios of varying sizes and specifically focus on developing a training paradigm for improving knowledge sharing across such settings to accelerate learning.
6 CONCLUSION
In this paper we consider a MARL setting where we aim to learn policies to control teams of agents in scenarios with varying types and quantities of entities. We propose REFIL, an approach that regularizes value functions to identify independence relationships between entities, in turn promoting generalization and knowledge transfer within and across multi-agent settings with varying quantities of agents. Our results show that our contributions yield performance improvements in complex cooperative tasks. In future work, we hope to explore alternative methods for learning independence relationships between entities beyond randomized partitions.
A ATTENTION LAYERS AND MODELS
Attention models have recently generated intense interest due to their ability to incorporate information across large contexts. Importantly for our purposes, they are able to process variable sized sets of inputs.
We now formally define the building blocks of our attention models. Given the input X , a matrix where the rows correspond to entities, we define an entity-wise feedforward layer as a standard fully connected layer that operates independently and identically over entities:
eFF(X; W, b) = XW + b^T,   X ∈ R^(n_x × d), W ∈ R^(d × h), b ∈ R^h.   (4)

Now, we specify the operation that defines an attention head, given the additional inputs of S ⊆ Z_[1, n_x], a set of indices that selects which rows of the input X are used to compute queries such that X_S ∈ R^(|S| × d), and M, a binary observability mask specifying which entities each query entity can observe (i.e., M_{i,j} = 1 when i ∈ S can incorporate information from j ∈ Z_[1, n_x] into its local context):

Atten(S, X, M; W^Q, W^K, W^V) = softmax( mask( QK^T / √h, M ) ) V ∈ R^(|S| × h),   (5)
Q = X_{S,∗} W^Q,   K = X W^K,   V = X W^V,   M ∈ {0, 1}^(|S| × n_x),   W^Q, W^K, W^V ∈ R^(d × h).   (6)

The mask(Y, M) operation takes two equal-sized matrices and fills the entries of Y with −∞ at the indices where M is equal to 0. After the softmax, these entries become zero, thus preventing the attention mechanism from attending to specific entities. This masking procedure is used in our case to uphold partial observability. Only one attention layer is permitted in the decentralized execution setting; otherwise information from unseen agents can be propagated through agents that are seen. W^Q, W^K, and W^V are all learnable parameters of this layer. Queries, Q, can be thought of as vectors specifying the type of information that an entity would like to select from others, while keys, K, can be thought of as specifying the type of information that an entity possesses; finally, values, V, hold the information that is actually shared with other entities.
We define multi-head attention as the parallel computation of attention heads:
MHA(S, X, M) = concat( Atten(S, X, M; W^Q_j, W^K_j, W^V_j), j ∈ (1 . . . n_h) ).   (7)
The size of the parameters of an attention layer does not depend on the number of input entities. Furthermore, we receive an output vector for each query vector.
B AUGMENTING QMIX WITH ATTENTION
The standard QMIX algorithm relies on a fixed number of entities in three places: inputs of the agent-specific utility functions Q^a, inputs of the hypernetwork, and the number of utilities entering the mixing network, that is, the output of the hypernetwork. QMIX uses multi-layer perceptrons for which all these quantities have to be of fixed size. In order to adapt QMIX to the variable agent quantity setting, such that we can apply a single model across all episodes, we require components that accept variable sized sets of entities as inputs. By utilizing attention mechanisms, we can design components that are no longer dependent on a fixed number of entities taken as input. We define the following inputs: X^E_{e,i} := s^e_i, 1 ≤ i ≤ d, e ∈ E; M^µ_{a,e} := µ(s^a, s^e), a ∈ A, e ∈ E. The matrix X^E is the global state s reshaped into a matrix with a row for each entity, and M^µ is a binary observability matrix which enables decentralized execution, determining which entities are visible to each agent.
B.1 UTILITY NETWORKS
While the standard agent utility functions map a flat observation, whose size depends on the number of entities in the environment, to a utility for each action, our attention-utility functions can take in a variable sized set of entities and return a utility for each action. The attention layer output for agent a is computed as MHA({a}, X, M^µ), where X is a row-wise transformation of X^E (e.g., an entity-wise feedforward layer). If agents share parameters, the layer can be computed in parallel for all agents by providing A instead of {a}, which we do in practice.
B.2 GENERATING DYNAMIC SIZED MIXING NETWORKS
Another challenge in devising a QMIX algorithm for variable agent quantities is to adapt the hypernetworks that generate weights for the mixing network. Since the mixing network takes in utilities from each agent, we must generate feedforward mixing network parameters that change in size depending on the number of agents present, while incorporating global state information. Conveniently, the number of output vectors of a MHA layer depends on the cardinality of input set S and we can therefore generate mixing parameters of the correct size by using S = A and concatenating the vectors to form a matrix with one dimension size depending on the number of agents and the other depending on the number of hidden dimensions. Attention-based QMIX (QMIX (Attention)) trains these models using the standard DQN loss in Equation 2.
Our two-layer mixing network requires the following parameters to be generated: W_1 ∈ R_+^(|A| × h^m), b_1 ∈ R^(h^m), w_2 ∈ R_+^(h^m), b_2 ∈ R, where h^m is the hidden dimension of the mixing network and |A| is the number of agents. Note from Eq. (5) that the output size of the layer depends on the size of the query set. As such, using attention layers, we can generate a matrix of size |A| × h^m by specifying the set of agents, A, as the set of queries S from Eq. (5). We do not need observability masking since hypernetworks are only used during training and can be fully centralized. For each of the four components of the mixing network (W_1, b_1, w_2, b_2), we introduce a hypernetwork that generates parameters of the correct size. Thus, for the parameters that are vectors (b_1 and w_2), we average the matrix generated by the attention layer across the |A|-sized dimension, and for b_2, we average all elements. This procedure enables the dynamic generation of mixing networks whose input size varies with the number of agents. Assuming q = [Q^1(τ^1, u^1), . . . , Q^n(τ^n, u^n)], Q^tot is computed as:

Q^tot(s, τ, u) = σ( (q^T W_1) + b_1^T ) w_2 + b_2,   (8)
where σ is an ELU nonlinearity (Clevert et al., 2015).
C ENVIRONMENT DETAILS
C.1 STARCRAFT WITH VARIABLE AGENTS AND ENEMIES
The standard version of SMAC loads map files with pre-defined and fixed unit types, where the global state and observations are flat vectors with segments corresponding to each agent and enemy. Partial observability is implemented by zeroing out segments of the observations corresponding to unobserved agents. The size of these vectors changes depending on the number of agents placed in the map file. Furthermore, the action space consists of movement actions as well as separate actions to attack each enemy unit. As such the action space also changes as the number of agents changes.
Our version loads empty map files and programmatically generates agents, allowing greater flexibility in terms of the units present to begin each episode. The global state is split into a list of equal-sized entity descriptor vectors (for both agents and enemies), and partial observability is handled by generating a matrix that shows what entities are visible to each agent. The variable-sized action space is handled by randomly assigning each enemy a tag at the beginning of each episode and designating an action to attack each possible tag, of which there are a maximum number (i.e. the maximum possible number of enemies across all initializations). Agents are able to see the tag of the enemies they observe and can select the appropriate action that matches this tag in order to attack a specific enemy.
D EXPERIMENTAL DETAILS
Our experiments were performed on a desktop machine with a 6-core Intel Core i7-6800K CPU and 3 NVIDIA Titan Xp GPUs, and a server with 2 16-core Intel Xeon Gold 6154 CPUs and 10 NVIDIA Titan Xp GPUs. Each experiment is run with 8 parallel environments for data collection and a single GPU. REFIL takes about 24 hours to run for 10M steps on STARCRAFT. QMIX (Attention) takes
about 16 hours for the same number of steps on STARCRAFT. Reported times are on the desktop machine and the server runs approximately 15% faster due to more cores being available for running the environments in parallel.
E HYPERPARAMETERS
Hyperparameters were based on the PyMARL (Samvelyan et al., 2019) implementation of QMIX and are listed in Table 2. All hyperparameters are the same in all STARCRAFT settings. Since we train for 10 million timesteps (as opposed to the typical 2 million in standard SMAC), we extend the epsilon annealing period (for epsilon-greedy exploration) from 50,000 steps to 500,000 steps. For hyperparameters new to our approach (hidden dimensions of attention layers, number of attention heads, λ weighting of imagined loss), the specified values in Table 2 were the first values tried, and we found them to work well. The robustness of our approach to hyperparameter settings, as well as the fact that we do not tune hyperparameters per environment, is a strong indicator of the general applicability of our method.
F ADDITIONAL RESULTS
We test a modified non-attention version of our approach along with state of the art methods on the standard version of SMAC, where entity types are constant at the start of each episode. Since the number and type of agents and enemies is constant at each episode, observations and states can be represented as fixed-size vectors. We can thus use MLPs as models (as is standard in the literature) for these tasks and adapt our approach to suit this setting while comparing to unmodified versions of existing approaches. Rather than masking an attention mechanism, we simply zero out the features in the observations and states that correspond to entities we would like to mask out. These experiments are performed in order to compare our approach to results validated in the literature.
We compare against QMIX (Rashid et al., 2018) and VDN (Sunehag et al., 2018), as well as an ablation of our approach that uses additive mixing (a la VDN) of entity partition factors instead of a mixing network which we call REFIL (VDN). We use the architectures and hyperparameters from the QMIX (Rashid et al., 2018) paper in these settings.
Results can be found in Figure 7. While we expect REFIL to be most effective in the setting of varying types and quantities of agents, we still find that it improves on QMIX in 2 of the 3 scenarios tested. In the standard SMAC benchmarks, we find our approach is able to match or outperform the best baseline across all settings. Specifically, our factorization method (which builds on QMIX) improves the performance of QMIX in 2 of 3 settings tested. As far as we are aware, REFIL outperforms all reported results in the literature on “2c vs 64 zg” (classified as a “hard” task in SMAC). The relative improvement over QMIX, combined with the fact that it does not ever appear to hurt performance, indicates that the benefits of our method are not limited to settings with varying types and quantities of agents, though the positive effects are more pronounced there.
1. What is the main contribution of the paper, and how does it relate to multi-agent learning?
2. What are the strengths of the proposed method, particularly in its ability to handle irrelevant entities?
3. What are some concerns or limitations regarding the method's application to more than two groups?
4. How does the reviewer assess the clarity and quality of the paper's content, including the figures and environments used?
5. What are some suggestions for improving the paper, such as providing more detailed explanations or comparing the results to other relevant baselines? | Review | Review
The paper proposes a method for randomized entity-wise factorization in multi-agent learning to improve learning efficiency. The idea is inspired by the irrelevant entities present in an agent's observational view and how removing those could aid the learning process. Agents are randomly divided into groups so that an agent can separately measure the influence of entities present in the same group and of entities present in the other groups. Since the groups are randomized, this helps the agent to create groups of variable size based on its utility prediction of in-group and out-of-group entities.
Although the paper only uses two groups for derivation and experiments, it claims that the same method can be applied to more than two groups, but this is yet to be demonstrated.
The paper is easy to read and the figures are excellent and self-explanatory. The environments chosen are also indicative of the importance of each component. The authors also found that randomized factoring performs better empirically than using domain knowledge, while the state of the art lies in the combination of the two. Further, the idea of training a variable-sized hypernetwork is quite fascinating and fits well in the overall framework.
Some questions: Figure 3b does not show results until convergence; please include the full plots.
I think it would be better if the SMAC setup were explained in more detail. I couldn't understand how the tagging is done (in the Appendix). Does it deterministically tie an action to an enemy?
QTRAN has been shown to perform better than QMIX in competitive domains. I would encourage the authors to compare the results with this baseline too. |
ICLR | Title
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
Abstract
Real world multi-agent tasks often involve varying types and quantities of agents and non-agent entities; however, agents within these tasks rarely need to consider all others at all times in order to act effectively. Factored value function approaches have historically leveraged such independences to improve learning efficiency, but these approaches typically rely on domain knowledge to select fixed subsets of state features to include in each factor. We propose to utilize value function factoring with random subsets of entities in each factor as an auxiliary objective in order to disentangle value predictions from irrelevant entities. This factoring approach is instantiated through a simple attention mechanism masking procedure. We hypothesize that such an approach helps agents learn more effectively in multi-agent settings by discovering common trajectories across episodes within sub-groups of agents/entities. Our approach, Randomized Entity-wise Factorization for Imagined Learning (REFIL), outperforms all strong baselines by a significant margin in challenging StarCraft micromanagement tasks.
N/A
1 INTRODUCTION
Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents, antagonist enemies or other entities. Agents, however, can often select their optimal actions while ignoring a subset of agents/entities. For example, in the sport of soccer, a “breakaway” occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score (see Figure 1). In this situation, only the opposing goalkeeper is immediately relevant to the attacker’s success, so the attacker can safely ignore players other than the goalkeeper for the time being. By ignoring irrelevant context, the attacker can generalize this experience better to its next breakaway. Furthermore, soccer takes many forms, from casual 5 vs. 5 to full scale 11 vs. 11 matches, and breakaways occur in all. If agents can identify independent patterns of
behavior such as breakaways, they should be able to learn more efficiently as well as share their experiences across all forms of soccer.
Value function factoring approaches attempt to leverage independences between agents, such as those in our soccer example, by learning value functions as a combination of independent factors that depend on disjunct subsets of the state and action spaces (Koller & Parr, 1999). These subsets are typically fixed in advance using domain knowledge about the problem at hand, and thus are not scalable to complex domains where dependencies are unknown and may shift over time. Recent approaches in cooperative deep multi-agent reinforcement learning (MARL) factor value functions into separate components for each agent’s action and observation space in order to enable decentralized execution (e.g., VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018)). These approaches learn a utility function for each agent that only depends on the agent’s own action and its observations. The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the
global Q. These approaches are able to effectively leverage independence between agents’ local actions and observations, however, we note that observable entities are provided by the environment and are not all necessarily relevant to an agent’s value function.
We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities. Unlike classic works which factor the state or observation spaces, our work does not depend on fixed subsets of features designated through domain knowledge. Instead, we propose to randomly select sub-groups of observed entities and “imagine” the predicted utilities within these groups for each agent. These terms will not account for potential interactions outside of the groups, so we include additional factors that estimate the effect of the entities outside of each sub-group on each agent’s utility. In order to estimate the true returns, we combine all factors using a mixing network (as in QMIX, Rashid et al., 2018), which allows our model to weight factors based on the full state context. We hypothesize this approach is beneficial for two reasons: 1) randomly partitioning entities and predicting returns from disjunct factors allows our model to explore all possible independence relationships among agents and entities, teaching agents to ignore irrelevant context when possible and 2) by teaching our models when they can ignore irrelevant context, they will learn more efficiently across varied settings that share common patterns of behavior, such as breakaways in soccer. The loss for training randomized factorization is added to the QMIX loss (i.e., using full observations) as an auxiliary objective. Our reasoning is again twofold: 1) we must learn the true returns to use as a target prediction for a Q-learning loss. 2) we do not know a priori which entities are unnecessary and thus need to learn policies that act on full observations.
Our entity-wise factoring procedure can be implemented easily in practice by using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize their behavior across settings (5 vs. 5, 11 vs. 11, etc.) if they ignore irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding it attains improved performance over state-of-the-art methods.
2 BACKGROUND AND PRELIMINARIES
In this work, we consider the decentralized partially observable Markov decision process (DecPOMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019).
Dec-POMDPs with Entities are described as tuples: (S, U, O, P, r, E, A, Φ, µ). E is the set of entities in the environment. Each entity e has a state representation se, and the global state is the set s = {se | e ∈ E} ∈ S. Some entities can be agents a ∈ A ⊆ E. Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity comprise two parts: se = [fe, φe], where fe represents the description of an entity’s current state (e.g., position, orientation, velocity, etc.) while φe ∈ Φ represents the entity’s type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity’s type affects the state dynamics as well as the reward function and, importantly, it remains fixed for the duration of the entity’s existence. Not all entities may be visible to each agent, so we define a binary observability mask: µ(sa, se) ∈ {0, 1}, where agents can always observe themselves: µ(sa, sa) = 1, ∀a ∈ A. Thus, an agent’s observation is defined as oa = {se | µ(sa, se) = 1, e ∈ E} ∈ O. Each agent a can execute actions ua, and the joint action of all agents is denoted as u = {ua | a ∈ A} ∈ U. P is the state transition function which defines the probability P(s′ | s, u). r(s, u) is the reward function which maps the global state and joint actions to a single scalar reward.
We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft) in which case they no longer affect transitions and rewards. Since s and u are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present {φe|e ∈ E} is fixed across episodes. We are particularly interested in cases where quantity and types of entities are varied between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases.
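For concreteness, one possible in-memory representation of this entity-based state and observability mask is sketched below; the feature dimensionality, number of types, and the random mask used here are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# A minimal sketch of the entity-based state representation described above.
# Each entity e has state s^e = [f^e, phi^e]: continuous features f^e plus a
# one-hot type phi^e. Sizes below (3 features, 2 types) are assumptions.

n_entities, n_feats, n_types = 5, 3, 2
features = np.random.randn(n_entities, n_feats)                        # f^e for each entity
types = np.eye(n_types)[np.random.randint(n_types, size=n_entities)]   # one-hot phi^e
state = np.concatenate([features, types], axis=-1)                     # rows are s^e

# Binary observability mask mu(s^a, s^e): here agents are entities 0..2 and,
# purely for illustration, visibility is drawn at random.
n_agents = 3
obs_mask = np.random.rand(n_agents, n_entities) > 0.5
obs_mask[np.arange(n_agents), np.arange(n_agents)] = True  # agents always see themselves

# Agent a's observation o^a is the set of entity rows its mask row selects.
obs_agent_0 = state[obs_mask[0]]
```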
Learning for Dec-POMDPs We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP. Q-learning is specifically concerned with learning an accurate action-value function Qtot (defined below), and using this function to select the actions that maximize expected returns. The optimal Q-function for the Dec-POMDP setting is defined as:
$$Q^{tot}(s,\mathbf{u}) := \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, r(s_t,\mathbf{u}_t) \;\middle|\; s_0 = s,\ \mathbf{u}_0 = \mathbf{u},\ s_{t+1} \sim P(\cdot\,|\,s_t,\mathbf{u}_t),\ \mathbf{u}_{t+1} = \arg\max Q^{tot}(s_{t+1},\cdot)\right] = r(s,\mathbf{u}) + \gamma\, \mathbb{E}\left[\max Q^{tot}(s',\cdot) \;\middle|\; s' \sim P(\cdot\,|\,s,\mathbf{u})\right]. \quad (1)$$
Partial observability is typically handled by using the history of actions and observations as a proxy for state, typically processed by a recurrent neural network (RNN, Hausknecht & Stone, 2015): $Q^{tot}_\theta(\tau_t, \mathbf{u}_t) \approx Q^{tot}(s_t, \mathbf{u}_t)$, where the trajectory (i.e., action-observation history) is $\tau^a_t := (o^a_0, u^a_0, \ldots, o^a_t)$ and $\tau_t := \{\tau^a_t\}_{a \in \mathcal{A}}$.
Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions that are trained by minimizing the loss function:
$$\mathcal{L}(\theta) := \mathbb{E}\left[\Big(\underbrace{r_t + \gamma\, Q^{tot}_{\bar{\theta}}\big(\tau_{t+1}, \arg\max Q^{tot}_{\theta}(\tau_{t+1}, \cdot)\big)}_{y^{tot}_t} - Q^{tot}_{\theta}(\tau_t, \mathbf{u}_t)\Big)^2 \;\middle|\; (\tau_t, \mathbf{u}_t, r_t, \tau_{t+1}) \sim \mathcal{D}\right], \quad (2)$$
where $\bar{\theta}$ are the parameters of a target network that is copied from $\theta$ periodically to improve stability (Mnih et al., 2015) and $\mathcal{D}$ is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically $\epsilon$-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize $Q^{tot}_{\theta}$ for the target network $Q^{tot}_{\bar{\theta}}$.
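A hedged PyTorch sketch of this double-Q target and loss is shown below. It treats Q_tot as a single network over joint actions for brevity (the factored computation is described in the following sections), and the function and batch-key names are illustrative, not the authors' implementation.

```python
import torch

def dqn_loss(q_online, q_target, batch, gamma=0.99):
    """Double-Q loss of Eq. (2): illustrative sketch, not the official code.

    q_online / q_target map trajectories to joint-action values Q_tot; here we
    assume they return tensors of shape [batch, n_joint_actions] and that
    batch["u_t"] holds taken-action indices of shape [batch, 1] (int64).
    """
    q_taken = q_online(batch["tau_t"]).gather(1, batch["u_t"])        # Q_tot(tau_t, u_t)
    with torch.no_grad():
        greedy_u = q_online(batch["tau_tp1"]).argmax(dim=1, keepdim=True)
        q_next = q_target(batch["tau_tp1"]).gather(1, greedy_u)       # double Q-learning
        # Episode-termination masking added for completeness (not shown in Eq. 2).
        y = batch["r_t"] + gamma * (1 - batch["done"]) * q_next       # target y_tot
    return ((y - q_taken) ** 2).mean()
```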
Value Function Factorization Centralized training for decentralized execution (CTDE) has been a major focus in recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, with each depending only on a single agent’s history of actions and observations, $Q^a(\tau^a, u^a)$. This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint action value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: $Q^{tot}(\tau, \mathbf{u}) := \sum_a Q^a(\tau^a, u^a)$. QMIX (Rashid et al., 2018) extends this approach to use a more expressive factorization. We describe QMIX and how we build our randomized factorization approach on top of it in Section 3.1.
Attention Mechanisms for MARL Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable sized sets of fixed length vectors (in our case entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context.
We define X as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state s can be represented in matrix form as $X^{\mathcal{E}}$ where $X_{e,*} = s^e$. Our models consist of entity-wise feedforward layers (denoted as eFF(X)) and multi-head attention layers (denoted as MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities. Multi-head attention layers serve as a mechanism to integrate information across entities. These take in three arguments: the set of agents for which to compute an output vector, A, the matrix $X \in \mathbb{R}^{|\mathcal{E}| \times d}$ where d is the dimensionality of the input representations, and a mask $M \in \mathbb{R}^{|\mathcal{A}| \times |\mathcal{E}|}$. The layer outputs a matrix $H \in \mathbb{R}^{|\mathcal{A}| \times h}$ where h is the hidden dimension of the layer. The row $H_{a,*}$ corresponds to a weighted sum of linearly transformed representations from all entities selected by agent a. Importantly, if the entry of the mask $M_{a,e} = 0$, then entity e’s representation cannot be included in $H_{a,*}$. Masking serves two important purposes for us: 1) It enables decentralized execution by providing the mask $M^\mu_{a,e} = \mu(s^a, s^e)$, such that agents can only see entities observable by them in the environment, and 2) It enables us to “imagine” the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-
head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX are described in detail in the Appendix.
3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING
We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent’s choice of optimal actions is often independent of a subset of its observed entities (cf. soccer breakaway example from Section 1), in addition to the choice of other agents’ actions. Furthermore, we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents, as they are better able to identify shared patterns of behavior (e.g., breakaways exist in all forms of soccer). We do not know a priori which entities an agent can disregard, so we must consider all possible sub-groups of entities. As such, we propose to factor value functions by imagining returns in random sub-groups.
3.1 METHOD
QMIX (Rashid et al., 2018) relaxes the representational constraints of VDN (Sunehag et al., 2018) by allowing the joint value function $Q^{tot}$ to be a non-linear monotonic function with respect to the agent-specific utilities $Q^a$: $Q^{tot} = g\big(Q^1(\tau^1, u^1; \theta_Q), \ldots, Q^{|\mathcal{A}|}(\tau^{|\mathcal{A}|}, u^{|\mathcal{A}|}; \theta_Q); \theta_g\big)$. The parameters of the mixing function $\theta_g$ are generated by a hyper-network (Ha et al., 2017) conditioning on the global state s: $\theta_g = h(s; \theta_h)$. Every state can therefore have a different mixing function, but the mixing’s monotonicity maintains decentralizability, as agents can maximize $Q^{tot}$ without communication. All parameters $\theta = \{\theta_Q, \theta_h\}$ are trained with the DQN loss of Equation 2. We extend QMIX with attention layers both to encode variable sized sets of entities observed by each per-agent utility $Q^a$ and to mix the utilities of all agents $a \in \mathcal{A}$. Partial observability is implemented by a mask $M^\mu_{a,e} = \mu(s^a, s^e), \forall a \in \mathcal{A}, \forall e \in \mathcal{E}$ that is provided to attention layers as described in Section 2. Building on QMIX, for each agent we generate a separate utility that only observes the state features of agents within its randomly selected sub-group, $Q^a_I(\tau^a_I, u^a)$, as well as a term that accounts for interactions outside of its group, $Q^a_O(\tau^a_O, u^a)$, then mix these 2n (2 for each agent) utilities to form $Q^{tot}$. Importantly, since the mixing network is generated by the full state context, our model can weight factors contextually. For example, if agent a’s sampled sub-group contains all relevant information to compute its utility such that $Q^a_I \approx Q^a$, then the mixing network can weight $Q^a_I$ more heavily than $Q^a_O$. Otherwise, the network learns to balance $Q^a_I$ and $Q^a_O$ for each agent, using the full state as context, in order to estimate $Q^{tot}$. We train with these random factorizations in addition to the original QMIX objective. Treating factorization as an auxiliary task, rather than as a representational constraint, allows our model to retain the expressivity of QMIX value functions (without sub-group partitions) while exploiting the potential independence between agents and other entities. We note that our auxiliary objective is only used in training, and execution in the environment does not use random factorization.
3.2 IMPLEMENTATION
The mechanism behind our entity-wise factorization relies on a simple attention masking procedure. In order to compute in-group utilities $Q^a_I(\tau^a_I, u^a)$ and out-group utilities $Q^a_O(\tau^a_O, u^a)$, we first randomly partition all entities in $\mathcal{E}$ into two disjunct groups (held fixed for an episode), indicated by a random binary vector $m \in \{0,1\}^{|\mathcal{E}|}$ (see footnote 1). The entry $m_e$ determines whether entity e is in the first group, and we can take the negation $\neg m_e$ to represent whether e is in the second group. The subset of all agents is denoted as $m_{\mathcal{A}} := [m_a]_{a \in \mathcal{A}}$. From these vectors, we can construct attention masks $M \in \mathbb{R}^{|\mathcal{A}| \times |\mathcal{E}|}$. For example, using the mask $M_1 = m_{\mathcal{A}} m^\top$ would prevent agents in the first group from “seeing” outside their group, since $[M_1]_{a,e} = 1$ only if agent a and entity e are in the same group. This can be added to a similarly produced mask $M_2 = \neg m_{\mathcal{A}} \neg m^\top$ to create $M_I$, a mask that only allows all agents to see the entities within their distinct groups. We construct masks for agents to see within ($M_I$) and out of ($M_O$) their groups, then combine with observability masks $M^\mu$ as such:
$$M^\mu_I := M^\mu \wedge M_I, \quad M^\mu_O := M^\mu \wedge M_O, \quad \text{with} \quad M_I := m_{\mathcal{A}} m^\top \vee \neg m_{\mathcal{A}} \neg m^\top, \quad M_O := \neg M_I. \quad (3)$$
Footnote 1: We first draw $p \in (0, 1)$ uniformly, followed by $|\mathcal{E}|$ independent draws from a Bernoulli(p) distribution.
The entry $M^\mu_I[a, e]$ determines both whether agent a can see entity e and whether entity e is in agent a’s group; the entry $M^\mu_O[a, e]$ is the same but for entities out of a’s group. We can use these masks in our attention mechanisms to compute $Q^a_I(\tau^a_I, u^a)$, which represents the predicted utility of agent a within its group, and $Q^a_O(\tau^a_O, u^a)$, a residual term that accounts for the utility of interactions that a would have with the other group.
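The random partition and the mask construction of Eq. (3) translate almost directly into code. The sketch below is illustrative rather than the authors' exact implementation; in particular, it assumes agents occupy the first rows of the entity ordering.

```python
import torch

def build_refil_masks(obs_mask: torch.Tensor, n_agents: int):
    """Sketch of the randomized in-/out-of-group masks of Eq. (3).

    obs_mask: [n_agents, n_entities] binary observability matrix M^mu.
    Returns (M^mu_I, M^mu_O) of the same shape.
    """
    n_entities = obs_mask.shape[1]
    # Footnote 1: draw p ~ U(0,1), then |E| Bernoulli(p) draws to form m.
    p = torch.rand(1)
    m = (torch.rand(n_entities) < p).float()            # m in {0,1}^|E|
    m_A = m[:n_agents]                                  # assumes agents are the first entities

    # M_I = m_A m^T  v  (1-m_A)(1-m)^T ; the two terms never overlap, so + acts as OR.
    M_I = torch.outer(m_A, m) + torch.outer(1 - m_A, 1 - m)
    M_O = 1 - M_I
    return obs_mask * M_I, obs_mask * M_O               # M^mu ∧ M_I , M^mu ∧ M_O

# Example with 3 agents and 5 entities under full observability.
Mu = torch.ones(3, 5)
M_I, M_O = build_refil_masks(Mu, n_agents=3)
```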
Given each agent’s predicted utility factors for both in-group and out-of-group, we combine these into a $Q^{tot}$ such that we can use the target from the full scenario ($y^{tot}_t$ in (2)) using a mixing network as in QMIX. This network’s first layer typically takes n inputs, one for each agent. Since we have 2n factors, we simply concatenate two generated versions of the input layer (using $M_I$ and $M_O$). We then apply the network to the concatenated utilities $Q^a_I(\tau^a_I, u^a)$ and $Q^a_O(\tau^a_O, u^a)$ of all agents a, to compute the predicted value $Q^{tot}_{aux}$. This procedure is visualized in Figure 2 and described in more detail in the Appendix.
Our novel approach REFIL uses $Q^{tot}_{aux}$ in place of $Q^{tot}$ in the DQN loss of (2) to get the auxiliary loss $\mathcal{L}_{aux}$. Our total loss combines both real and auxiliary losses: $\mathcal{L} := (1-\lambda)\mathcal{L}_Q + \lambda\mathcal{L}_{aux}$, where $\lambda$ is a hyper-parameter. In practice, this procedure requires two additional passes through the network (with $M^\mu_O$ and $M^\mu_I$ as masks instead of $M^\mu$) per training step. These additional passes can be parallelized by computing all necessary quantities in one batch on GPU. It is feasible to split entities into an arbitrary number i of random sub-groups without using more computation by sampling several disjunct vectors $m_i$ and combining them in the same way as we combine $m$ and $\neg m$ in Equation 3 to form $M_I$ and $M_O$. Doing so could potentially bias agents towards considering smaller subsets of entities.
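Putting these pieces together, the overall objective can be sketched as below. Here `compute_qtot` is a hypothetical helper standing in for the attention utility networks plus the mixing network under a given set of masks; it is not part of any released code and the batch handling is simplified.

```python
# Sketch of the combined REFIL objective L = (1 - lambda) * L_Q + lambda * L_aux.
# `compute_qtot(batch, mask)` is a hypothetical helper that runs the attention
# utility networks and the mixing network under the given attention mask(s);
# the auxiliary pass reuses the same target y_tot computed from the full scenario.

def refil_loss(batch, y_tot, compute_qtot, masks, lam=0.5):
    Mu, Mu_I, Mu_O = masks                       # observability and partition masks
    q_full = compute_qtot(batch, Mu)             # standard QMIX (Attention) pass
    q_aux = compute_qtot(batch, (Mu_I, Mu_O))    # imagined pass with 2n factors
    loss_q = ((y_tot - q_full) ** 2).mean()
    loss_aux = ((y_tot - q_aux) ** 2).mean()
    return (1 - lam) * loss_q + lam * loss_aux
```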
4 EXPERIMENTAL RESULTS
In our experiments, we aim to justify the main components of REFIL: 1) randomized sub-group factorization and 2) training as an auxiliary objective. We begin with experiments in a simple domain we construct such that agents’ decisions rely only on a subset of all entities, and that subset is known, so we can compare our approach to approaches that use this domain knowledge. Then, we move
on to testing on complex StarCraft micromanagement tasks to demonstrate our method’s ability to scale to complex domains.
4.1 GROUP MATCHING GAME
We construct a group matching game, pictured in Figure 3a, where each agent only needs to consider a subset of other agents to act effectively and we know that subset as ground-truth (unlike in more complex domains such as StarCraft). As such, the task can be described as follows: Agents (of which there are na) are randomly placed in one of nc cells and assigned to one of ng groups (represented by the different colors) at the start of each episode. They can choose from three actions: move clockwise, stay, and move counter-clockwise. Their ultimate goal is to be located in the same cell as the rest of their group members, at which point an episode ends. There is no restriction on which cell agents form a group in (e.g., both groups can form in the same cell). All agents share a reward of 2.5 when any group is completed (and an equivalent penalty for a formed group breaking) as well as a penalty of -0.1 for each time step in order to encourage agents to solve the task as quickly as possible. Agents’ entity-state descriptions se include the cell that the agent is currently occupying as well as the group it belongs to (both one-hot encoded), and the task is fully-observable. Notably, agents can act optimally while ignoring agents outside of their group.
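A minimal sketch of the reward and termination logic of this game is given below. The exact termination condition and the handling of simultaneously formed or broken groups are simplified assumptions for illustration.

```python
import numpy as np

# Minimal sketch of one step of the group-matching game's reward logic.
# positions[a] is agent a's cell index, groups[a] its group id; action values
# are -1 (counter-clockwise), 0 (stay), +1 (clockwise). Assumes group ids keep
# a consistent ordering so prev_complete lines up with np.unique(groups).

def step(positions, groups, actions, n_cells, prev_complete):
    positions = (positions + actions) % n_cells
    complete = np.array([
        len(set(positions[groups == g])) == 1 for g in np.unique(groups)
    ])
    # +2.5 for each newly completed group, -2.5 for each group that breaks,
    # and a shared -0.1 per-step time penalty.
    reward = 2.5 * (complete.astype(float) - prev_complete.astype(float)).sum() - 0.1
    done = complete.all()   # assumption: episode ends once every group is formed
    return positions, reward, done, complete
```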
Ground-truth knowledge of relevant entities enables us to disentangle two aspects of our approach: the use of entity-wise factorization in general and specifically using randomly selected factors. We construct two approaches that use this knowledge to build factoring masks MI and MO which are used in place of randomly sampled groups (otherwise the methods are identical to REFIL). REFIL (Fixed Oracle) directly uses the ground truth group assignments (different at each episode) to build masks. REFIL (Randomized Oracle) randomly samples sub-groups from the ground truth groups only, rather than from all possible entities. We additionally train REFIL and QMIX (Attention) (i.e., REFIL with no auxiliary loss).
Figure 3b shows that using domain knowledge alone does not significantly improve performance in this domain (QMIX (Attention) vs. REFIL (Fixed Oracle)). In fact our randomized factorization approach is able to surpass the use of domain knowledge. The randomization in REFIL appears therefore to be crucial. One hypothesis for this phenomenon is that randomization of sub-group factors enables better knowledge sharing across diverse settings (in this case unique group assignments). For example, the situation of two agents from the same group being located in adjacent cells occurs within all possible group assignments. If sampling randomly, our approach will occasionally sample these two agents alone in their own group. Even if the rest of the context in a given episode has never been seen by the model before, as long as this sub-scenario has been seen, the model has some indication of the value associated with each action. Even when restricting the set of entities to form sub-groups with to those that we know can be relevant to each agent (REFIL (Randomized Oracle)) we find that performance does not significantly improve. These results suggest that randomized sub-group formation for REFIL is a viable strategy (vs attempting to learn which entities are relevant and selecting sub-groups from there), and the main benefit of our approach is to promote generalization across scenarios by breaking value function predictions into reusable components.
4.2 STARCRAFT
We next test on the StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). The tasks in SMAC involve micromanagement of units in order to defeat a set of enemy units in battle. Specifically, we extend SMAC to settings with variable types and quantities of agents. We hypothesize that our approach is especially beneficial in this setting, as it should encourage our models to identify independence between entities and generalize to more diverse settings as a result. The dynamic setting requires some small modifications to SMAC, though we aim to change the environment as little as possible to maintain the challenging nature of the tasks. In the standard version of SMAC, both state and action spaces depend on a fixed number of agents and enemies, so our modifications, discussed in detail in the appendix, alleviate these problems.
In our tests we evaluate on three settings we call 3-8sz, 3-8csz, and 3-8MMM. 3-8sz pits symmetrical teams of between 3 and 8 agents against each other where the agents are a combination of Zealots and Stalkers (similar to the 2s3z and 3s5z tasks in the original SMAC). 3-8csz pits symmetrical teams of between 0 and 2 Colossi and 3 to 6 Stalkers/Zealots against each other (similar to 1c3s5z). 3-8MMM pits symmetrical teams of between 0 and 2 Medics and 3 to 6 Marines/Marauders against each other (similar to MMM and MMM2). As a sanity check, we additionally modify our approach to work with non-attention models such that we can test on the original SMAC tasks against existing methods. These results (located in the appendix) show that we can significantly improve on QMIX (previously state-of-the-art) in 2 of 3 settings tested.
Ablations and Baselines We introduce several ablations of our method, as well as adaptations of existing methods to handle variable sized inputs. These comparisons are summarized in Table 1. QMIX (Attention) is our method without the auxiliary loss. REFIL (VDN) is our approach using summation to combine all factors (a la Value Decomposition Networks (Sunehag et al., 2018)) rather than a non-linear monotonic mixing network. VDN (Attention) does not include the auxiliary loss and uses summation as factor mixing. QMIX (Mean Pooling) is QMIX (Attention) with attention layers replaced by mean pooling. We also test max
pooling but find the performance to be marginally worse than mean pooling. Importantly, for pooling layers we add entity-wise linear transformations prior to the pooling operations such that the total number of parameters is comparable to attention layers.
For baselines we consider some follow-up works to QMIX that attempt to improve the mixing network to be more expressive: QTRAN (Son et al., 2019) and Qatten (Yang et al., 2020). We additionally consider an alternative mechanism for aggregating information across variable sets of entities, known as Entity Message Passing (EMP) (Agarwal et al., 2019). We specifically use the restricted communication setting where agents can only communicate with agents they observe, and we set the number of message passing steps to 3. Finally, we compare to a method that builds on QMIX by attempting to learn dynamic roles that depend on the context each agent observes: ROMA (Wang et al., 2020a). For all approaches designed for the standard SMAC setting, we extend them with the same multi-head attention architecture that our approach uses.
Results and Discussion Our results on challenges in dynamic STARCRAFT settings can be found in Figure 4. We find that REFIL outperforms all ablations consistently in these settings. REFIL (VDN) performs much worse than our approach and VDN (Attention), highlighting the importance of the mixing network to handle contextual dependencies between entity partitions. Since the trajectory of a subset of entities can play out differently based on the surrounding context, it’s important for our factorization approach to recognize and adjust for these situations. The mixing network handles these dependencies by a) incorporating global state information into the mixing procedure, and b) mixing utilities in a non-linear monotonic fashion, rather than summing as in VDN. As such, the increased representative capacity of the QMIX mixing network, relative to VDN, is crucial. The use of mean-pooling in place of attention also performs poorly, indicating that attention is valuable for aggregating information from variable length sets of entities.
With respect to the baselines, we also find that REFIL consistently outperforms other methods, highlighting the unique challenge of learning in such dynamic settings where entity types are variable at each episode. The improvements that ROMA, Qatten, and QTRAN show in other settings over QMIX do not appear to manifest themselves in this setting. Moreover, the entity aggregation method of EMP does not improve performance over the standard MHA module that we use, likely due to the fact that EMP is most effective in settings where partial observability is a major hindrance to successful task completion. In this way, the aims of EMP and REFIL are
opposite, as the goal of REFIL is to ignore extraneous information when possible during training to improve knowledge transfer.
In order to understand the role of training as an auxiliary objective (rather than entirely replacing the objective) we vary the value of λ to interpolate between two modes: λ = 0 is simply QMIX (Attention), while λ = 1 trains exclusively with random factorization. Our results (Figure 5) show that, similar to regularization methods such as Dropout (Srivastava et al., 2014), there is a sweet spot where performance is maximized before collapsing catastrophically. Training exclusively with random factorization does not learn anything significant. This failure is likely due to the fact that we use the full context in our targets for learning with imagined scenarios as well as when executing our policies, so we still need to learn with it in training.
Finally, we consider a qualitative experiment to highlight the sort of common patterns that REFIL is able to leverage (Figure 6). Zealots (the only melee unit present) are weak to Colossi, so they learn to hang back and let other units engage first. Then, they jump in and intercept the enemy Zealots while all other enemy units are preoccupied, leading to a common pattern of a Zealot vs. Zealot skirmish (highlighted at t=15). REFIL enables behaviors learned in these types of sub-groups to be applied more effectively across all unique unit type configurations. By sampling groups from all entities randomly, we will occasionally end up with sub-groups that include only Zealots, and the value function predictions learned in these sub-groups can be applied not only to the episode at hand, but to any episode where a similar pattern emerges.
5 RELATED WORK
Multi-agent reinforcement learning (MARL) is a broad field encompassing cooperative (Foerster et al., 2018; Rashid et al., 2018; Sunehag et al., 2018), competitive (Bansal et al., 2018; Lanctot
et al., 2017), and mixed (Lowe et al., 2017; Iqbal & Sha, 2019) settings. This paper focuses on cooperative MARL with centralized training and decentralized execution (Oliehoek et al., 2016, CTDE). Our approach utilizes value function factorization, an approach aiming to simultaneously overcome limitations of both joint (Hausknecht, 2016) and independent (Claus & Boutilier, 1998) learning paradigms. Early attempts at value function factorisation require a priori knowledge of suitable per-agent team reward decompositions or interaction dependencies. These include optimising over local compositions of individual Q-value functions learnt from individual reward functions (Schneider et al., 1999), as well as summing individual Q-functions with individual rewards before greedy joint action selection (Russell & Zimdars, 2003). Guestrin et al. (2002); Kok & Vlassis (2006) factorise the total Q-value function using coordination graphs based on interaction dependencies for the task at hand, similarly to max-plus approaches (Kuyer et al., 2008; Pol & Oliehoek, 2016). Recent approaches from cooperative deep multi-agent RL allow for value factorisations to be learnt from experience from a single team reward function and no prior knowledge of interaction dependencies. Value-Decomposition Networks (VDN) (Sunehag et al., 2018) decompose the joint Q-value function into a sum of local utility functions used for greedy action selection. QMIX (Rashid et al., 2018) extends such additive decompositions to general monotonic functions. Several works extend QMIX to improve the expressivity of mixing functions (Son et al., 2019; Yang et al., 2020), learn latent embeddings to help exploration (Mahajan et al., 2019) or learn dynamic roles (Wang et al., 2020a), and encode knowledge of action semantics into network architectures (Wang et al., 2020b).
Several recent works have addressed the topic of generalization and transfer across environments with varying agent quantities, though the learning paradigms considered and assumptions made differ from our approach. Carion et al. (2019) devise an approach for assigning agents to tasks, assuming the existence of low-level controllers to carry out the tasks, and show that it can scale to much larger scenarios than those seen in training. Burden (2020) propose a transfer learning approach using convolutional neural networks and grid-based state representations to scale to scenarios of arbitrary size. Agarwal et al. (2019) introduce an entity message passing framework to enable agents to attend to specific entities, of which there may be a variable amount, based on their local context, similar to the multi-head attention module we use in our approach. Several approaches devise attention or graph-neural-network based models for handling variable sized inputs and focus on learning curricula to progress on increasingly large/challenging settings (Long et al., 2020; Baker et al., 2019; Wang et al., 2020c). In contrast to these curriculum learning approaches, we focus on training simultaneously on scenarios of varying sizes and specifically focus on developing a training paradigm for improving knowledge sharing across such settings to accelerate learning.
6 CONCLUSION
In this paper we consider a MARL setting where we aim to learn policies to control teams of agents in scenarios with varying types and quantities of entities. We propose REFIL, an approach that regularizes value functions to identify independence relationships between entities, in turn promoting generalization and knowledge transfer within and across multi-agent settings with varying quantities of agents. Our results show that our contributions yield performance improvements in complex cooperative tasks. In future work, we hope to explore alternative methods for learning independence relationships between entities beyond randomized partitions.
A ATTENTION LAYERS AND MODELS
Attention models have recently generated intense interest due to their ability to incorporate information across large contexts. Importantly for our purposes, they are able to process variable sized sets of inputs.
We now formally define the building blocks of our attention models. Given the input X , a matrix where the rows correspond to entities, we define an entity-wise feedforward layer as a standard fully connected layer that operates independently and identically over entities:
$$\text{eFF}(X; W, b) = XW + b^\top, \quad X \in \mathbb{R}^{n_x \times d},\ W \in \mathbb{R}^{d \times h},\ b \in \mathbb{R}^{h} \quad (4)$$
Now, we specify the operation that defines an attention head, given the additional inputs of $S \subseteq \mathbb{Z}_{[1,n_x]}$, a set of indices that selects which rows of the input X are used to compute queries such that $X_S \in \mathbb{R}^{|S| \times d}$, and M, a binary observability mask specifying which entities each query entity can observe (i.e., $M_{i,j} = 1$ when $i \in S$ can incorporate information from $j \in \mathbb{Z}_{[1,n_x]}$ into its local context):
$$\text{Atten}(S, X, M; W^Q, W^K, W^V) = \text{softmax}\left(\text{mask}\left(\frac{QK^\top}{\sqrt{h}}, M\right)\right) V \in \mathbb{R}^{|S| \times h} \quad (5)$$
$$Q = X_{S,*} W^Q,\quad K = X W^K,\quad V = X W^V, \quad M \in \{0,1\}^{|S| \times n_x},\quad W^Q, W^K, W^V \in \mathbb{R}^{d \times h} \quad (6)$$
The mask(Y, M) operation takes two equal-sized matrices and fills the entries of Y with −∞ in the indices where M is equal to 0. After the softmax, these entries become zero, thus preventing the attention mechanism from attending to specific entities. This masking procedure is used in our case to uphold partial observability. Only one attention layer is permitted in the decentralized execution setting; otherwise information from unseen agents can be propagated through agents that are seen. $W^Q$, $W^K$, and $W^V$ are all learnable parameters of this layer. Queries, Q, can be thought of as vectors specifying the type of information that an entity would like to select from others, while keys, K, can be thought of as specifying the type of information that an entity possesses, and finally, values, V, hold the information that is actually shared with other entities.
We define multi-head attention as the parallel computation of attention heads as such:
$$\text{MHA}(S, X, M) = \text{concat}\left(\text{Atten}\left(S, X, M; W^Q_j, W^K_j, W^V_j\right),\ j \in (1 \ldots n_h)\right) \quad (7)$$
The size of the parameters of an attention layer does not depend on the number of input entities. Furthermore, we receive an output vector for each query vector.
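The masked attention head of Eqs. (5)-(7) can be sketched in PyTorch as below. This is a hedged, loop-over-heads illustration of the layer rather than the authors' implementation; layer sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedMHA(nn.Module):
    """Sketch of MHA(S, X, M) from Eqs. (5)-(7): multi-head attention over
    entities with a binary mask M deciding which entities each query may attend to."""

    def __init__(self, d_in, d_head, n_heads):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.ModuleDict({
                "q": nn.Linear(d_in, d_head, bias=False),
                "k": nn.Linear(d_in, d_head, bias=False),
                "v": nn.Linear(d_in, d_head, bias=False),
            }) for _ in range(n_heads)
        ])
        self.scale = d_head ** 0.5

    def forward(self, query_idx, X, M):
        # X: [n_entities, d_in]; M: [|S|, n_entities] binary mask; query_idx: indices S.
        outs = []
        for head in self.heads:
            Q, K, V = head["q"](X[query_idx]), head["k"](X), head["v"](X)
            logits = Q @ K.t() / self.scale                      # [|S|, n_entities]
            logits = logits.masked_fill(M == 0, float("-inf"))   # mask(., M) of Eq. (5)
            outs.append(F.softmax(logits, dim=-1) @ V)           # Atten(S, X, M)
        return torch.cat(outs, dim=-1)                           # concat over heads (Eq. 7)
```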
B AUGMENTING QMIX WITH ATTENTION
The standard QMIX algorithm relies on a fixed number of entities in three places: inputs of the agent-specific utility functions Qa, inputs of the hypernetwork, and the number of utilities entering the mixing network, that is, the output of the hypernetwork. QMIX uses multi-layer perceptrons for which all these quantities have to be of fixed size. In order to adapt QMIX to the variable agent quantity setting, such that we can apply a single model across all episodes, we require components that accept variable sized sets of entities as inputs. By utilizing attention mechanisms, we can design components that are no longer dependent on a fixed number of entities taken as input. We define the following inputs: $X^{\mathcal{E}}_{e,i} := s^e_i,\ 1 \le i \le d,\ e \in \mathcal{E}$; $M^\mu_{a,e} := \mu(s^a, s^e),\ a \in \mathcal{A},\ e \in \mathcal{E}$. The matrix $X^{\mathcal{E}}$ is the global state s reshaped into a matrix with a row for each entity, and $M^\mu$ is a binary observability matrix which enables decentralized execution, determining which entities are visible to each agent.
B.1 UTILITY NETWORKS
While the standard agent utility functions map a flat observation, whose size depends on the number of entities in the environment, to a utility for each action, our attention-utility functions can take in a variable sized set of entities and return a utility for each action. The attention layer output for agent a is computed as MHA({a}, X, $M^\mu$), where X is a row-wise transformation of $X^{\mathcal{E}}$ (e.g.,
an entity-wise feedforward layer). If agents share parameters, the layer can be computed in parallel for all agents by providing A instead of {a}, which we do in practice.
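A hedged sketch of such an attention-based utility network is shown below, reusing the MaskedMHA sketch from Appendix A. The use of a GRU cell for the trajectory and the specific layer sizes are assumptions in the spirit of PyMARL-style agents, not the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionUtilityNet(nn.Module):
    """Sketch of an agent utility network Q^a(tau^a, .) over a variable entity set."""

    def __init__(self, d_entity, d_hidden, n_actions, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(d_entity, d_hidden)           # entity-wise feedforward
        self.attn = MaskedMHA(d_hidden, d_hidden // n_heads, n_heads)
        self.rnn = nn.GRUCell(d_hidden, d_hidden)            # trajectory summary (assumption)
        self.q_out = nn.Linear(d_hidden, n_actions)

    def forward(self, X_entities, obs_mask, agent_ids, h_prev):
        X = F.relu(self.embed(X_entities))                   # [n_entities, d_hidden]
        ctx = self.attn(agent_ids, X, obs_mask)              # MHA(A, X, M^mu): [n_agents, d_hidden]
        h = self.rnn(ctx, h_prev)                            # recurrent hidden state per agent
        return self.q_out(h), h                              # per-action utilities, new hidden state
```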
B.2 GENERATING DYNAMIC SIZED MIXING NETWORKS
Another challenge in devising a QMIX algorithm for variable agent quantities is to adapt the hypernetworks that generate weights for the mixing network. Since the mixing network takes in utilities from each agent, we must generate feedforward mixing network parameters that change in size depending on the number of agents present, while incorporating global state information. Conveniently, the number of output vectors of an MHA layer depends on the cardinality of the input set S, and we can therefore generate mixing parameters of the correct size by using S = A and concatenating the vectors to form a matrix with one dimension depending on the number of agents and the other on the number of hidden dimensions. Attention-based QMIX (QMIX (Attention)) trains these models using the standard DQN loss in Equation 2.
Our two-layer mixing network requires the following parameters to be generated: $W_1 \in \mathbb{R}_+^{|\mathcal{A}| \times h^m}$, $b_1 \in \mathbb{R}^{h^m}$, $w_2 \in \mathbb{R}_+^{h^m}$, $b_2 \in \mathbb{R}$, where $h^m$ is the hidden dimension of the mixing network and $|\mathcal{A}|$ is the number of agents. Note from Eq. (5) that the output size of the layer is dependent on the size of the query set. As such, using attention layers, we can generate a matrix of size $|\mathcal{A}| \times h^m$ by specifying the set of agents, $\mathcal{A}$, as the set of queries S from Eq. (5). We do not need observability masking since hypernetworks are only used during training and can be fully centralized. For each of the four components of the mixing network ($W_1$, $b_1$, $w_2$, $b_2$), we introduce a hypernetwork that generates parameters of the correct size. Thus, for the parameters that are vectors ($b_1$ and $w_2$), we average the matrix generated by the attention layer across the $|\mathcal{A}|$-sized dimension, and for $b_2$, we average all elements. This procedure enables the dynamic generation of mixing networks whose input size varies with the number of agents. Assuming $\mathbf{q} = [Q^1(\tau^1, u^1), \ldots, Q^n(\tau^n, u^n)]$, then $Q^{tot}$ is computed as:
$$Q^{tot}(s, \tau, \mathbf{u}) = \sigma\left(\mathbf{q}^\top W_1 + b_1^\top\right) w_2 + b_2 \quad (8)$$
where σ is an ELU nonlinearity (Clevert et al., 2015).
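The attention-generated mixing step of Eq. (8) can be sketched as below. The four hypernetworks are assumed to be attention modules that return one row per agent (e.g., instances of the MaskedMHA sketch above, with output dimension $h^m$); this is an illustrative reconstruction, not the released code.

```python
import torch
import torch.nn.functional as F

def mix(q_agents, X_entities, agent_ids, hyper_w1, hyper_b1, hyper_w2, hyper_b2):
    """Sketch of Eq. (8): Q_tot = ELU(q^T W1 + b1^T) w2 + b2 with generated weights.

    q_agents: [n_agents] per-agent utilities; each hypernetwork is assumed to
    output one row per agent ([n_agents, h_m]), so the generated mixing weights
    scale with the number of agents.
    """
    full_mask = torch.ones(len(agent_ids), X_entities.shape[0])    # centralized: no masking
    W1 = torch.abs(hyper_w1(agent_ids, X_entities, full_mask))     # [n_agents, h_m], non-negative
    b1 = hyper_b1(agent_ids, X_entities, full_mask).mean(dim=0)    # [h_m], averaged over agents
    w2 = torch.abs(hyper_w2(agent_ids, X_entities, full_mask)).mean(dim=0)  # [h_m]
    b2 = hyper_b2(agent_ids, X_entities, full_mask).mean()         # scalar
    hidden = F.elu(q_agents @ W1 + b1)                              # [h_m]
    return hidden @ w2 + b2                                         # scalar Q_tot
```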
C ENVIRONMENT DETAILS
C.1 STARCRAFT WITH VARIABLE AGENTS AND ENEMIES
The standard version of SMAC loads map files with pre-defined and fixed unit types, where the global state and observations are flat vectors with segments corresponding to each agent and enemy. Partial observability is implemented by zeroing out segments of the observations corresponding to unobserved agents. The size of these vectors changes depending on the number of agents placed in the map file. Furthermore, the action space consists of movement actions as well as separate actions to attack each enemy unit. As such the action space also changes as the number of agents changes.
Our version loads empty map files and programmatically generates agents, allowing greater flexibility in terms of the units present to begin each episode. The global state is split into a list of equal-sized entity descriptor vectors (for both agents and enemies), and partial observability is handled by generating a matrix that shows what entities are visible to each agent. The variable-sized action space is handled by randomly assigning each enemy a tag at the beginning of each episode and designating an action to attack each possible tag, of which there are a maximum number (i.e. the maximum possible number of enemies across all initializations). Agents are able to see the tag of the enemies they observe and can select the appropriate action that matches this tag in order to attack a specific enemy.
D EXPERIMENTAL DETAILS
Our experiments were performed on a desktop machine with a 6-core Intel Core i7-6800K CPU and 3 NVIDIA Titan Xp GPUs, and a server with 2 16-core Intel Xeon Gold 6154 CPUs and 10 NVIDIA Titan Xp GPUs. Each experiment is run with 8 parallel environments for data collection and a single GPU. REFIL takes about 24 hours to run for 10M steps on STARCRAFT. QMIX (Attention) takes
about 16 hours for the same number of steps on STARCRAFT. Reported times are on the desktop machine and the server runs approximately 15% faster due to more cores being available for running the environments in parallel.
E HYPERPARAMETERS
Hyperparameters were based on the PyMARL (Samvelyan et al., 2019) implementation of QMIX and are listed in Table 2. All hyperparameters are the same in all STARCRAFT settings. Since we train for 10 million timesteps (as opposed to the typical 2 million in standard SMAC), we extend the epsilon annealing period (for epsilon-greedy exploration) from 50,000 steps to 500,000 steps. For hyperparameters new to our approach (hidden dimensions of attention layers, number of attention heads, λ weighting of imagined loss), the specified values in Table 2 were the first values tried, and we found them to work well. The robustness of our approach to hyperparameter settings, as well as the fact that we do not tune hyperparameters per environment, is a strong indicator of the general applicability of our method.
F ADDITIONAL RESULTS
We test a modified non-attention version of our approach along with state of the art methods on the standard version of SMAC, where entity types are constant at the start of each episode. Since the number and type of agents and enemies is constant at each episode, observations and states can be represented as fixed-size vectors. We can thus use MLPs as models (as is standard in the literature) for these tasks and adapt our approach to suit this setting while comparing to unmodified versions of existing approaches. Rather than masking an attention mechanism, we simply zero out the features in the observations and states that correspond to entities we would like to mask out. These experiments are performed in order to compare our approach to results validated in the literature.
We compare against QMIX (Rashid et al., 2018) and VDN (Sunehag et al., 2018), as well as an ablation of our approach that uses additive mixing (a la VDN) of entity partition factors instead of a mixing network which we call REFIL (VDN). We use the architectures and hyperparameters from the QMIX (Rashid et al., 2018) paper in these settings.
Results can be found in Figure 7. While we expect REFIL to be most effective in the setting of varying types and quantities of agents, we still find that it improves on QMIX in 2 of the 3 scenarios tested. In the standard SMAC benchmarks, we find our approach is able to match or outperform the best baseline across all settings. Specifically, our factorization method (which builds on QMIX) improves the performance of QMIX in 2 of 3 settings tested. As far as we are aware, REFIL outperforms all reported results in the literature on “2c vs 64 zg” (classified as a “hard” task in SMAC). The relative improvement over QMIX, combined with the fact that it does not ever appear to hurt performance, indicates that the benefits of our method are not limited to settings with varying types and quantities of agents, though the positive effects are more pronounced there. | 1. What is the main contribution of the paper regarding observation factorization?
2. What are the strengths and weaknesses of the proposed entity-wise attention network with a masking procedure?
3. Do you have any concerns or questions about the experimental results and comparisons with other works?
4. How does the reviewer assess the novelty and relevance of the paper's content in relation to previous research, such as ROMA and ASN?
5. Are there any suggestions for improving the paper, such as providing more specific examples or including more recent baseline methods? | Review | Review
This paper proposes an observation factorization method to avoid the influence of the irrelevant part of the observation on value estimation. Specifically, the authors design an entity-wise attention network with a masking procedure. This network is used to filter the irrelevant part of the original observation of each agent. The output is then used to estimate the individual Q-value, as well as being input to the mixing network to generate Q_tot. Two kinds of Q_tot are trained together by combining two loss functions linearly with a hyper-parameter. Experimental results show REFIL combined with QMIX surpasses vanilla QMIX and VDN in several SMAC scenarios.
This paper is related to the topics of ICLR. However, I think the related work is not sufficient to cover the background. More detailed comments can be found below.
Some specific comments:
It is not clear what the initialization of the two masks is, or how the masks are updated.
The authors mention there are two groups of entities. However, the entity types are also unclear. I guess SMAC only contains two entity types: alive agents and dead agents? How is an inactive entity represented?
One question is why only two kinds of groups are considered, and what would happen if there were more than two groups of entities. SMAC and soccer both contain more than two common patterns. Furthermore, it seems that the masking procedure is hard to extend to situations with a larger number of groups.
Actually, I think REFIL is similar to ROMA [1] and ASN [2] in different ways. First, REFIL considers two kinds of groups, corresponding to a simple version of ROMA with two roles. Second, REFIL does the same thing as ASN, which learns the value estimation by considering the more useful part of the observation. ASN directly divides the observation based on action semantics, while REFIL tries to learn a suitable observation factorization through entity-wise attention with masking. However, these two very relevant works are not discussed and compared in this paper.
Some suggestions:
I think the current experiments do not fully support the motivation. It would be better if the authors showed some examples in SMAC of what kinds of common patterns the agents learn, to support this idea.
Since REFIL can be integrated into current MARL algorithms, it would be better to consider more recently published MARL methods as baselines, such as QTRAN, QATTEN, and QPLEX.
[1] Roma: Multi-agent reinforcement learning with emergent roles. ICML. 2020.
[2] Action Semantics Network: Considering the Effects of Actions in Multiagent Systems. ICLR. 2020. |
ICLR | Title
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
Abstract
Real world multi-agent tasks often involve varying types and quantities of agents and non-agent entities; however, agents within these tasks rarely need to consider all others at all times in order to act effectively. Factored value function approaches have historically leveraged such independences to improve learning efficiency, but these approaches typically rely on domain knowledge to select fixed subsets of state features to include in each factor. We propose to utilize value function factoring with random subsets of entities in each factor as an auxiliary objective in order to disentangle value predictions from irrelevant entities. This factoring approach is instantiated through a simple attention mechanism masking procedure. We hypothesize that such an approach helps agents learn more effectively in multi-agent settings by discovering common trajectories across episodes within sub-groups of agents/entities. Our approach, Randomized Entity-wise Factorization for Imagined Learning (REFIL), outperforms all strong baselines by a significant margin in challenging StarCraft micromanagement tasks.
N/A
1 INTRODUCTION
Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents, antagonist enemies or other entities. Agents, however, can often select their optimal actions while ignoring a subset of agents/entities. For example, in the sport of soccer, a “breakaway” occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score (see Figure 1). In this situation, only the opposing goalkeeper is immediately relevant to the attacker’s success, so the attacker can safely ignore players other than the goalkeeper for the time being. By ignoring irrelevant context, the attacker can generalize this experience better to its next breakaway. Furthermore, soccer takes many forms, from casual 5 vs. 5 to full scale 11 vs. 11 matches, and breakaways occur in all. If agents can identify independent patterns of
behavior such as breakaways, they should be able to learn more efficiently as well as share their experiences across all forms of soccer.
Value function factoring approaches attempt to leverage independences between agents, such as those in our soccer example, by learning value functions as a combination of independent factors that depend on disjunct subsets of the state and action spaces (Koller & Parr, 1999). These subsets are typically fixed in advance using domain knowledge about the problem at hand, and thus are not scalable to complex domains where dependencies are unknown and may shift over time. Recent approaches in cooperative deep multi-agent reinforcement learning (MARL) factor value functions into separate components for each agent’s action and observation space in order to enable decentralized execution (e.g., VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018)). These approaches learn a utility function for each agent that only depends on the agent’s own action and its observations. The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the
global Q. These approaches are able to effectively leverage independence between agents’ local actions and observations, however, we note that observable entities are provided by the environment and are not all necessarily relevant to an agent’s value function.
We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities. Unlike classic works which factor the state or observation spaces, our work does not depend on fixed subsets of features designated through domain knowledge. Instead, we propose to randomly select sub-groups of observed entities and “imagine” the predicted utilities within these groups for each agent. These terms will not account for potential interactions outside of the groups, so we include additional factors that estimate the effect of the entities outside of each sub-group on each agent’s utility. In order to estimate the true returns, we combine all factors using a mixing network (as in QMIX, Rashid et al., 2018), which allows our model to weight factors based on the full state context. We hypothesize this approach is beneficial for two reasons: 1) randomly partitioning entities and predicting returns from disjunct factors allows our model to explore all possible independence relationships among agents and entities, teaching agents to ignore irrelevant context when possible and 2) by teaching our models when they can ignore irrelevant context, they will learn more efficiently across varied settings that share common patterns of behavior, such as breakaways in soccer. The loss for training randomized factorization is added to the QMIX loss (i.e., using full observations) as an auxiliary objective. Our reasoning is again twofold: 1) we must learn the true returns to use as a target prediction for a Q-learning loss. 2) we do not know a priori which entities are unnecessary and thus need to learn policies that act on full observations.
Our entity-wise factoring procedure can be implemented easily in practice by using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize their behavior across settings (5 vs. 5, 11 vs. 11, etc.) if they ignore irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding it attains improved performance over state-of-the-art methods.
2 BACKGROUND AND PRELIMINARIES
In this work, we consider the decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019).
Dec-POMDPs with Entities are described as tuples (S, U, O, P, r, E, A, Φ, µ). E is the set of entities in the environment. Each entity e has a state representation s^e, and the global state is the set s = {s^e | e ∈ E} ∈ S. Some entities can be agents a ∈ A ⊆ E. Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity consist of two parts, s^e = [f^e, φ^e], where f^e represents the description of an entity’s current state (e.g., position, orientation, velocity, etc.) while φ^e ∈ Φ represents the entity’s type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity’s type affects the state dynamics as well as the reward function and, importantly, it remains fixed for the duration of the entity’s existence. Not all entities may be visible to each agent, so we define a binary observability mask µ(s^a, s^e) ∈ {1, 0}, where agents can always observe themselves: µ(s^a, s^a) = 1, ∀a ∈ A. Thus, an agent’s observation is defined as o^a = {s^e | µ(s^a, s^e) = 1, e ∈ E} ∈ O. Each agent a can execute actions u^a, and the joint action of all agents is denoted as u = {u^a | a ∈ A} ∈ U. P is the state transition function which defines the probability P(s′|s, u). r(s, u) is the reward function which maps the global state and joint actions to a single scalar reward.
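To make the entity-based formalism concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper) of how an entity state s^e = [f^e, φ^e], the observability mask µ, and the resulting agent observations could be represented; all sizes and variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_entities, feat_dim, n_types = 5, 4, 2
agent_ids = [0, 1]                                     # agents are a subset of the entities

f = rng.normal(size=(n_entities, feat_dim))            # f_e: position, velocity, ...
phi = np.eye(n_types)[rng.integers(n_types, size=n_entities)]  # phi_e: one-hot entity type
s = np.concatenate([f, phi], axis=1)                   # s_e = [f_e, phi_e]; global state is the set of rows

mu = rng.integers(0, 2, size=(len(agent_ids), n_entities)).astype(bool)  # observability mask
mu[np.arange(len(agent_ids)), agent_ids] = True        # agents always observe themselves

obs = {a: s[mu[i]] for i, a in enumerate(agent_ids)}   # o_a = {s_e | mu(s_a, s_e) = 1}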
We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft) in which case they no longer affect transitions and rewards. Since s and u are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present {φe|e ∈ E} is fixed across episodes. We are particularly interested in cases where quantity and types of entities are varied between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases.
Learning for Dec-POMDPs We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP. Q-learning is specifically concerned with learning an accurate action-value function Q^tot (defined below), and using this function to select the actions that maximize expected returns. The optimal Q-function for the Dec-POMDP setting is defined as:
Q^tot(s, u) := E[ Σ_{t=0}^{∞} γ^t r(s_t, u_t) | s_0 = s, u_0 = u, s_{t+1} ∼ P(·|s_t, u_t), u_{t+1} = argmax Q^tot(s_{t+1}, ·) ] = r(s, u) + γ E[ max Q^tot(s′, ·) | s′ ∼ P(·|s, u) ].   (1)
Partial observability is typically handled by using the history of actions and observations as a proxy for state, typically processed by a recurrent neural network (RNN, Hausknecht & Stone, 2015): Q^tot_θ(τ_t, u_t) ≈ Q^tot(s_t, u_t), where the trajectory (i.e., action-observation history) is τ^a_t := (o^a_0, u^a_0, . . . , o^a_t) and τ_t := {τ^a_t}_{a∈A}.
Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions that are trained by minimizing the loss function:
L(θ) := E[ ( y^tot_t − Q^tot_θ(τ_t, u_t) )^2 | (τ_t, u_t, r_t, τ_{t+1}) ∼ D ],   with   y^tot_t := r_t + γ Q^tot_θ̄( τ_{t+1}, argmax Q^tot_θ(τ_{t+1}, ·) ),   (2)
where θ̄ are the parameters of a target network that is copied from θ periodically to improve stability (Mnih et al., 2015) and D is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically ε-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize Q^tot_θ for the target network Q^tot_θ̄.
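As a rough illustration of Equation 2 with double Q-learning, the PyTorch snippet below computes the target y^tot_t by selecting the greedy action with the online network and evaluating it with the target network. It is a hedged sketch with toy tensors and our own function names, not the PyMARL implementation.

import torch

def double_q_loss(q_online_t, q_online_tp1, q_target_tp1, actions_t, rewards_t, gamma=0.99):
    # Q_theta(tau_t, .) evaluated at the actions actually taken
    chosen_q = q_online_t.gather(1, actions_t.unsqueeze(1)).squeeze(1)
    # u* = argmax_u Q_theta(tau_{t+1}, u), then evaluated under the target network Q_thetabar
    greedy_a = q_online_tp1.argmax(dim=1, keepdim=True)
    next_q = q_target_tp1.gather(1, greedy_a).squeeze(1)
    y_tot = rewards_t + gamma * next_q.detach()
    return torch.mean((y_tot - chosen_q) ** 2)

batch, n_actions = 32, 6
loss = double_q_loss(torch.randn(batch, n_actions), torch.randn(batch, n_actions),
                     torch.randn(batch, n_actions), torch.randint(0, n_actions, (batch,)),
                     torch.randn(batch))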
Value Function Factorization Centralized training for decentralized execution (CTDE) has been a major focus in recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, with each depending only on a single agent’s history of actions and observations, Q^a(τ^a, u^a). This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint action-value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: Q^tot(τ, u) := Σ_a Q^a(τ^a, u^a). QMIX (Rashid et al., 2018) extends this approach to use a more expressive factorization. We describe QMIX and how we build our randomized factorization approach on top of it in Section 3.1.
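The contrast between the two factorizations can be sketched in a few lines: VDN sums per-agent utilities, while a QMIX-style mixer combines them through a monotonic network whose weights are kept non-negative. The sketch below is our own illustration with fixed toy weights; in QMIX the mixing weights are produced by a state-conditioned hypernetwork (see Appendix B).

import torch
import torch.nn.functional as F

def vdn_mix(agent_qs):                                 # agent_qs: [batch, n_agents]
    return agent_qs.sum(dim=1)

def monotonic_mix(agent_qs, w1, b1, w2, b2):           # non-negative weights -> monotonic in each Q^a
    hidden = F.elu(agent_qs @ torch.abs(w1) + b1)
    return (hidden @ torch.abs(w2) + b2).squeeze(-1)

qs = torch.randn(4, 3)                                 # 4 samples, 3 agents
w1, b1 = torch.randn(3, 8), torch.zeros(8)
w2, b2 = torch.randn(8, 1), torch.zeros(1)
q_tot_vdn, q_tot_qmix = vdn_mix(qs), monotonic_mix(qs, w1, b1, w2, b2)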
Attention Mechanisms for MARL Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable sized sets of fixed length vectors (in our case entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context.
We define X as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state s can be represented in matrix form as X^E where X_{e,∗} = s^e. Our models consist of entity-wise feedforward layers (denoted as eFF(X)) and multi-head attention layers (denoted as MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities. Multi-head attention layers serve as a mechanism to integrate information across entities. These take in three arguments: the set of agents A for which to compute an output vector, the matrix X ∈ R^{|E|×d} where d is the dimensionality of the input representations, and a mask M ∈ R^{|A|×|E|}. The layer outputs a matrix H ∈ R^{|A|×h} where h is the hidden dimension of the layer. The row H_{a,∗} corresponds to a weighted sum of linearly transformed representations from all entities selected by agent a. Importantly, if the entry of the mask M_{a,e} = 0, then entity e’s representation cannot be included in H_{a,∗}. Masking serves two important purposes for us: 1) it enables decentralized execution by providing the mask M^µ_{a,e} = µ(s^a, s^e), such that agents can only see entities observable by them in the environment, and 2) it enables us to “imagine” the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX, are described in detail in the Appendix.
3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING
We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent’s choice of optimal actions is often independent of a subset of its observed entities (cf. soccer breakaway example from Section 1), in addition to the choice of other agents’ actions. Furthermore, we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents, as they are better able to identify shared patterns of behavior (e.g., breakaways exist in all forms of soccer). We do not know a priori which entities an agent can disregard, so we must consider all possible sub-groups of entities. As such, we propose to factor value functions by imagining returns in random sub-groups.
3.1 METHOD
QMIX (Rashid et al., 2018) relaxes the representational constraints of VDN (Sunehag et al., 2018) by allowing the joint value function Q^tot to be a non-linear monotonic function with respect to the agent-specific utilities Q^a: Q^tot = g(Q^1(τ^1, u^1; θ_Q), . . . , Q^{|A|}(τ^{|A|}, u^{|A|}; θ_Q); θ_g). The parameters of the mixing function θ_g are generated by a hyper-network (Ha et al., 2017) conditioning on the global state s: θ_g = h(s; θ_h). Every state can therefore have a different mixing function, but the mixing’s monotonicity maintains decentralizability, as agents can maximize Q^tot without communication. All parameters θ = {θ_Q, θ_h} are trained with the DQN loss of Equation 2. We extend QMIX with attention layers both to encode variable sized sets of entities observed by each per-agent utility Q^a and to mix the utilities of all agents a ∈ A. Partial observability is implemented by a mask M^µ_{a,e} = µ(s^a, s^e), ∀a ∈ A, ∀e ∈ E, that is provided to attention layers as described in Section 2. Building on QMIX, for each agent we generate a separate utility that only observes the state features of agents within its randomly selected sub-group, Q^a_I(τ^a_I, u^a), as well as a term that accounts for interactions outside of its group, Q^a_O(τ^a_O, u^a), then mix these 2n (2 for each agent) utilities to form Q^tot. Importantly, since the mixing network is generated by the full state context, our model can weight factors contextually. For example, if agent a’s sampled sub-group contains all relevant information to compute its utility such that Q^a_I ≈ Q^a, then the mixing network can weight Q^a_I more heavily than Q^a_O. Otherwise, the network learns to balance Q^a_I and Q^a_O for each agent, using the full state as context, in order to estimate Q^tot. We train with these random factorizations in addition to the original QMIX objective. Treating factorization as an auxiliary task, rather than as a representational constraint, allows our model to retain the expressivity of QMIX value functions (without sub-group partitions) while exploiting the potential independence between agents and other entities. We note that our auxiliary objective is only used in training, and execution in the environment does not use random factorization.
3.2 IMPLEMENTATION
The mechanism behind our entity-wise factorization relies on a simple attention masking procedure. In order to compute in-group utilities Q^a_I(τ^a_I, u^a) and out-group utilities Q^a_O(τ^a_O, u^a), we first randomly partition all entities in E into two disjunct groups (held fixed for an episode), indicated by a random binary¹ vector m ∈ {0, 1}^{|E|}. The entry m_e determines whether entity e is in the first group, and we can take the negation ¬m_e to represent whether e is in the second group. The subset of all agents is denoted as m_A := [m_a]_{a∈A}. From these vectors, we can construct attention masks M ∈ R^{|A|×|E|}. For example, using the mask M_1 = m_A m^⊤ would prevent agents in the first group from “seeing” outside their group, since (M_1)_{a,e} = 1 only if agent a and entity e are in the same group. This can be added to a similarly produced mask M_2 = ¬m_A ¬m^⊤ to create M_I, a mask that only allows all agents to see the entities within their distinct groups. We construct masks for agents to see within (M_I) and out of (M_O) their groups, then combine with observability masks M^µ as such:
M^µ_I := M^µ ∧ M_I,   M^µ_O := M^µ ∧ M_O,   with   M_I := m_A m^⊤ ∨ ¬m_A ¬m^⊤,   M_O := ¬M_I.   (3)
¹ We first draw p ∈ (0, 1) uniformly, followed by |E| independent draws from a Bernoulli(p) distribution.
The entry M^µ_I[a, e] determines both whether agent a can see entity e and whether entity e is in agent a’s group; the entry M^µ_O[a, e] is the same but for entities out of a’s group. We can use these masks in our attention mechanisms to compute Q^a_I(τ^a_I, u^a), which represents the predicted utility of agent a within its group, and Q^a_O(τ^a_O, u^a), a residual term that accounts for the utility of interactions that a would have with the other group.
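The partitioning itself reduces to a few array operations. The NumPy sketch below (our own, with hypothetical sizes) draws the random vector m as in footnote 1, builds M_I and M_O following Equation 3, and intersects them with an observability mask M^µ.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_entities = 3, 6                            # agents are the first entities in this toy example

p = rng.uniform(0.0, 1.0)
m = rng.random(n_entities) < p                         # m in {0,1}^{|E|}, fixed for the episode
m_A = m[:n_agents]                                     # restriction of m to agents

M_I = np.outer(m_A, m) | np.outer(~m_A, ~m)            # see only entities in the same group
M_O = ~M_I                                             # see only entities in the other group

M_mu = rng.integers(0, 2, size=(n_agents, n_entities)).astype(bool)  # environment observability
M_mu[np.arange(n_agents), np.arange(n_agents)] = True  # agents always see themselves

M_mu_I = M_mu & M_I                                    # masks passed to the attention layers
M_mu_O = M_mu & M_O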
Given each agent’s predicted utility factors for both in-group and out-of-group, we combine these into a Q^tot such that we can use the target from the full scenario (y^tot_t in (2)) using a mixing network as in QMIX. This network’s first layer typically takes n inputs, one for each agent. Since we have 2n factors, we simply concatenate two generated versions of the input layer (using M_I and M_O). We then apply the network to the concatenated utilities Q^a_I(τ^a_I, u^a) and Q^a_O(τ^a_O, u^a) of all agents a, to compute the predicted value Q^tot_aux. This procedure is visualized in Figure 2 and described in more detail in the Appendix.
Our novel approach REFIL uses Q^tot_aux in place of Q^tot in the DQN loss of (2) to get the auxiliary loss L_aux. Our total loss combines both real and auxiliary losses: L := (1 − λ)L_Q + λL_aux, where λ is a hyper-parameter. In practice, this procedure requires two additional passes through the network (with M^µ_O and M^µ_I as masks instead of M^µ) per training step. These additional passes can be parallelized by computing all necessary quantities in one batch on GPU. It is feasible to split entities into an arbitrary number i of random sub-groups without using more computation by sampling several disjunct vectors m_i and combining them in the same way as we combine m and ¬m in Equation 3 to form M_I and M_O. Doing so could potentially bias agents towards considering smaller subsets of entities.
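In code, the auxiliary objective amounts to one extra squared-error term against the same target. The following hedged sketch (ours) assumes Q^tot and Q^tot_aux have already been computed with the masks M^µ and (M^µ_I, M^µ_O) respectively; here they are placeholder tensors.

import torch

def refil_loss(q_tot, q_tot_aux, y_tot, lam=0.5):
    l_q = torch.mean((q_tot - y_tot) ** 2)        # standard loss with full observations
    l_aux = torch.mean((q_tot_aux - y_tot) ** 2)  # imagined, randomly factored prediction, same target
    return (1.0 - lam) * l_q + lam * l_aux

loss = refil_loss(torch.randn(32), torch.randn(32), torch.randn(32), lam=0.5)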
4 EXPERIMENTAL RESULTS
In our experiments, we aim to justify the main components of REFIL: 1) randomized sub-group factorization and 2) training as an auxiliary objective. We begin with experiments in a simple domain we construct such that agents’ decisions rely only on a subset of all entities, and that subset is known, so we can compare our approach to approaches that use this domain knowledge. Then, we move
on to testing on complex StarCraft micromanagement tasks to demonstrate our method’s ability to scale to complex domains.
4.1 GROUP MATCHING GAME
We construct a group matching game, pictured in Figure 3a, where each agent only needs to consider a subset of other agents to act effectively and we know that subset as ground-truth (unlike in more complex domains such as StarCraft). As such, the task can be described as follows: Agents (of which there are n_a) are randomly placed in one of n_c cells and assigned to one of n_g groups (represented by the different colors) at the start of each episode. They can choose from three actions: move clockwise, stay, and move counter-clockwise. Their ultimate goal is to be located in the same cell as the rest of their group members, at which point an episode ends. There is no restriction on which cell agents form a group in (e.g., both groups can form in the same cell). All agents share a reward of 2.5 when any group is completed (and an equivalent penalty for a formed group breaking) as well as a penalty of -0.1 for each time step in order to encourage agents to solve the task as quickly as possible. Agents’ entity-state descriptions s^e include the cell that the agent is currently occupying as well as the group it belongs to (both one-hot encoded), and the task is fully-observable. Notably, agents can act optimally while ignoring agents outside of their group.
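For concreteness, a possible encoding of the game's entity-state descriptors and reward is sketched below; the sizes and the reward bookkeeping are our own illustrative choices and not the exact environment code.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_cells, n_groups = 4, 8, 2
cells = rng.integers(n_cells, size=n_agents)           # current cell of each agent
groups = rng.integers(n_groups, size=n_agents)         # fixed group assignment for the episode

# s_e = [one-hot cell, one-hot group]
s = np.concatenate([np.eye(n_cells)[cells], np.eye(n_groups)[groups]], axis=1)

def step_reward(n_newly_completed_groups, n_newly_broken_groups):
    # completion bonus, breaking penalty, and a -0.1 time penalty each step
    return 2.5 * n_newly_completed_groups - 2.5 * n_newly_broken_groups - 0.1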
Ground-truth knowledge of relevant entities enables us to disentangle two aspects of our approach: the use of entity-wise factorization in general and specifically using randomly selected factors. We construct two approaches that use this knowledge to build factoring masks MI and MO which are used in place of randomly sampled groups (otherwise the methods are identical to REFIL). REFIL (Fixed Oracle) directly uses the ground truth group assignments (different at each episode) to build masks. REFIL (Randomized Oracle) randomly samples sub-groups from the ground truth groups only, rather than from all possible entities. We additionally train REFIL and QMIX (Attention) (i.e., REFIL with no auxiliary loss).
Figure 3b shows that using domain knowledge alone does not significantly improve performance in this domain (QMIX (Attention) vs. REFIL (Fixed Oracle)). In fact our randomized factorization approach is able to surpass the use of domain knowledge. The randomization in REFIL appears therefore to be crucial. One hypothesis for this phenomenon is that randomization of sub-group factors enables better knowledge sharing across diverse settings (in this case unique group assignments). For example, the situation of two agents from the same group being located in adjacent cells occurs within all possible group assignments. If sampling randomly, our approach will occasionally sample these two agents alone in their own group. Even if the rest of the context in a given episode has never been seen by the model before, as long as this sub-scenario has been seen, the model has some indication of the value associated with each action. Even when restricting the set of entities to form sub-groups with to those that we know can be relevant to each agent (REFIL (Randomized Oracle)) we find that performance does not significantly improve. These results suggest that randomized sub-group formation for REFIL is a viable strategy (vs attempting to learn which entities are relevant and selecting sub-groups from there), and the main benefit of our approach is to promote generalization across scenarios by breaking value function predictions into reusable components.
4.2 STARCRAFT
We next test on the StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019). The tasks in SMAC involve micromanagement of units in order to defeat a set of enemy units in battle. Specifically, we extend SMAC to settings with variable types and quantities of agents. We hypothesize that our approach is especially beneficial in this setting, as it should encourage models to identify independence between entities and generalize to more diverse settings as a result. The dynamic setting requires some small modifications to SMAC, though we aim to change the environment as little as possible to maintain the challenging nature of the tasks. In the standard version of SMAC, both state and action spaces depend on a fixed number of agents and enemies, so our modifications, discussed in detail in the appendix, alleviate these problems.
In our tests we evaluate on three settings we call 3-8sz, 3-8csz, and 3-8MMM. 3-8sz pits symmetrical teams of between 3 and 8 agents against each other where the agents are a combination of Zealots and Stalkers (similar to the 2s3z and 3s5z tasks in the original SMAC). 3-8csz pits symmetrical teams of between 0 and 2 Colossi and 3 to 6 Stalkers/Zealots against each other (similar to 1c3s5z). 3-8MMM pits symmetrical teams of between 0 and 2 Medics and 3 to 6 Marines/Marauders against each other (similar to MMM and MMM2). As a sanity check, we additionally modify our approach to work with non-attention models such that we can test on the original SMAC tasks against existing methods. These results (located in the appendix) show that we can significantly improve on QMIX (previously state-of-the-art) in 2 of 3 settings tested.
Ablations and Baselines We introduce several ablations of our method, as well as adaptations of existing methods to handle variable sized inputs. These comparisons are summarized in Table 1. QMIX (Attention) is our method without the auxiliary loss. REFIL (VDN) is our approach using summation to combine all factors (a la Value Decomposition Networks (Sunehag et al., 2018)) rather than a non-linear monotonic mixing network. VDN (Attention) does not include the auxiliary loss and uses summation as factor mixing. QMIX (Mean Pooling) is QMIX (Attention) with attention layers replaced by mean pooling. We also test max
pooling but find the performance to be marginally worse than mean pooling. Importantly, for pooling layers we add entity-wise linear transformations prior to the pooling operations such that the total number of parameters is comparable to attention layers.
For baselines we consider some follow-up works to QMIX that attempt to improve the mixing network to be more expressive: QTRAN (Son et al., 2019) and Qatten (Yang et al., 2020). We additionally consider an alternative mechanism for aggregating information across variable sets of entities, known as Entity Message Passing (EMP) (Agarwal et al., 2019). We specifically use the restricted communication setting where agents can only communicate with agents they observe, and we set the number of message passing steps to 3. Finally, we compare to a method that builds on QMIX by attempting to learn dynamic roles that depend on the context each agent observes: ROMA (Wang et al., 2020a). For all approaches designed for the standard SMAC setting, we extend them with the same multi-head attention architecture that our approach uses.
Results and Discussion Our results on challenges in dynamic STARCRAFT settings can be found in Figure 4. We find that REFIL outperforms all ablations consistently in these settings. REFIL (VDN) performs much worse than our approach and VDN (Attention), highlighting the importance of the mixing network to handle contextual dependencies between entity partitions. Since the trajectory of a subset of entities can play out differently based on the surrounding context, it’s important for our factorization approach to recognize and adjust for these situations. The mixing network handles these dependencies by a) incorporating global state information into the mixing procedure, and b) mixing utilities in a non-linear monotonic fashion, rather than summing as in VDN. As such, the increased representative capacity of the QMIX mixing network, relative to VDN, is crucial. The use of mean-pooling in place of attention also performs poorly, indicating that attention is valuable for aggregating information from variable length sets of entities.
With respect to the baselines, we also find that REFIL consistently outperforms other methods, highlighting the unique challenge of learning in such dynamic settings where entity types are variable at each episode. The improvements that ROMA, Qatten, and QTRAN see in other settings over QMIX do not appear to manifest in this setting. Moreover, the entity aggregation method of EMP does not improve performance over the standard MHA module that we use, likely due to the fact that EMP is most effective in settings where partial observability is a major hindrance to successful task completion. In this way, the targets of EMP and REFIL are opposite, as the goal of REFIL is to ignore extraneous information when possible during training to improve knowledge transfer.
In order to understand the role of training as an auxiliary objective (rather than entirely replacing the objective) we vary the value of λ to interpolate between two modes: λ = 0 is simply QMIX (Attention), while λ = 1 trains exclusively with random factorization. Our results (Figure 5) show that, similar to regularization methods such as Dropout (Srivastava et al., 2014), there is a sweet spot where performance is maximized before collapsing catastrophically. Training exclusively with random factorization does not learn anything significant. This failure is likely due to the fact that we use the full context in our targets for learning with imagined scenarios as well as when executing our policies, so we still need to learn with it in training.
Finally, we consider a qualitative experiment to highlight the sort of common patterns that REFIL is able to leverage (Figure 6). Zealots (the only melee unit present) are weak to Colossi, so they learn to hang back and let other units engage first. Then, they jump in and intercept the enemy Zealots while all other enemy units are preoccupied, leading to a common pattern of a Zealot vs. Zealot skirmish (highlighted at t=15). REFIL enables behaviors learned in these types of sub-groups to be applied more effectively across all unique unit type configurations. By sampling groups from all entities randomly, we will occasionally end up with sub-groups that include only Zealots, and the value function predictions learned in these sub-groups can be applied not only to the episode at hand, but to any episode where a similar pattern emerges.
5 RELATED WORK
Multi-agent reinforcement learning (MARL) is a broad field encompassing cooperative (Foerster et al., 2018; Rashid et al., 2018; Sunehag et al., 2018), competitive (Bansal et al., 2018; Lanctot
et al., 2017), and mixed (Lowe et al., 2017; Iqbal & Sha, 2019) settings. This paper focuses on cooperative MARL with centralized training and decentralized execution (Oliehoek et al., 2016, CTDE). Our approach utilizes value function factorization, an approach aiming to simultaneously overcome limitations of both joint (Hausknecht, 2016) and independent (Claus & Boutilier, 1998) learning paradigms. Early attempts at value function factorisation require a priori knowledge of suitable per-agent team reward decompositions or interaction dependencies. These include optimising over local compositions of individual Q-value functions learnt from individual reward functions (Schneider et al., 1999), as well as summing individual Q-functions with individual rewards before greedy joint action selection (Russell & Zimdars, 2003). Guestrin et al. (2002); Kok & Vlassis (2006) factorise the total Q-value function using coordination graphs based on interaction dependencies for the task at hand, similarly to max-plus approaches (Kuyer et al., 2008; Pol & Oliehoek, 2016). Recent approaches from cooperative deep multi-agent RL allow for value factorisations to be learnt from experience from a single team reward function and no prior knowledge of interaction dependencies. Value-Decomposition Networks (VDN) (Sunehag et al., 2018) decompose the joint Q-value function into a sum of local utility functions used for greedy action selection. QMIX (Rashid et al., 2018) extends such additive decompositions to general monotonic functions. Several works extend QMIX to improve the expressivity of mixing functions (Son et al., 2019; Yang et al., 2020), learn latent embeddings to help exploration (Mahajan et al., 2019) or learn dynamic roles (Wang et al., 2020a), and encode knowledge of action semantics into network architectures (Wang et al., 2020b).
Several recent works have addressed the topic of generalization and transfer across environments with varying agent quantities, though the learning paradigms considered and assumptions made differ from our approach. Carion et al. (2019) devise an approach for assigning agents to tasks, assuming the existence of low-level controllers to carry out the tasks, and show that it can scale to much larger scenarios than those seen in training. Burden (2020) propose a transfer learning approach using convolutional neural networks and grid-based state representations to scale to scenarios of arbitrary size. Agarwal et al. (2019) introduce an entity message passing framework to enable agents to attend to specific entities, of which there may be a variable amount, based on their local context, similar to the multi-head attention module we use in our approach. Several approaches devise attention or graph-neural-network based models for handling variable sized inputs and focus on learning curricula to progress on increasingly large/challenging settings (Long et al., 2020; Baker et al., 2019; Wang et al., 2020c). In contrast to these curriculum learning approaches, we focus on training simultaneously on scenarios of varying sizes and specifically focus on developing a training paradigm for improving knowledge sharing across such settings to accelerate learning.
6 CONCLUSION
In this paper we consider a MARL setting where we aim to learn policies to control teams of agents in scenarios with varying types and quantities of entities. We propose REFIL, an approach that regularizes value functions to identify independence relationships between entities, in turn promoting generalization and knowledge transfer within and across multi-agent settings with varying quantities of agents. Our results show that our contributions yield performance improvements in complex cooperative tasks. In future work, we hope to explore alternative methods for learning independence relationships between entities beyond randomized partitions.
A ATTENTION LAYERS AND MODELS
Attention models have recently generated intense interest due to their ability to incorporate information across large contexts. Importantly for our purposes, they are able to process variable sized sets of inputs.
We now formally define the building blocks of our attention models. Given the input X , a matrix where the rows correspond to entities, we define an entity-wise feedforward layer as a standard fully connected layer that operates independently and identically over entities:
eFF(X; W, b) = XW + b^⊤,   X ∈ R^{n_x×d}, W ∈ R^{d×h}, b ∈ R^h.   (4)
Now, we specify the operation that defines an attention head, given the additional inputs of S ⊆ Z_{[1,n_x]}, a set of indices that selects which rows of the input X are used to compute queries such that X_S ∈ R^{|S|×d}, and M, a binary observability mask specifying which entities each query entity can observe (i.e., M_{i,j} = 1 when i ∈ S can incorporate information from j ∈ Z_{[1,n_x]} into its local context):
Atten(S, X, M; W^Q, W^K, W^V) = softmax( mask( QK^⊤ / √h, M ) ) V ∈ R^{|S|×h},   (5)
Q = X_{S,∗} W^Q,   K = X W^K,   V = X W^V,   M ∈ {0, 1}^{|S|×n_x},   W^Q, W^K, W^V ∈ R^{d×h}.   (6)
The mask(Y, M) operation takes two equal sized matrices and fills the entries of Y with −∞ in the indices where M is equal to 0. After the softmax, these entries become zero, thus preventing the attention mechanism from attending to specific entities. This masking procedure is used in our case to uphold partial observability. Only one attention layer is permitted in the decentralized execution setting; otherwise information from unseen agents can be propagated through agents that are seen. W^Q, W^K, and W^V are all learnable parameters of this layer. Queries, Q, can be thought of as vectors specifying the type of information that an entity would like to select from others, while keys, K, can be thought of as specifying the type of information that an entity possesses, and finally, values, V, hold the information that is actually shared with other entities.
We define multi-head attention as the parallel computation of attention heads as such:
MHA(S, X, M) = concat( Atten(S, X, M; W^Q_j, W^K_j, W^V_j),  j ∈ (1 . . . n_h) ).   (7)
The size of the parameters of an attention layer does not depend on the number of input entities. Furthermore, we receive an output vector for each query vector.
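A compact PyTorch sketch of a single masked attention head, following Eqs. (5)-(6), is given below; variable names and sizes are ours, and the full model would stack several heads and entity-wise feedforward layers.

import torch

def attention_head(X, S, M, Wq, Wk, Wv):
    # X: [n_x, d] entity representations; S: indices of query entities; M: [|S|, n_x] boolean mask
    Q, K, V = X[S] @ Wq, X @ Wk, X @ Wv                 # [|S|, h], [n_x, h], [n_x, h]
    logits = (Q @ K.T) / K.shape[-1] ** 0.5             # [|S|, n_x]
    logits = logits.masked_fill(~M, float("-inf"))      # masked entries receive zero attention weight
    return torch.softmax(logits, dim=-1) @ V            # [|S|, h]

n_x, d, h = 6, 4, 8
X = torch.randn(n_x, d)
S = torch.tensor([0, 1])                                # two agents as queries
M = torch.ones(len(S), n_x, dtype=torch.bool)           # fully observable in this toy example
out = attention_head(X, S, M, torch.randn(d, h), torch.randn(d, h), torch.randn(d, h))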
B AUGMENTING QMIX WITH ATTENTION
The standard QMIX algorithm relies on a fixed number of entities in three places: inputs of the agent-specific utility functions Q^a, inputs of the hypernetwork, and the number of utilities entering the mixing network, that is, the output of the hypernetwork. QMIX uses multi-layer perceptrons for which all these quantities have to be of fixed size. In order to adapt QMIX to the variable agent quantity setting, such that we can apply a single model across all episodes, we require components that accept variable sized sets of entities as inputs. By utilizing attention mechanisms, we can design components that are no longer dependent on a fixed number of entities taken as input. We define the following inputs: X^E_{e,i} := s^e_i, 1 ≤ i ≤ d, e ∈ E; M^µ_{a,e} := µ(s^a, s^e), a ∈ A, e ∈ E. The matrix X^E is the global state s reshaped into a matrix with a row for each entity, and M^µ is a binary observability matrix which enables decentralized execution, determining which entities are visible to each agent.
B.1 UTILITY NETWORKS
While the standard agent utility functions map a flat observation, whose size depends on the number of entities in the environment, to a utility for each action, our attention-utility functions can take in a variable sized set of entities and return a utility for each action. The attention layer output for agent a is computed as MHA({a}, X, M^µ), where X is a row-wise transformation of X^E (e.g., an entity-wise feedforward layer). If agents share parameters, the layer can be computed in parallel for all agents by providing A instead of {a}, which we do in practice.
B.2 GENERATING DYNAMIC SIZED MIXING NETWORKS
Another challenge in devising a QMIX algorithm for variable agent quantities is to adapt the hypernetworks that generate weights for the mixing network. Since the mixing network takes in utilities from each agent, we must generate feedforward mixing network parameters that change in size depending on the number of agents present, while incorporating global state information. Conveniently, the number of output vectors of a MHA layer depends on the cardinality of input set S and we can therefore generate mixing parameters of the correct size by using S = A and concatenating the vectors to form a matrix with one dimension size depending on the number of agents and the other depending on the number of hidden dimensions. Attention-based QMIX (QMIX (Attention)) trains these models using the standard DQN loss in Equation 2.
Our two-layer mixing network requires the following parameters to be generated: W_1 ∈ R_+^{|A|×h^m}, b_1 ∈ R^{h^m}, w_2 ∈ R_+^{h^m}, b_2 ∈ R, where h^m is the hidden dimension of the mixing network and |A| is the number of agents. Note from Eq. (5) that the output size of the layer is dependent on the size of the query set. As such, using attention layers, we can generate a matrix of size |A| × h^m by specifying the set of agents, A, as the set of queries S from Eq. (5). We do not need observability masking since hypernetworks are only used during training and can be fully centralized. For each of the four components of the mixing network (W_1, b_1, w_2, b_2), we introduce a hypernetwork that generates parameters of the correct size. Thus, for the parameters that are vectors (b_1 and w_2), we average the matrix generated by the attention layer across the |A|-sized dimension, and for b_2, we average all elements. This procedure enables the dynamic generation of mixing networks whose input size varies with the number of agents. Assuming q = [Q^1(τ^1, u^1), . . . , Q^n(τ^n, u^n)], Q^tot is computed as:
Q^tot(s, τ, u) = σ((q^⊤ W_1) + b_1^⊤) w_2 + b_2,   (8)
where σ is an ELU nonlinearity (Clevert et al., 2015).
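As a rough sketch of this procedure (ours, heavily simplified), a single attention pass with the agents as queries can produce the |A| × h^m matrix used as W_1, while vector parameters are obtained by averaging over the agent dimension; Eq. (8) then mixes the agent utilities. Hypernetwork weights below are random placeholders, not trained parameters.

import torch
import torch.nn.functional as F

def hyper_mixing_params(entity_repr, agent_idx, Wq, Wk, Wv):
    Q, K, V = entity_repr[agent_idx] @ Wq, entity_repr @ Wk, entity_repr @ Wv
    A = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1) @ V   # [|A|, h^m]
    W1 = torch.abs(A)                        # non-negative weights keep the mixing monotonic
    b1 = A.mean(dim=0)                       # average over agents -> vector in R^{h^m}
    return W1, b1

def mix(q, W1, b1, w2, b2):                  # Eq. (8)
    return F.elu(q @ W1 + b1) @ w2 + b2

n_entities, n_agents, d, hm = 6, 3, 4, 8
X = torch.randn(n_entities, d)
W1, b1 = hyper_mixing_params(X, torch.arange(n_agents), torch.randn(d, hm), torch.randn(d, hm), torch.randn(d, hm))
q = torch.randn(n_agents)                    # per-agent utilities
q_tot = mix(q, W1, b1, torch.abs(torch.randn(hm)), torch.randn(1))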
C ENVIRONMENT DETAILS
C.1 STARCRAFT WITH VARIABLE AGENTS AND ENEMIES
The standard version of SMAC loads map files with pre-defined and fixed unit types, where the global state and observations are flat vectors with segments corresponding to each agent and enemy. Partial observability is implemented by zeroing out segments of the observations corresponding to unobserved agents. The size of these vectors changes depending on the number of agents placed in the map file. Furthermore, the action space consists of movement actions as well as separate actions to attack each enemy unit. As such the action space also changes as the number of agents changes.
Our version loads empty map files and programmatically generates agents, allowing greater flexibility in terms of the units present to begin each episode. The global state is split into a list of equal-sized entity descriptor vectors (for both agents and enemies), and partial observability is handled by generating a matrix that shows what entities are visible to each agent. The variable-sized action space is handled by randomly assigning each enemy a tag at the beginning of each episode and designating an action to attack each possible tag, of which there are a maximum number (i.e. the maximum possible number of enemies across all initializations). Agents are able to see the tag of the enemies they observe and can select the appropriate action that matches this tag in order to attack a specific enemy.
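A toy sketch of the tag mechanism (ours, not the benchmark code) is shown below: each enemy receives a random tag at episode start, and an agent's available attack actions are those whose tag matches a currently visible enemy.

import numpy as np

rng = np.random.default_rng(0)
MAX_ENEMIES = 8                                        # maximum enemies over all initializations
n_move_actions = 6                                     # e.g., no-op, stop, four move directions
n_enemies = 5

tags = rng.permutation(MAX_ENEMIES)[:n_enemies]        # random tag per enemy for this episode
attack_action_for_tag = {t: n_move_actions + t for t in range(MAX_ENEMIES)}

visible_enemies = [0, 2]                               # enemy indices this agent currently observes
available_attacks = [attack_action_for_tag[tags[e]] for e in visible_enemies]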
D EXPERIMENTAL DETAILS
Our experiments were performed on a desktop machine with a 6-core Intel Core i7-6800K CPU and 3 NVIDIA Titan Xp GPUs, and a server with 2 16-core Intel Xeon Gold 6154 CPUs and 10 NVIDIA Titan Xp GPUs. Each experiment is run with 8 parallel environments for data collection and a single GPU. REFIL takes about 24 hours to run for 10M steps on STARCRAFT. QMIX (Attention) takes
about 16 hours for the same number of steps on STARCRAFT. Reported times are on the desktop machine and the server runs approximately 15% faster due to more cores being available for running the environments in parallel.
E HYPERPARAMETERS
Hyperparameters were based on the PyMARL (Samvelyan et al., 2019) implementation of QMIX and are listed in Table 2. All hyperparameters are the same in all STARCRAFT settings. Since we train for 10 million timesteps (as opposed to the typical 2 million in standard SMAC), we extend the epsilon annealing period (for epsilon-greedy exploration) from 50,000 steps to 500,000 steps. For hyperparameters new to our approach (hidden dimensions of attention layers, number of attention heads, λ weighting of imagined loss), the specified values in Table 2 were the first values tried, and we found them to work well. The robustness of our approach to hyperparameter settings, as well as the fact that we do not tune hyperparameters per environment, is a strong indicator of the general applicability of our method.
F ADDITIONAL RESULTS
We test a modified non-attention version of our approach along with state of the art methods on the standard version of SMAC, where entity types are constant at the start of each episode. Since the number and type of agents and enemies is constant at each episode, observations and states can be represented as fixed-size vectors. We can thus use MLPs as models (as is standard in the literature) for these tasks and adapt our approach to suit this setting while comparing to unmodified versions of existing approaches. Rather than masking an attention mechanism, we simply zero out the features in the observations and states that correspond to entities we would like to mask out. These experiments are performed in order to compare our approach to results validated in the literature.
We compare against QMIX (Rashid et al., 2018) and VDN (Sunehag et al., 2018), as well as an ablation of our approach that uses additive mixing (a la VDN) of entity partition factors instead of a mixing network which we call REFIL (VDN). We use the architectures and hyperparameters from the QMIX (Rashid et al., 2018) paper in these settings.
Results can be found in Figure 7. While we expect REFIL to be most effective in the setting of varying types and quantities of agents, we still find that it improves on QMIX in 2 of the 3 scenarios tested. In the standard SMAC benchmarks, we find our approach is able to match or outperform the best baseline across all settings. Specifically, our factorization method (which builds on QMIX) improves the performance of QMIX in 2 of 3 settings tested. As far as we are aware REFIL outperforms all reported results in the literature on “2c vs 64 zg” (clasified as a “hard” task in SMAC). The relative improvement over QMIX, combined with the fact that it does not ever appear to hurt performance, indicates that the benefits of our method are not limited to settings with varying types and quantities of agents, though the positive effects are more pronounced there. | 1. What are the strengths and weaknesses of the proposed randomized entity-based attentional mechanism for multi-agent reinforcement learning?
2. How does the paper's approach compare to related works, specifically Agarwal et al. (2020)?
3. Are there any concerns regarding the scalability of the proposed method, especially in terms of the search space and computational complexity?
4. Do you have any suggestions for improving the paper or its contributions? | Review | Review
This paper introduces a randomized entity-based attentional mechanism to regularize the observation space for efficient multi-agent reinforcement learning. Specifically, the authors expect that their method can help agents focus on entities that are relevant to their decision-making process. The aim of the paper is well-positioned in multi-agent settings and is expected to help improve performance by exploiting the loosely coupled structure of multi-agent tasks (although the authors do not explicitly model the decision dependency among agents). However, I have some doubts about whether the proposed method can address the target of the paper.
Intuitively, in multi-agent settings, the number of entities is at least the number of agents, say O(n). O(n) is a quite optimistic estimate because we are not even considering other entities. If the authors want to find an optimal bi-partition over the entity space for each agent, the search space is at least O(2^n), which grows exponentially with the number of agents. Designing an efficient search algorithm over such a large space requires taking advantage of some well-designed inductive bias or heuristics. The authors use a random strategy here, which, in my opinion, is not sufficient to guarantee a satisfactory solution. There is no denying that randomization can give a good solution in some cases, but this cannot be held as a general rule. Even if the bi-partition structure is trained end-to-end, I also suspect that learning such a structure is not easier than learning from scratch.
Additionally, I think the authors largely ignore the contribution of a related work [Agarwal et al., 2020]. Although they cite this paper in Sec. 5, unlike what is stated in this paper, the main contribution of [Agarwal et al., 2020] is a GNN-based attentional mechanism over entity spaces. They propose to let agents learn to attend to different entities under different observations. In this way, the targets of [Agarwal et al., 2020] and this paper largely overlap. I was expecting the authors to provide a thorough comparison with [Agarwal et al., 2020] in their experiments. If the authors can demonstrate that their method can outperform [Agarwal et al., 2020], I will consider improving my rating.
[Agarwal et al., 2020] Agarwal, A., Kumar, S., Sycara, K. and Lewis, M., 2020, May. Learning Transferable Cooperative Behavior in Multi-Agent Teams. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1741-1743). |
ICLR | Title
Disentangled generative models for robust dynamical system prediction
Abstract
Deep neural networks have become increasingly of interest in dynamical system prediction, but out-of-distribution generalization and long-term stability still remain challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous OOD evaluations.1 Results indicate that disentangled VAEs adapt better to domain parameter spaces that were not present in the training data. At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models in video sequences.2
1 INTRODUCTION
The robust prediction of dynamical systems behaviour remains an open question in machine learning, and engineering in general. The ability to make robust predictions is important not only for forecasting systems of interest like weather (Garg et al., 2021; Ravuri et al., 2021) but even more so because it enables innovations in fields like system control, autonomous planning (Hafner et al., 2018) and computer aided engineering (Brunton et al., 2020). In this context, the use of deep generative models has recently gained significant traction for sequence modelling (Girin et al., 2020).
Robustness of machine learning models can be considered along two axes: long-term prediction and out-of-distribution (OOD) performance. Accurate long-term prediction can be notoriously difficult in many dynamical systems, because error accumulation can diverge in finite time (Zhou et al., 2020; Raissi et al., 2019), a problem that even traditional solvers can suffer from. More importantly, machine learning techniques are known to suffer from poor OOD performance (Goyal & Bengio, 2020), i.e. when they are employed in a setting they had not encountered during the training phase.
Before addressing the OOD problem, we must first define what constitutes OOD in dynamical systems. We start from the observation that even simple dynamical systems, i.e. the swinging pendulum or the n-body system, can have multiple continuous parameters that affect their evolution. These parameters can be manifested as differential equation coefficients, boundary or initial conditions, etc. Our starting point is to consider distinct ranges of those parameters as separate domains. Under this view, it becomes apparent why OOD prediction of dynamical systems can be hard: capturing the whole range of those parameters in a single training set is unrealistic (Fotiadis et al., 2020) and further inductive biases are required (Miladinović et al., 2019; Bird & Williams, 2019; Barber et al., 2021). From a dynamical system point of view, different parameters can produce widely different trajectories in phase space. A motivating example is bifurcations, which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour.
We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters. Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting (Bengio et al., 2012). System identification can be used to extract parameters, but
1Code for reproducing our experiments at: https://anonymous.4open.science/r/ dis-dyn-systems/
2Animated phase-space and video predictions are available at: https://bit.ly/dis-dyn-systems
requires knowledge of the underlying system to be computationally effective (Ayyad et al., 2020). We, instead, leverage advances in Variational Autoencoders (VAEs) (Kingma & Welling, 2014) that enable learning disentangled representations. Disentanglement enables different latent variables to focus on different factors of variation of the data distribution, and has been applied in the context of image generation (Higgins et al., 2017; Kim & Mnih, 2018). This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective: from all the generative models which can have the same marginal distribution, identify the one with the true causal factors. To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation. Recent findings (Locatello et al., 2018) emphasize the vital role of inductive biases from models or data for useful disentanglement. Unsupervised disentanglement, based on the assumption of domain stationarity, is a promising direction (Miladinović et al., 2019; Li & Mandt, 2018). Nevertheless, this leaves a wealth of ground truth domain parameters, which can be cheaply collected in simulated datasets. This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks (Sarafianos et al., 2017; Lee et al., 2018). We thus use supervised disentanglement (Locatello et al., 2019) by leveraging the ground truth domain parameters. To the best of our knowledge, using domain parameters information this way, has not been previously explored in the dynamical system prediction setting.
Contributions While others have treated domain parameters as factors of variation in the data distribution, our work is the first, to the best of our knowledge, that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way. We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum. Disentanglement has, in the past, been mostly applied to VAEs because they are easily amenable to it. We additionally apply disentanglement on a more powerful, hybrid, model with both stochastic and deterministic parts (Hafner et al., 2018). In doing so, we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction (Saxena et al., 2021). In all cases, the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift. To our understanding, this is the first time such a rigorous OOD test is performed. Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD. In modelling dynamics in video sequences, results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction.
Limitations This work focuses on dynamical system prediction. While the results can potentially open up many applications in general time-series modelling, this is out of the scope of this work. We prioritize to empirically study OOD downstream task performance and the inspection of the disentangled representations with appropriate metrics is left out of scope in this work.
2 RELATED WORK
VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components. This can be seen as finding the true causal model of the data. While supervised disentanglement in generative models is a long-standing idea (Mathieu et al., 2016), information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs (Higgins et al., 2017; Kim & Mnih, 2018). The impossibility result from (Locatello et al., 2018) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data. Hence, the focus shifted back to semi- or weaklysupervised disentanglement approaches (Locatello et al., 2019; 2020). While most of these methods focus on disentanglement metrics, we opt to directly assess using a downstream prediction task.
Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting, there is a growing interest in applying it to sequence dynamics. Using a bottleneck based on physical knowledge, Iten et al. (2018) learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions in OOD data (Barber et al., 2021). Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics (Fraccaro et al., 2017; Li & Mandt, 2018), but, focus mostly on modelling variations in the content, failing to take dynamics into account. In hierarchical approaches (Karl et al., 2017), different layers of latent variables correspond to different timescales: for example,
in speech analysis for separating voice characteristics and phoneme-level attributes (Hsu et al., 2017). In an approach similar to our work, Miladinović et al. (2019) separate the dynamics from sequencewide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way.
Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model (Krishnan et al., 2015; Karl et al., 2017). Feed-forward models, with necessary inductive biases, have been used for sequence modelling in dynamical systems (Greydanus et al., 2019; Fotiadis et al., 2020). In works like Hamiltonian Neural Networks (Greydanus et al., 2019) the domain is fixed; together with Barber et al. (2021), our work is an attempt at tackling domain variability.
Privileged information for domain adaptation. Using privileged information during training has been shown to help with domain adaptation in computer vision tasks. Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain (Lee et al., 2018), while clip art data can help with domain transfer in an action recognition task (Sarafianos et al., 2017).
3 METHODS
3.1 VARIATIONAL AUTOENCODERS
Variational autoencoders (VAEs) (Kingma & Welling, 2014) offer a principled approach to latent variable modeling by combining a variational inference model q_φ(z|x) with a generative model p_θ(x|z). As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:
L_{φ,θ}(x) = E_{q_φ(z|x)}[log p_θ(x | z)] − KL(q_φ(z | x) || p(z))   (1)
The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior.
Design choices for the model We use an isotropic unit Gaussian prior p(z) = N(z | 0, I) which helps to disentangle the learned representation (Higgins et al., 2017). The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance q_φ(z | x) = N(z | μ_z, Σ_z), allowing a closed-form KL-divergence, while the decoder has a Laplace distribution p_θ(x | z) = Laplace(x | μ_x, γI) with constant diagonal covariance γ > 0, which is tuned empirically. This leads to an ℓ1 loss that provides improved results in some problems (Mathieu et al., 2018) and empirically works better in our case. The parameters μ_z ≡ μ_z(x; φ), Σ_z ≡ diag[σ_z(x; φ)]², and μ_x ≡ μ_x(z; θ) are computed via feed-forward neural networks.
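Put together, these choices amount to an ℓ1 reconstruction term plus a closed-form Gaussian KL. The PyTorch sketch below is our own illustration of the resulting objective, with toy shapes and a fixed scale γ; it is not the paper's implementation.

import torch

def vae_loss(x_target, mu_x, mu_z, logvar_z, gamma=1.0):
    # Laplace decoder with fixed scale gamma -> L1 reconstruction (up to additive constants)
    recon = torch.abs(x_target - mu_x).sum(dim=-1) / gamma
    # closed-form KL between N(mu_z, diag(sigma_z^2)) and the unit Gaussian prior
    kl = 0.5 * (mu_z.pow(2) + logvar_z.exp() - logvar_z - 1.0).sum(dim=-1)
    return (recon + kl).mean()

batch, x_dim, z_dim = 16, 10, 4
loss = vae_loss(torch.randn(batch, x_dim), torch.randn(batch, x_dim),
                torch.randn(batch, z_dim), torch.zeros(batch, z_dim))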
3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE
Apart from the disentanglement that stems from the choice of prior p(z), we explicitly disentangle part of the latent space so that it corresponds to the domain parameters of each input sequence. We achieve this by using a regression loss term L_ξ(z_{1:k}, ξ) between the ground truth factors of the domain parameters ξ ∈ R^k and the output of the corresponding latents, z_{1:k}. We empirically opted for an ℓ1 loss, corresponding to a Laplacian prior with mean ξ and unitary covariance. Previous methods have reported that binary cross-entropy works better than ℓ2 (Locatello et al., 2019) but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function G(μ_{z_i}) which linearly scales μ_{z_i} between the min and max values of the corresponding factor of variation:
G(μ_{z_i}) = μ_{z_i} · (max(ξ_i) − min(ξ_i)) + min(ξ_i)   (2)
where ξ_i are the domain parameters and min(ξ_i), max(ξ_i) are their corresponding minimum and maximum values from the training set. In all cases, the regression term is weighted by a parameter δ which is empirically tuned. Plugging these choices in results in the following loss function:
L_{φ,θ}(x) = E_{q_φ(z|x_{1:n})}[ (1/γ) ‖x_{n+1:n+o} − μ_x(z; θ)‖_1 ] + d log γ   (Prediction loss)
           + ‖σ_z(x_{1:n}; φ)‖²_2 − log diag[σ_z(x_{1:n}; φ)]² + ‖μ_z(x_{1:n}; φ)‖²_2   (KL-Divergence)   (3)
           + δ ‖ξ − G(μ_{z_{1:k}}(x_{1:n}; φ))‖_1   (Sup. disentangl. loss)
Using the reparameterization trick (Kingma & Welling, 2014), the loss is amenable to optimization by stochastic gradient descent, with batch size n. The model architecture can be seen in Figure 1(left).
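The supervised disentanglement term of Eq. (3) is easy to add on top of a standard VAE loss; the following hedged sketch (ours, with hypothetical shapes) rescales the first k latent means with G from Eq. (2) and regresses them onto the ground-truth parameters with an ℓ1 penalty weighted by δ.

import torch

def scale_to_range(mu_z_k, xi_min, xi_max):            # G(mu_{z_i}) from Eq. (2)
    return mu_z_k * (xi_max - xi_min) + xi_min

def supervised_disentanglement_loss(mu_z, xi, xi_min, xi_max, delta=1.0):
    k = xi.shape[-1]                                    # number of domain parameters
    pred = scale_to_range(mu_z[..., :k], xi_min, xi_max)
    return delta * torch.abs(xi - pred).sum(dim=-1).mean()

mu_z = torch.randn(16, 8)                               # latent means; first k = 4 hold the parameters
xi = torch.rand(16, 4)                                  # ground-truth domain parameters
loss = supervised_disentanglement_loss(mu_z, xi, xi_min=torch.zeros(4), xi_max=torch.ones(4), delta=0.1)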
3.3 DISENTANGLEMENT FOR VIDEO DYNAMICS
We further investigate the effect of disentanglement in video sequence dynamics. To this end, two generative models are used. The first, called CNN-VAE, is derived from the VAE formulation of the previous section and is similar to the VAE with the addition of a convolutional encoder and a decoder. The encoder projects the input frames down to a low-dimensional space which can be thought of as equivalent to the phase space of the system. A VAE is applied to this projection to predict future coordinates in the “phase space”. The decoder then maps the predictions of the VAE back to pixel space. The schematic of the model can be seen in Figure 1(right).
The second model we use is the Recurrent State Space Model (RSSM), which has been successfully used for planning (Hafner et al., 2018). Since RSSM is a hybrid model combining deterministic and variational components, it allows us to assess disentanglement outside the limited scope of VAEs. Furthermore, using a model that is state-of-the-art in long-term video prediction (Saxena et al., 2021) allows us to identify the limits of applying disentanglement in competitive models. The loss function we use shares the same formulation as in the original work of Hafner et al. (2018) with the addition of the supervised disentanglement loss. Since in the RSSM formulation there are latent variables for each time-step, we apply a disentanglement loss on all of them, which empirically is set to be ℓ2:
L_{RSSM-SD} = Σ_{t=1}^{T} ( E_{q(s_t | o_{≤t})}[ln p(o_t | s_t)]   (reconstruction)
− E_{q(s_{t−1} | o_{≤t−1})}[ KL[q(s_t | o_{≤t}) ‖ p(s_t | s_{t−1})] ]   (prediction)
+ δ E_{q(s_t | o_{≤t})}[ ‖ξ − s_t^{(1:k)}‖² ]   (supervised disentanglement loss) )   (4)
where o_t are the observations, s_t the stochastic latent variables at time t, ξ the k-dimensional domain parameters, and δ tunes the supervised disentanglement strength.
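A sketch of how the supervised term could be added on top of an existing RSSM loss, assuming the posterior latents are collected into a (T, batch, latent_dim) tensor (the RSSM reconstruction and KL terms are computed elsewhere):

```python
import torch

def rssm_sd_term(posterior_samples, xi, delta=0.1):
    """Supervised disentanglement term of Eq. 4.
    posterior_samples: (T, batch, latent_dim) stochastic latents s_t sampled
    from q(s_t | o_<=t); xi: (batch, k) sequence-wide domain parameters."""
    k = xi.shape[1]
    # squared error between the first k latents and xi, summed over factors,
    # averaged over the batch and summed over time-steps
    err = (posterior_samples[:, :, :k] - xi.unsqueeze(0)).pow(2).sum(dim=-1)
    return delta * err.mean(dim=1).sum()
```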
4 EXPERIMENT - ODE PHASE SPACE DYNAMICS
4.1 DATASETS
In the phase-space experiments we compare the models on three well-studied dynamical systems: the swinging pendulum, the Lotka-Volterra equations used to model prey-predator populations, and the planar 3-body system.
[Figure: phase-space trajectories for the Pendulum, the Lotka-Volterra system (prey vs. predator) and the 3-body system (x vs. y), comparing Ground truth, MLP, VAE and VAE-SD predictions from noisy input.]
The systems were chosen for varied complexity in terms of degrees of freedom, number of ODE equations and factors of variation. For the pendulum we consider one factor of variation, its length l; Lotka-Volterra has 4 factors of variation α, β, γ, δ; and the 3-body system also has 4 factors of variation G, m1, m2, m3. Factors are drawn uniformly from a predetermined range which is the same between the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and Hard, representing a smaller and bigger shift from the original range. As a visual example, the distribution of the factors of variation for the Lotka-Volterra system is illustrated in Figure 9 of the Appendix. The data were additionally corrupted with Gaussian noise. Dataset details can be found in Table 1 of the Appendix.
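For concreteness, the sketch below shows one way such data could be generated for the Lotka-Volterra system: the factors (α, β, γ, δ) are drawn uniformly from a range, trajectories are integrated with an adaptive Runge-Kutta scheme, and Gaussian observation noise is added. The concrete ranges, initial conditions and noise level shown are placeholders, not the values of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, alpha, beta, gamma, delta):
    prey, pred = y
    return [alpha * prey - beta * prey * pred,
            delta * prey * pred - gamma * pred]

def make_sequence(rng, factor_lo, factor_hi, t_max=20.0, dt=0.01, noise_std=0.01):
    """Sample one trajectory; factor_lo/factor_hi define the domain-parameter
    range (shifted ranges would build the OOD Easy/Hard test-sets)."""
    factors = rng.uniform(factor_lo, factor_hi)           # (alpha, beta, gamma, delta)
    t_eval = np.arange(0.0, t_max, dt)
    sol = solve_ivp(lotka_volterra, (0.0, t_max), y0=[4.0, 2.0],
                    t_eval=t_eval, args=tuple(factors), rtol=1e-8)  # adaptive RK45
    traj = sol.y.T + rng.normal(0.0, noise_std, sol.y.T.shape)      # observation noise
    return traj, factors

rng = np.random.default_rng(0)
train_seq, train_xi = make_sequence(rng, factor_lo=[0.9, 0.4, 0.9, 0.4],
                                    factor_hi=[1.1, 0.6, 1.1, 0.6])
```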
4.2 MODELS AND TRAINING
The main goal of these experiments is to assess whether OOD prediction can be improved by disentangling dynamical system parameters in the latent space of VAEs. We opt to use simple models to allow more experiments and comparisons. Our main baseline is the VAE, upon which we propose two enhancements that leverage supervised disentanglement. The first VAE model, termed VAE-SD, uses supervised disentanglement without a scaling function, while the second model, termed VAE-SSD, uses an additional linear scaling function G(μ_{z_i}) for the latent variable mean vector μ_z, as described in Section 3.2. Another baseline is a multilayer perceptron (MLP) autoencoder which allows comparison with a deterministic counterpart of the VAE. We also use supervised disentanglement on the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess if the privileged information can improve deterministic models. Lastly, we include an LSTM model, a popular choice for low-dimensional sequence modelling (Yu et al., 2019), as a representative recurrent method.
Early experiments revealed a significant variance in the performance of the models, depending on hyperparameters. Under these conditions, we took various steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some of which are shared, while others are model-specific. Third, we conduct a thorough grid search on the hyperparameters to avoid undermining a model (details can be found in Tables 3, 4 and 5 of the Appendix). Lastly, we run the same number of experiments for all models, which amounts to 1440 trained models in total, as summarized in Table 2 of the Appendix.
[Figure 7: SSIM and PSNR as a function of prediction time-step (0 to 800) for RSSM, RSSM-SD, CNN-VAE and CNN-VAE-SD on the in-distribution Test-set, the OOD Test-set Easy and the OOD Test-set Hard.]
4.3 RESULTS
For each dynamical system we focus on the performance on the three test-sets: the in-distribution test-set, which shares the same parameter distribution with the training set, and the two OOD test-sets (Easy and Hard), which represent an increasing parameter shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth for the first 200 time-steps. We consider this to be sufficiently long-term as it is at least 20 times longer than the prediction horizon used during training. Long predictions are obtained by re-feeding the model predictions back as input. This approach has been shown to work well in systems where the dynamics are locally deterministic (Fotiadis et al., 2020). A summary of the quantitative results can be found in Figures 3 & 4 and Table 8. To account for the variability in the results, we present a summary of the best 5 runs of each model, selected by validation MAE. We generally observe that model performance is correlated with the distribution shift of the test-sets, and this is consistent for all systems and models. The MAE increases as we move from the in-distribution test-set to the OOD Easy and Hard test-sets. Nevertheless, not all models suffer equally from the OOD performance drop.
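A sketch of this evaluation protocol, with a hypothetical predict_fn that maps an n-step input window to the next o steps:

```python
import numpy as np

def rollout_cumulative_mae(predict_fn, context, ground_truth, horizon=200):
    """Autoregressive rollout: predict_fn maps an input window (n, dim) to the
    next output window (o, dim). Predictions are re-fed as input until `horizon`
    steps are produced, then compared to ground_truth (>= horizon, dim)."""
    window = context.copy()
    preds, produced = [], 0
    while produced < horizon:
        out = predict_fn(window)                                   # (o, dim)
        preds.append(out)
        produced += out.shape[0]
        window = np.concatenate([window, out])[-context.shape[0]:]  # slide the window
    preds = np.concatenate(preds)[:horizon]
    # cumulative MAE over the first `horizon` time-steps
    return np.abs(preds - ground_truth[:horizon]).mean(axis=1).sum()
```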
Comparing the VAEs (Figure 3), we see that disentangled VAE models offer a substantial and consistent improvement over the VAE across all 3 dynamical systems. The improvement is more pronounced for the OOD test-sets where the distribution shift is greater, a strong indication that disentanglement of domain parameters is an inductive bias that can lead to better generalization. We also observe that VAE-SSD models the in-distribution data better than VAE-SD. This seems to come at a slight overfitting cost, because VAE-SD provides better OOD extrapolation in most cases. This could be explained by the scaling function's dependence on the min and max factor values of the training set. The extra information allows the model to better capture the training data but sacrifices some generalization capacity.
On the other hand, disentanglement results for the MLP are mixed. While in-distribution MLP-SD offers better results than the plain MLP, on the OOD test-sets MLP-SD only performs favourably on the pendulum data. Furthermore, in Lotka-Volterra, MLP-SD models are very unstable, a drawback that affects some VAE-SD models too (see Table 9 of the Appendix). Probabilistic models seem better suited to capture the variation in the data. The contrast between VAE-SD and MLP-SD illustrates that making use of privileged information and latent space disentanglement is not trivial, and more work is needed to help us understand what works in practice and why. Lastly, the LSTM (Figure 11 & Table 8 of the Appendix) is only competitive on the pendulum dataset and only for small OOD shifts. Qualitative predictions can be found in Figure 5.
5 EXPERIMENT - VIDEO SEQUENCE DYNAMICS
In the first experiment we assessed supervised disentanglement for phase space prediction, where the states of the input trajectories are fully observable and only the domain parameters are unknown. This experiment extends the idea of supervised disentanglement to pixel-space input and output, where the physical states have to be inferred by the model.
5.1 DATASETS
The dynamical system we use in this experiment is the swinging pendulum, a common benchmark for modelling dynamics in video sequences (Brunton et al., 2020; Barber et al., 2021). We consider 4 factors of variation: the length l, gravity g, initial angle θ and angular velocity ω. Factors are drawn uniformly from a predetermined range. As before, we create a test-set and two additional OOD test-sets (Easy and Hard). The OOD sets have length and gravity values outside of the original range, while the initial conditions θ, ω are drawn from the same distribution. The distribution of the factors of variation for the test-sets is illustrated in Figure 2. The trajectories are first computed in phase space using a numerical simulator and then rendered as video frames of 64 × 64 pixels. More details about the dataset can be found in Section A.2 of the Appendix.
5.2 MODELS AND TRAINING
In this experiment we use two different models, CNN-VAE and RSSM. CNN-VAE is described in Section 3.3 and architectural details can be found in Section B.2.1. During training of the CNN-VAE, the inner VAE is used recursively to predict, with the number of recursions being a hyperparameter (Table 6 of the Appendix). We found that this type of training leads to more stable long-term predictions. In total, 48 CNN-VAE models are trained, half of which with supervised disentanglement (CNN-VAE-SD). The RSSM model is a generative model including both a stochastic and a deterministic component. We only use supervised disentanglement on the stochastic part, and term that model RSSM-SD. Disentanglement is applied to all four factors of variation of the domain, despite only length and gravity varying between datasets. Detailed architectural and training details can be found in Section B.2.2 of the Appendix.
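A sketch of the recursive prediction used when training the CNN-VAE, with a hypothetical inner_vae mapping an n-step window of projected coordinates to the next o steps:

```python
import torch

def recursive_prediction(inner_vae, z_context, n_recursions):
    """Roll the inner VAE forward `n_recursions` times in the projected
    'phase space'. The concatenated outputs would then be decoded back to
    pixels and compared against the corresponding ground-truth frames."""
    window = z_context                       # (batch, n, latent_dim)
    outputs = []
    for _ in range(n_recursions):
        pred = inner_vae(window)             # (batch, o, latent_dim)
        outputs.append(pred)
        # keep only the most recent n steps as the next input window
        window = torch.cat([window, pred], dim=1)[:, -z_context.shape[1]:]
    return torch.cat(outputs, dim=1)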
5.3 RESULTS
Figure 7 shows the quality of predictions on the video pendulum for two metrics, structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), as a function of predicted time distance. We select the models which have the best cumulative metrics over the first 800 timesteps on a validation set.
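These per-timestep curves could be computed, for example, with scikit-image, assuming predicted and ground-truth frames are arrays of shape (T, 64, 64) with values in [0, 1]:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def per_timestep_quality(pred_frames, true_frames):
    """Return SSIM and PSNR as functions of the prediction time-step."""
    ssim_t, psnr_t = [], []
    for pred, true in zip(pred_frames, true_frames):
        ssim_t.append(structural_similarity(true, pred, data_range=1.0))
        psnr_t.append(peak_signal_noise_ratio(true, pred, data_range=1.0))
    return np.array(ssim_t), np.array(psnr_t)
```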
For the CNN-VAE, the effects of disentanglement are more subtle. We observe that, in-distribution, the disentangled CNN-VAE-SD has very similar quality when compared to the CNN-VAE. For the OOD datasets, though, disentanglement offers improved long-term predictions. The improvement is more noticeable on the OOD test-sets, indicating that disentanglement can help with OOD robustness. For RSSM, we first note that both models perform significantly better than the CNN-VAE models, which is expected since they are considered competitive in long-term video prediction. Disentanglement in RSSM seems to produce a trade-off. The plain RSSM model performs better in short-term prediction but its performance deteriorates with time, reaching CNN-VAE levels in all metrics. On the other hand, the RSSM-SD model provides the best long-term scores in all metrics and all datasets. Qualitative results in Figure 5 show that almost all models produce accurate short-time predictions (approximately up to 200 time-steps). This further stresses the importance of disentanglement for long-term performance. In terms of OOD robustness, disentanglement also appears to be helping. While the RSSM-SD model lags in short-term prediction quality on the in-distribution test-set, this performance gap closes as the OOD test-sets get harder. More specifically, on the in-distribution test-set the RSSM-SD overtakes RSSM in SSIM after around 400 frames, while in the OOD Easy and Hard test-sets this happens around 350 and 250 time-steps respectively. This narrowing gap indicates that robustness improves with increasing distribution shifts. The above findings are corroborated by LPIPS (Zhang et al., 2018) comparisons (Figure 13 and Table 10 of the Appendix). Furthermore, the qualitative results show that all models accurately capture the appearance of the pendulum even long-term. Where they differ is in how well they capture the dynamics of the pendulum movement. This could offer an explanation of why disentangling the domain from the dynamics is important, and why in practice it offers better long-term and out-of-distribution performance.
Overall, experiments suggest that supervised disentanglement can be used to model dynamical systems in video sequences, resulting in improved long-term and OOD performance.
6 CONCLUSIONS
Using supervised disentanglement of domain parameters in generative models is a promising avenue for improving robustness. Our experiments show that it can improve both OOD generalization and long-term prediction of dynamical systems. This was demonstrated in phase-space with VAEs and also in video sequence modelling with state-of-the-art RSSMs.
By treating the domain parameters as factors of variation of the data and applying supervised disentanglement, several inductive biases are potentially enforced. First, the model, in addition to prediction, also performs "soft" system identification which acts as a regularizer. Second, it creates an implicit hierarchy such that some latent variables correspond to sequence-wide domain parameters and the rest capture instant dynamics. We speculate that this could additionally make the latent space more interpretable. Third, if the model can correctly extract the parameters, this means that the prediction is conditioned on them, which is closer to how numerical integrators work, where the domain is known. All of these could lead the model to learn the correct causal structure of the data. Nevertheless, using privileged information for OOD robustness is not always straightforward and requires further exploration. This is evident from the results of the MLP autoencoders, which do not yield as consistent improvements. A criticism of our method could be that cheap privileged information is not always available and/or depends on using simulated data. Firstly, training on simulations is an increasingly appealing option because it is a cheap way to generate data to begin with. This is also clearly demonstrated by the many advancements in techniques like sim2real (Peng et al., 2017) that try to bring models trained on simulated data to the real world. So there seems to be no reason not to use the privileged information that comes with simulated data. In that light, supervised disentanglement can provide a pathway for real-world applications where robustness in dynamical system prediction is critical. Applying the method to other datasets with more complex dynamics can increase its relevance. Sequence-wide parameters could also be exploited through self-supervision.
REPRODUCIBILITY STATEMENT
We provide all the necessary code to reproduce our experiments at the anonymous repo https://anonymous.4open.science/r/dis-dyn-systems (will be de-anonymized after the review process). The repo contains code for generating all the datasets from scratch and also code for training all the models presented in this work. The README also contains instructions on how to train the models. The hyperparameters we have used are clearly and thoroughly presented in the
Appendix. These steps should significantly help others reproduce our experiments. For any further clarifications, you are encouraged to contact the corresponding author(s).
A DATASETS
A.1 PHASE SPACE
For simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.01 seconds. Each simulated sequence has a different combination of factors of variation. Simulation of the pendulum uses an initial angle θ drawn randomly between 10° and 170°, while the angular velocity ω is 0. For the other two systems the initial conditions are always the same to avoid pathological configurations.
A.2 VIDEO PENDULUM
This data set contains image sequences of a moving pendulum under different conditions. The positions of the pendulum are first computed by a numerical simulator and then rendered in pixel space as frames of dimension 64 × 64. An example image sequence is shown in Figure 10. For the simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.05 seconds. The length of the pendulum, the strength of gravity and the initial conditions (position, momentum) are set to different values so that each trajectory slightly differs from the others. The initial angle and initial velocity are drawn from the same uniform distribution for all data sets. The initial angle ranges from 30° to 170° and the initial velocity ranges from −2 rad/s to 2 rad/s. For the training, validation and in-distribution testing sets, the gravity ranges from 8.0 m/s² to 12.0 m/s², and the pendulum length ranges from 1.20 m to 1.40 m. In the easy out-of-distribution testing set, the gravity is between 12.0 and 12.5 m/s² and the pendulum length is between 1.40 and 1.45 m, while in the hard out-of-distribution testing set, the gravity is 12.5 to 13.0 m/s² and the pendulum length is 1.45 to 1.50 m. The distributions of these parameters are shown in Figure 2.
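A sketch of how a simulated angle trajectory could be rasterized into such frames; the bob radius and the metre-to-pixel mapping are placeholders and may differ from the actual rendering used for the dataset:

```python
import numpy as np

def render_pendulum(thetas, length, size=64, bob_radius=3, max_length=1.5):
    """Render each angle theta (rad, from the downward vertical) as a 64x64
    frame; the pivot is at the image centre and max_length metres map to the
    largest arm that fits in the frame."""
    frames = np.zeros((len(thetas), size, size), dtype=np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    px_per_m = (size / 2 - bob_radius - 1) / max_length   # metres -> pixels
    for i, theta in enumerate(thetas):
        bob_x = size / 2 + px_per_m * length * np.sin(theta)
        bob_y = size / 2 + px_per_m * length * np.cos(theta)
        frames[i][(xx - bob_x) ** 2 + (yy - bob_y) ** 2 <= bob_radius ** 2] = 1.0
    return frames
```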
B TRAINING AND HYPERPARAMETERS
B.1 PHASE SPACE
During training, back-propagation is applied after a single forward pass. The input and output of the models are smaller than the sequence size, so to cover the whole sequence we use random starting points per batch, both during training and testing. Both the VAE and the MLP AE have an encoder with two hidden layers of size 400 and 200, and a mirrored decoder. The LSTM model has two stacked LSTM cells with a hidden size of 100, which results in an equivalent number of neurons. We used the Adam optimizer with β1 = 0.9 and β2 = 0.999. A scheduler for the learning rate was applied, whose patience and scaling factor are hyperparameters. The maximum number of epochs was set to 2000 but we also employed early stopping using a validation set, which led to significantly fewer epochs.
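A sketch of the random-starting-point windowing described above, using the input/output sizes of Table 5 as defaults:

```python
import numpy as np

def sample_window_batch(sequences, rng, n_in=50, n_out=10, batch_size=16):
    """sequences: (num_seq, seq_len, dim). Returns (inputs, targets) windows
    drawn from random sequences at random starting points, so that the whole
    trajectory is covered over the course of training."""
    num_seq, seq_len, _ = sequences.shape
    seq_idx = rng.integers(0, num_seq, size=batch_size)
    start = rng.integers(0, seq_len - n_in - n_out, size=batch_size)
    inputs = np.stack([sequences[s, t:t + n_in] for s, t in zip(seq_idx, start)])
    targets = np.stack([sequences[s, t + n_in:t + n_in + n_out] for s, t in zip(seq_idx, start)])
    return inputs, targets
```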
Table 5: 3-body system hyperparameters
Columns (in order): MLP; MLP-SD; VAE; VAE-SD; LSTM
Input Size: 50
Output Size: 10
Hidden Layers: [400, 200] (autoencoders); 50, 100 (LSTM)
Latent Size: 8, 16, 32 (autoencoders); - (LSTM)
Nonlinearity: Leaky ReLU (autoencoders); Sigmoid (LSTM)
Learning rate: 10^-3, 10^-4; 10^-3, 10^-4; 10^-3, 10^-4; 10^-3
Batch size: 16, 32; 16; 16; 16; 16, 64, 128
Sched. patience: 30, 40, 50, 60; 30, 40, 50, 60; 30, 40, 50, 60; 30, 40, 50, 60; 20, 30
Sched. factor: 0.3, 0.4; 0.3; 0.3, 0.4; 0.3, 0.4; 0.3
Gradient clipping: No; No; No; No; No
Layer norm (latent): No; No; No; No; No
Decoder γ: -; -; 10^-5, 10^-6; 10^-5, 10^-6; -
Sup. scaling: -; Linear; -; Linear; -
Supervision δ: -; 0.05, 0.1, 0.2, 0.3; -; 0.1, 0.2; -
# of experiments: 96; 96; 96; 96; 96
B.2 VIDEO PENDULUM
B.2.1 CNN-VAE MODEL
The encoder has 4 convolutional layers with 32, 32, 64 and 64 feature maps respectively. The filter size is 3, padding is 1 and stride is 2. The output of the last convolutional layer is flattened into a 256-dimensional vector which becomes the inner VAE input. The decoder has 4 convolutional layers (64, 64, 32, 32) with bi-linear upsampling. Model input and output are 20 frames. For the models without supervised disentanglement, a grid search is performed over the β value, the size of the latent space, and the roll-out length during training. For the models with supervised disentanglement, a grid search is performed over the β value, the size of the latent space, the time step of the data set, the roll-out length during training and the supervision multiplier. The detailed search grid is summarised in Table 6. The learning rate was 10^-3 and an Adam optimizer (β1 = 0.9, β2 = 0.999) was used. We also used early stopping on the cumulative reconstruction loss for the first 200 steps on a validation set, with the maximum number of epochs being 1000.
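A sketch of an encoder matching this description; since four stride-2 convolutions reduce a 64 × 64 frame to a 4 × 4 × 64 map, an additional linear layer is assumed here to produce the 256-dimensional vector:

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """4 conv layers (32, 32, 64, 64 maps), 3x3 kernels, stride 2, padding 1.
    A 64x64 input is reduced to a 4x4x64 map; a linear layer (an assumption,
    not stated in the text) maps the flattened features to 256 dimensions."""
    def __init__(self, in_channels=1, feature_dim=256):
        super().__init__()
        chans = [in_channels, 32, 32, 64, 64]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                       nn.LeakyReLU()]
        self.conv = nn.Sequential(*layers)
        self.to_feat = nn.Linear(64 * 4 * 4, feature_dim)

    def forward(self, x):                        # x: (batch, in_channels, 64, 64)
        h = self.conv(x).flatten(start_dim=1)    # (batch, 1024)
        return self.to_feat(h)                   # (batch, 256)
```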
B.2.2 RSSM MODELS
For the RSSM model we follow the architecture parameters as described in Hafner et al. (2018) & Saxena et al. (2021). For training we use sequences of 100 frames and a batch size of 100. All models were trained for 300 epochs with a learning rate of 10^-3 and an Adam optimizer (β1 = 0.9, β2 = 0.999). During testing the model uses 50 frames as context (input). The parameters we tune appear in Table 7.
C PHASE SPACE RESULTS
Figure 12: Model predictions in phase space. Trajectories are taken from the OOD Test-Set Hard of each system. The model input is noisy. The circle and bold ‘×’ markers denote the start and end of the ground truth trajectories respectively.
D VIDEO PENDULUM RESULTS
LPIPS

1. What is the main contribution of the paper, and how does it build upon prior works in disentanglement?
2. How effective is the proposed method in disentangling sequential data, and what quantitative or qualitative analyses could strengthen this claim?
3. How does the supervised disentanglement method compare to unsupervised methods, such as DSSM and Kalman VAE?
4. What limitations does the method have regarding its practical applicability, and how might these be addressed?
5. How robust are the results regarding OOD generalization, and what changes could improve their validity?
6. Have the authors explored other decoder distributions, such as Gaussian, and how might this impact the results?
7. Are perceptual metrics like LPIPS and SSIM sufficient for evaluating dynamic predictions, and would RMSE or NLL provide additional insights?
8. Are there any inconsistencies or typos in the paper's figures, captions, or text that should be addressed?
Summary Of The Paper
This paper introduces a supervised disentanglement method to learn dynamical systems. The method relies on the provision of privileged information (true parameters of a sequence) in order to disentangle them from observations. The method is evaluated on three toy datasets.
Review
My main issue is the limited novelty of the proposed method. This method is a straightforward extension of the unsupervised disentangled state-space model (Miladinovic et al.) to a supervised one, where the privileged information regarding domain parameters is explicitly fed to the model.
Certain claims (e.g. regarding disentanglement) are made without proper quantitative and/or qualitative investigation(s).
It is claimed that the proposed supervised disentanglement method improves performance over the unsupervised method. However, there are no comparisons with the closest unsupervised method (e.g. Miladinovic et al.). Therefore it is hard to judge whether the proposed supervised method truly performs better or not.
The results on OOD generalization can not be considered OOD as the parameters' range used to create OOD datasets highly overlap with the ranges of the training datasets (Table 1, Appendix).
I expand on these points in the following:
Regarding line 1 in the contributions section: The treatment of domain parameters as factors of variation is one of the main proposals of DSSM in Miladinovic et al. DSSM seeks to disentangle these true parameters (also referred to as domain-invariant state dynamics) from observations only. Therefore, I believe it's not the first work to consider this setting.
The main contribution of this work is the supervised disentanglement of sequential data. However, the authors do not investigate how good the disentanglement is. This leaves room for interpretation i.e. whether it is truly disentanglement that is helping the model in achieving good performance. I believe both quantitative and qualitative analysis of disentanglement would further strengthen the claims made in this paper.
The true parameters (factors of variation) are explicitly provided to the network for training. The method is not directly comparable to Locatello et al. 2019, as only a few labels were used in Locatello et al. 2019 which resulted in a semi-supervised disentanglement setting, in contrast to a fully supervised setting in this work.
The authors claim that the supervised disentanglement of the sequential model is better than the unsupervised disentanglement done in DSSM (Miladinovic et al.). I would appreciate it if the authors could back it up with some empirical evidence. It is important to compare results with DSSM (even Kalman VAE) to see the true benefits of supervision.
The method is practically limited as the privileged true parameter information is not readily available in real-world systems. Thus, as acknowledged by the authors, this method can only work for the simulated systems where these variables are known beforehand.
The ranges of the parameters used to create the OOD dataset highly overlap with the ranges used to create the training dataset. In my opinion, this is not OOD as it is very likely that the test sample comes from the range which is used for training. I suggest authors use ranges which are completely outside the ranges of the training distribution i.e. extrapolation generalization regime (or even interpolation regime where the parameters are sampled from a subset of the training range but that subset range is not seen during training).
Have the authors tried Gaussian distribution (correspondingly L2 loss) for the decoder? I wonder how the results might differ from the Laplace distribution.
The prediction quality is reported by using perceptual metrics LPIPS and SSIM. These metrics compare the deep feature space and statistical properties of the images respectively. I think these metrics are not sufficient for evaluating the predictions of the dynamic. If it is possible then kindly report RMSE and/or NLL.
Fig1 caption: Do the input-output dimensions differ? I don't think the labeling in the figure is correct as there are some inconsistencies. For e.g. in a single time step: x_1 → x_{n+1} and x_n → x_{n+o}.
Typos (minor):
pg2: "This can be extend to"
pg2: "high-dimemnsional video rendering"
pg2: The sentence "we directly assess using the downstream prediction task." seems incomplete.
pg4: "though as equivalent to the phase space of the system."
pg4: check sentence structure of "being a state-of-the-art model in long-term video prediction,"
pg5: "on three well studies dynamical systems,"
pg9: "prediction is based both on them which" |
ICLR | Title
Disentangled generative models for robust dynamical system prediction
Abstract
Deep neural networks have become increasingly of interest in dynamical system prediction, but out-of-distribution generalization and long-term stability still remains challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous OOD evaluations 1. Results indicate that disentangled VAEs adapt better to domain parameters spaces that were not present in the training data. At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models in video sequences.2
1 INTRODUCTION
The robust prediction of dynamical systems behaviour remains an open question in machine learning, and engineering in general. The ability to make robust predictions is important not only for forecasting systems of interest like weather (Garg et al., 2021; Ravuri et al., 2021) but even more so because it enables innovations in fields like system control, autonomous planning (Hafner et al., 2018) and computer aided engineering (Brunton et al., 2020). In this context, the use of deep generative models has recently gained significant traction for sequence modelling (Girin et al., 2020).
Robustness of machine learning models can be considered along two axes: long-term prediction and out-of-distribution (OOD) performance. Accurate long-term prediction can be notoriously difficult in many dynamical systems, because error accumulation can diverge in finite time (Zhou et al., 2020; Raissi et al., 2019), a problem that even traditional solvers can suffer from. More importantly, machine learning techniques are known to suffer from poor OOD performance (Goyal & Bengio, 2020), i.e. when they are employed in a setting they had not encountered during the training phase.
Before addressing the OOD problem, we must first define what constitutes as OOD in dynamical systems. We start by the observation that even simple dynamical systems, i.e the swinging pendulum or the =-body system, can have multiple continuous parameters that affect their evolution. These parameters can be manifested as differential equation coefficients, boundary or initial conditions etc. Our starting point is to consider distinct ranges of those parameters as separate domains. Under this view, it becomes apparent why OOD prediction of dynamical systems can be hard: capturing the whole range of those parameters in a single training set is unrealistic (Fotiadis et al., 2020) and further inductive biases are required (Miladinović et al., 2019; Bird & Williams, 2019; Barber et al., 2021). From a dynamical system point of view, different parameters can produce widely different trajectories in phase space. A motivating example can be bifurcations which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour.
We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters. Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting (Bengio et al., 2012). System identification can be used to extract parameters, but
1Code for reproducing our experiments at: https://anonymous.4open.science/r/ dis-dyn-systems/
2Animated phase-space and video predictions are available at: https://bit.ly/dis-dyn-systems
requires knowledge of the underlying system to be computationally effective (Ayyad et al., 2020). We, instead, leverage advances in Variational Autoencoders (VAEs) (Kingma & Welling, 2014) that enable learning disentangled representations. Disentanglement enables different latent variables to focus on different factors of variation of the data distribution, and has been applied in the context of image generation (Higgins et al., 2017; Kim & Mnih, 2018). This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective: from all the generative models which can have the same marginal distribution, identify the one with the true causal factors. To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation. Recent findings (Locatello et al., 2018) emphasize the vital role of inductive biases from models or data for useful disentanglement. Unsupervised disentanglement, based on the assumption of domain stationarity, is a promising direction (Miladinović et al., 2019; Li & Mandt, 2018). Nevertheless, this leaves a wealth of ground truth domain parameters, which can be cheaply collected in simulated datasets. This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks (Sarafianos et al., 2017; Lee et al., 2018). We thus use supervised disentanglement (Locatello et al., 2019) by leveraging the ground truth domain parameters. To the best of our knowledge, using domain parameters information this way, has not been previously explored in the dynamical system prediction setting.
Contributions While others have treated domain parameters as factors of variation in the data distribution, our work is the first, to the best of our knowledge, that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way. We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum. Disentanglement has, in the past, been mostly applied to VAEs because they are easily amenable to it. We additionally apply disentanglement on a more powerful, hybrid, model with both stochastic and deterministic parts (Hafner et al., 2018). In doing so, we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction (Saxena et al., 2021). In all cases, the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift. To our understanding, this is the first time such a rigorous OOD test is performed. Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD. In modelling dynamics in video sequences, results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction.
Limitations This work focuses on dynamical system prediction. While the results can potentially open up many applications in general time-series modelling, this is out of the scope of this work. We prioritize to empirically study OOD downstream task performance and the inspection of the disentangled representations with appropriate metrics is left out of scope in this work.
2 RELATED WORK
VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components. This can be seen as finding the true causal model of the data. While supervised disentanglement in generative models is a long-standing idea (Mathieu et al., 2016), information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs (Higgins et al., 2017; Kim & Mnih, 2018). The impossibility result from (Locatello et al., 2018) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data. Hence, the focus shifted back to semi- or weaklysupervised disentanglement approaches (Locatello et al., 2019; 2020). While most of these methods focus on disentanglement metrics, we opt to directly assess using a downstream prediction task.
Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting, there is a growing interest in applying it to sequence dynamics. Using a bottleneck based on physical knowledge, Iten et al. (2018) learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions in OOD data (Barber et al., 2021). Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics (Fraccaro et al., 2017; Li & Mandt, 2018), but, focus mostly on modelling variations in the content, failing to take dynamics into account. In hierarchical approaches (Karl et al., 2017), different layers of latent variables correspond to different timescales: for example,
in speech analysis for separating voice characteristics and phoneme-level attributes (Hsu et al., 2017). In an approach similar to our work, Miladinović et al. (2019) separate the dynamics from sequencewide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way.
Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model (Krishnan et al., 2015; Karl et al., 2017). Feed-forward models, with necessary inductive biases, have been used for sequence modelling in dynamical systems (Greydanus et al., 2019; Fotiadis et al., 2020). In works like Hamiltonian Neural Networks Greydanus et al. (2019) the domain is fixed; together with Barber et al. (2021), our work is an attempt in tackling domain variability.
Privileged information for domain adaptation. Using privileged information during training has been shown to help with domain adaptation in computer vision tasks. Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain (Lee et al., 2018), while clip art data can help with domain transfer in an action recognition task (Sarafianos et al., 2017).
3 METHODS
3.1 VARIATIONAL AUTOENCODERS
Variational autoencoders (VAEs) (Kingma & Welling, 2014) offer a principled approach to latent variable modeling by combining a variational inference model @q (z |x) with a generative model ?\ (x|z). As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:
Lq,\ (x) = E@q (z |x) [log ?\ (x | z)] − ! (@q (z | x) | |?(z)) (1)
The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior.
Design choices for the model We use an isotropic unit Gaussian prior ?(z) = N(z | 0, I) which helps to disentangle the learned representation (Higgins et al., 2017). The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance @q (z | x) = N (z | µI , I), allowing a closed form KL-divergence, while the decoder has a Laplace distribution ?\ (x | z) = Laplace (x | µG , WI) with constant diagonal covariance W > 0, which is tuned empirically. This leads to an !1 loss that provides improved results in some problems (Mathieu et al., 2018) and empirically works better in our case. The parameters µI ≡ µI (x; q), I ≡ diag [σI (x; q)]2, and µG ≡ µG (z; \) are computed via feed-forward neural networks.
3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE
Apart from the disentanglement that stems from the choice of prior ?(z), we explicitly disentangle part of latent space so that it corresponds to the domain parameters of each input sequence. We achieve this by using a regression loss term Lξ (z1:: , ξ) between the ground truth factors of the domain parameters ξ ∈ R: and the output of the corresponding latents, z1:: . We, empirically, opted for an !1 loss, corresponding to a Laplacian prior with mean ξ and unitary covariance. Previous methods have reported that binary cross-entropy works better than !2 (Locatello et al., 2019) but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function G(`I8 ) which linearly scales the `I8 between the min and max values of the corresponding factor of variation:
G ( `I8 ) = `I8 · (max(b8) −min(b8)) +min(b8) (2)
where b8 are the domain parameters and their corresponding minimum and maximum values of domain parameters from the training set. In all cases, the regression term is weighted by a parameter X which is empirically tuned. Plugging these choices in results in the following loss function:
Lq,\ (x) =E@q (z |x1:=) [ 1 W ‖x=+1:=+> − µG (z; \)‖1 ] + 3 log W (Prediction loss)
+ ‖σI (x1:=; q)‖22 − log diag [σI (x1:=; q)]2 + ‖µI (x1:=; q)‖22 (KL-Divergence) (3) + X ξG − G (µI1:: (x1:=; q)) 1} (Sup. disentangl. loss)
Using the reparameterization trick (Kingma & Welling, 2014), the loss is amenable to optimzation by stochastic gradient descent, with batch size =. The model architecture can be seen in Figure 1(left).
3.3 DISENTANGLEMENT FOR VIDEO DYNAMICS
We further investigate the effect of disentanglement in video sequence dynamics. To this end, two generative models are used. The first is derived from the VAE formulation of the previous section and is called CNN-VAE and is similar to the VAE with the addition of a convolutional encoder and a decoder. The encoder projects the input frames down to a low-dimensional space which can be thought as equivalent to the phase space of the system. A VAE is applied in this projection to predict in the future coordinates in the ”phase space”. The decoder then maps the predictions of the VAE back to pixel space. The schematic of the model can be seen in Figure 1(right).
The second model we use is the Recurrent State Space Model (RSSM) which has been successfully used for planning (Hafner et al., 2018). Since RSSM is a hybrid model combining deterministic and variational components, it allows us to assess disentanglement outside the limited scope of VAEs. Furthermore, using a state-of-the-art model in long-term video prediction (Saxena et al., 2021), allows
us to identify the limits of applying disentanglement in competitive models. The loss function we use shares the same formulation as in the original work of Hafner et al. (2018) with the addition of the supervised disentanglement loss. Since in the RSSM formulation there are latent variables for each time-step, we apply a disentanglement loss on all of them, which empirically is set to be !2:
L'(("−( = )∑ C=1 ©«E@ (st |o≤t [ln ?(ot | st )]︸ ︷︷ ︸reconstruction −E@ (st−1 |o≤t−1) [KL[@(st | o≤t )‖?(st | st−1)]]︸ ︷︷ ︸prediction + XE@ (st |o≤t )
[ ξ − s(1::)t 2]︸ ︷︷ ︸ supervised disentanglement loss ª®®®®®¬ (4)
Where ot is the observations, st the stochastic latent variables at time C, ξ are the : dimensional domain parameters and X tunes the supervised disentanglement strength.
4 EXPERIMENT - ODE PHASE SPACE DYNAMICS
4.1 DATASETS
In the phase-space experiments we compare the models on three well studied dynamical systems, the swinging pendulum, the Lotka-Volterra equations used to model prey-predator populations, and the planar 3-body system:
Under review as a conference paper at ICLR 2022
1.0 0.5 0.0 0.5 1.0
2
1
0
1
2
Pendulum
1.0 0.5 0.0 0.5 1.0
2
1
0
1
2
Pendulum
3 4 5 Prey
1.5
2.0
2.5
3.0
Pr ed
at or
Lotka-Volterra
3 4 5 Prey
1.5
2.0
2.5
3.0
Pr ed
at or
Lotka-Volterra
1 0 1 x
1.5
1.0
0.5
0.0
0.5
1.0
y
3-body system
1 0 1 x
1.5
1.0
0.5
0.0
0.5
1.0
y
3-body system
Ground truth MLP VAE VAE-SD Noisy input
The systems where chosen for varied complexity in terms of degrees of freedom, number of ODE equations and factors of variation. For the pendulum we consider one factor of variation, its length ;; Lotka-Volterra has 4 factors of variation U, V, W, X and the 3-body system has also 4 factors of variation 1, <1, <2, <3. Factors are drawn uniformly from a predetermined range which is the same between the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and Hard, representing a smaller and bigger shift from the original range. As a visual example, the distribution of the factors of variation for the Lotka-Volterra system is illustrated in Figure 9 of the Appendix. The data were additionally corrupted with Gaussian noise. Dataset details can be found on Table 1 of the Appendix.
4.2 MODELS AND TRAINING
The main goal of these experiments is to assess whether OOD prediction can be improved by disentangling dynamical system parameters in the latent space of VAEs. We opt to use simple models to allow more experiments and comparisons. Our main baseline is the VAE upon which we propose two enhancements that leverage supervised disentanglement. The first VAE model, termed VAE-SD uses supervised disentanglement without a scaling function while the second model termed VAE-SSD uses an additional linear scaling function G(`I8 ) for the latent variable mean vector `I8 , as described in Section 3.2. Another baseline is a multilayer perceptron (MLP) autoencoder which allows comparison with a deterministic counterpart of the VAE. We also use supervised disentanglement on the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess if the privileged information can improve deterministic models. Lastly, we include an LSTM model, a popular choice for low dimensional sequence modelling (Yu et al., 2019), as a representative recurrent method.
Early experiments revealed a significant variance on the performance of the models, depending on hyperparameters. Under these conditions, we took various steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some of which are shared, while others are model-specific. Third, we conduct a thorough grid search on the hyperparameters to avoid undermining a model (details can be found in Tables 3, 4 and 5 of the Appendix). Lastly, we train the same number of experiments for all models which amounts to 1440 trained models in total, as summarized in Table 2 of the Appendix.
SSIM
0 200 400 600 800 Timestep
0.80
0.85
0.90
0.95
1.00 Test-set
RSSM RSSM-SD CNN-VAE CNN-VAE-SD
0 200 400 600 800 Timestep
0.80
0.85
0.90
0.95
1.00 OOD Test-set Easy
0 200 400 600 800 Timestep
0.80
0.85
0.90
0.95
OOD Test-set Hard
PSNR
4.3 RESULTS
For each dynamical system we focus on the performance on the three test-sets, the in-distribution test set, which shares the same parameter distribution with the training set, and the two OOD test-sets (Easy and Hard), which represent an increasing parameter shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth for the first 200 time-steps. We consider this to be sufficiently long-term as it is at least 20 times longer than the prediction horizon used during training. Long predictions are obtained by re-feeding the model predictions back as input. This approach has been shown to work well in systems where the dynamics are locally deterministic (Fotiadis et al., 2020). A summary of the quantitative results can be found in Figures 3 & 4 and Table 8. To account for the variability in the results, we present a summary of the best 5 runs of each model, selected by validation MAE. We generally observe that model performance is correlated to the distribution shift of test-sets, and this is consistent for all systems and models. The MAE is increasing as we move from the in-distribution test-set to the OOD Easy and Hard test-sets. Nevertheless, not all models suffer equally from the OOD performance drop.
Comparing the VAEs (Figure 3), we see that disentangled VAE models offer a substantial and consistent improvement over the VAE across all 3 dynamical systems. The improvement is more pronounced for the OOD test-sets where the distribution shift is greater, a strong indication that disentanglement of domain parameters is an inductive bias that can lead to better generalization. We also observe that VAE-SSD models the in-distribution data better that VAE-SD. This seems to come at a slight overfitting cost, because the VAE-SD provides better OOD extrapolation in most cases. This could be explained because the scaling function is dependent on min and max values of the factors of the training set. The extra information allows the model to better capture the training data but sacrifices some generalization capacity.
On the other hand, disentanglement results for the MLP are mixed. While in-distribution MLP-SD offers better results than the plain MLP, on the OOD test-sets, MLP-SD only performs favourably in the pendulum data. Furthermore in Lotka-Volterra, MLP-SD models are very unstable, and this is a drawback that affects some VAE-SD model too (see Table 9 of the Appendix). Probabilistic models seem better suited to capture the variation in the data. The contrast between VAE-SD and MLP-SD illustrates that making use of privileged information and latent space disentanglement are not trivial
and more work is needed to help us understand what works in practice and why. Lastly, the LSTM (Figure 11 & Table 8 of the Appendix) is only comparable in the pendulum dataset and only for small OOD shifts. Qualitative predictions can be found in Figure 5.
5 EXPERIMENT - VIDEO SEQUENCE DYNAMICS
In the first experiment we assessed supervised disentanglement for phase space prediction, where the states of the input trajectories are fully observable and only the domain parameters are unknown. This experiment extends the idea of supervised disentanglement to pixel-space input and output, where the physical states have to be inferred by the model.
5.1 DATASETS
The dynamical system we use in this experiment is the swinging pendulum, a common benchmark for modelling dynamics in video sequences (Brunton et al., 2020; Barber et al., 2021). We consider 4 factors of variation, the length ;, gravity 6 and initial angle \ and angular velocity l. Factors are drawn uniformly from a predetermined range. As before, we create a test-set and two additional OOD test-sets (Easy and Hard). The OOD sets have length and gravity values outside of the original range, while the initial conditions \, l are drawn from the same distribution. The distribution of the factors of variation for the test-sets is illustrated in Figure 2. The trajectories are first computed in phase space using a numerical simulator and then rendered as video frames of 64 × 64 pixels. More details about the dataset can be found in Section A.2 of the Appendix.
5.2 MODELS AND TRAINING
In this experiment we use two different models CNN-VAE and RSSM. CNN-VAE is described in Section 3.3 and architectural details can be found in Section B.2.1. During training the CNN-VAE the inner VAE is recursively used to predict, the number of recursions being a hyperparameter (Table 6 of the Appendix). We found that this type of training leads to more stable long term predictions. In total, 48 CNN-VAE models are trained half of which are with supervised disentanglement (CNN-VAE-SD). The RSSM model is a generative model including both a stochastic and deterministic component. We only use supervised disentanglement on the stochastic part, and term that model RSSM-SD. Disentanglement is applied all four factors of variation of the domain, despite only length and gravity varying between datasets. Detailed architectural and training details can be found in Section B.2.2 of the Appendix.
5.3 RESULTS
Figure 7 shows the quality of predictions on video pendulum on two metrics: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) as a function of predicted time distance. We select the models which have the best cumulative metrics over the first 800 timesteps on a validation set.
For the CNN-VAE, effects of disentanglement are more subtle. We observe that, in-distribution, the disentangled CNN-VAE-SD has very similar quality when compared to the CNN-VAE. For the OOD
datasets, though, disentanglement offers improved long-term predictions. The improvement is more noticeable on the OOD test-sets, indicating that disentanglement can help with OOD robustness. For RSSM, we first note that both models perform significantly better than the CNN-VAE models, which is expected since they are considered competitive in long-term video prediction. Disentanglement in RSSM seems to produce a trade-off. The plain RSSM model better in short-term prediction but its performance deteriorates with time, reaching VAE-CNN levels in all metrics. On the other hand, the RSSM-SD model provides the best long-term scores in all metrics and all datasets. Qualitative results in Figure 5 show that almost all models produce accurate short time predictions (approximately up to 200 time-steps). This further stresses the importance of disentanglement for long-term performance. In terms of OOD robustness, disentanglement also appears to be helping. While the RSSM-SD model lacks in short-term prediction quality on the in-distribution test-set, this performance gap closes as the OOD test-sets get harder. More specifically, on the in-distribution test-set the RSSM-SD overtakes RSSM in SSIM after around 400 frames, while in the OOD Easy and Hard test sets, this happens around 350 and 250 time-steps respectively. This narrowing gap indicates robustness improves with increasing distribution shifts. The above findings are corroborated by LPIPS (Zhang et al., 2018) comparisons (Figure 13 and Table 10 of the Appendix). Furthermore, the qualitative results show that all models accurately capture the appearance of the pendulum even long-term. Where they differ is on how well they capture the dynamics of the pendulum movement. This could offer an explanation why disentangling the domain from the dynamics is important, and why in practice offers better long-term and out-of-distribution performance.
Overall, experiments suggest that supervised disentanglement can be used to model dynamical systems in video sequences, resulting in improved long-term and OOD performance.
6 CONCLUSIONS
Using supervised disentanglement of domain parameters in generative models is a promising avenue for improving robustness. Our experiments show that it can improve both OOD generalization and long-term prediction of dynamical systems. This was demonstrated in phase-space with VAEs and also in video sequence modelling with state-of-the-art RSSMs.
By treating the domain parameters as factors of variation of the data and applying supervised disentanglement, several inductive biases are potentially enforced. First, the model in addition to prediction also performs “soft” system identification which acts as a regularizer. Second, it creates an implicit hierarchy such that some latent variables correspond to sequence-wide domain parameters and the rest capture instant dynamics. We speculate that this could additionally make the latent space more interpretable. Third, if the model can correctly extract the parameters this mean that the prediction is based on both of them which is closer to how numerical integrators work, where the domain is known. All of these could lead the model to learn the correct causal structure of the data. Nevertheless, using privileged information for OOD robustness is not always straightforward and requires further exploration. This is evident by the results of the MLP autoencoders which do not yield as consistent improvements. A criticism of our method could be that cheap privileged information is not always available and/or depends on using simulated data. Firstly, training on simulations is an increasingly appealing option because it is a cheap way to generate data to begin with. This is, also, clearly demonstrated by the many advancements on techniques like sim2real (Peng et al., 2017) that try to bring models trained in simulated data to the real world. So there seems to be no reason not to use the privileged information that comes with simulated data. Under that light supervised disentanglement can provide a pathway for real world applications where robustness in dynamical system prediction is critical. Applying the method to other datasets where there are more complex dynamic can increase its relevance. Sequence-wide parameters could also be exploited through self-supervision.
REPRODUCIBILITY STATEMENT
We provide all the necessary code to reproduce our experiments at the anonymous repo https: //anonymous.4open.science/r/dis-dyn-systems (will be de-anonymized after the review process). The repo contains code for generating all the datasets from scratch and also code for training all the models presented in this work. The README also contains instructions on how to train the models. The hyperparameters we have used are clearly and thoroughly presented in the
Appendix. These steps should significantly help others reproduce our experiments. For any further clarifications, you are encouraged to contact the corresponding author(s).
A DATASETS
A.1 PHASE SPACE
For simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.01 seconds. Each simulated sequence has a different combination of factors of variation. Simulation of the pendulum uses an initial angle \ which is randomly between 10> − 170> while the angular velocity l is 0. For the other two systems the initial conditions are always the same to avoid pathological configurations.
A.2 VIDEO PENDULUM
This data set contains image sequences of a moving pendulum under different conditions. The positions of the pendulum are first computed by a numerical simulator and then rendered in pixel space as frames of dimension 64 × 64. An example image sequence is shown in Figure 10. For the simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.05 seconds. The length of the pendulum, the strength of gravity and the initial conditions (position, momentum) are set to different values so that each trajectory slightly differs from the others. The initial angle and initial velocity are drawn from the same uniform distribution for all data sets. The initial angle ranges from 30◦ to 170◦ and the initial velocity ranges from −2A03/B to 2A03/B. For training, validation and in-distribution testing set, the gravity ranges from 8.0<2/B to 12.0<2/B, and the pendulum length ranges from 1.20< to 1.40<. In the easy out-of-distribution testing set, the gravity is between 12.0 − 12.5<2/B and the pendulum length is between 1.40 − 1.45<, while in the hard out-of-distribution testing set, the gravity is 12.5−13.0<2/B and the pendulum length is 1.45−1.50<. The distributions of these parameters are shown in Figure 2.
B TRAINING AND HYPERPARAMETERS
B.1 PHASE SPACE
During training, back-propagation is applied after a single forward pass. The input and output of the models are shorter than the sequence length, so to cover the whole sequence we use random starting points per batch, both during training and testing. Both the VAE and the MLP AE have an encoder with two hidden layers of size 400 and 200, and a mirrored decoder. The LSTM model has two stacked LSTM cells with a hidden size of 100, which results in an equivalent number of neurons. We used the Adam optimizer with β₁ = 0.9 and β₂ = 0.999. A learning-rate scheduler was applied, whose patience and scaling factor are hyperparameters. The maximum number of epochs was set to 2000, but early stopping on a validation set typically led to significantly fewer epochs.
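A minimal PyTorch sketch of this setup (hidden sizes, Adam betas and the plateau scheduler follow the text; input/output/latent sizes and activation placement are assumptions taken from the hyperparameter tables):

import torch
import torch.nn as nn

def mlp(sizes):
    # plain feed-forward stack with Leaky ReLU between hidden layers
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.LeakyReLU())
    return nn.Sequential(*layers)

input_dim, latent_dim, output_dim = 50, 16, 10
encoder = mlp([input_dim, 400, 200, latent_dim])    # two hidden layers of size 400 and 200
decoder = mlp([latent_dim, 200, 400, output_dim])   # mirrored decoder

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()),
    lr=1e-3, betas=(0.9, 0.999))
# plateau scheduler whose patience and factor are treated as hyperparameters
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.3, patience=40)
# per epoch: train, compute the validation loss, then scheduler.step(val_loss);
# training stops early once the validation loss stops improving (max 2000 epochs)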
Table 5: 3-body system hyperparameter grid (one column per model: MLP, MLP-SD, VAE, VAE-SD, LSTM)
Shared settings: input size 50; output size 10; hidden layers [400, 200] (LSTM: two stacked cells with hidden size 50 or 100); latent size 8, 16, 32 (not applicable to the LSTM); nonlinearity Leaky ReLU (LSTM: Sigmoid); no gradient clipping; no layer norm on the latent.
Learning rate: 10^-3, 10^-4 (LSTM: 10^-3).
Batch size: 16, 32 (MLP); 16 (MLP-SD, VAE, VAE-SD); 16, 64, 128 (LSTM).
Scheduler patience: 30, 40, 50, 60 (LSTM: 20, 30). Scheduler factor: 0.3, 0.4 (MLP, VAE, VAE-SD); 0.3 (MLP-SD, LSTM).
Decoder γ: 10^-5, 10^-6 (VAE and VAE-SD only).
Supervision scaling: linear (MLP-SD and VAE-SD only). Supervision δ: 0.05, 0.1, 0.2, 0.3 (MLP-SD); 0.1, 0.2 (VAE-SD).
Number of experiments: 96 per model.
B.2 VIDEO PENDULUM
B.2.1 CNN-VAE MODEL
The encoder has 4 convolutional layers with 32, 32, 64 and 64 maps respectively. The filter size is 3, the padding is 1 and the stride is 2. The last convolutional layer is flattened into a 256-dimensional vector that becomes the inner VAE input. The decoder has 4 convolutional layers (64, 64, 32, 32) with bilinear upsampling. The model input and output are 20 frames each. For the models without supervised disentanglement, a grid search is performed over the β value, the size of the latent space and the roll-out length during training. For the models with supervised disentanglement, the grid search covers the β value, the size of the latent space, the time step of the data set, the roll-out length during training and the supervision multiplier. The detailed search grid is summarised in Table 6. The learning rate was 10⁻³ and an Adam optimizer (β₁ = 0.9, β₂ = 0.999) was used. We also applied early stopping on the cumulative reconstruction loss over the first 200 steps on a validation set, with a maximum of 1000 epochs.
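A sketch of this convolutional encoder/decoder pair in PyTorch is shown below (channel counts, kernel size, stride and bilinear upsampling follow the text; the linear projection to the 256-dimensional code, the activation choice and the output nonlinearity are assumptions added for dimensional consistency):

import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    # 64x64 frame -> 4 strided conv layers (32, 32, 64, 64 maps) -> 256-d inner-VAE input
    def __init__(self, in_channels=1, code_dim=256):
        super().__init__()
        chans = [in_channels, 32, 32, 64, 64]
        blocks = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1),
                       nn.LeakyReLU()]
        self.conv = nn.Sequential(*blocks)
        self.to_code = nn.Linear(64 * 4 * 4, code_dim)   # flatten 64 maps of 4x4 to a 256-d code

    def forward(self, x):                  # x: (B, C, 64, 64)
        return self.to_code(self.conv(x).flatten(1))

class FrameDecoder(nn.Module):
    # 256-d code -> 4 conv layers (64, 64, 32, 32 maps) with bilinear upsampling -> 64x64 frame
    def __init__(self, out_channels=1, code_dim=256):
        super().__init__()
        self.from_code = nn.Linear(code_dim, 64 * 4 * 4)
        chans = [64, 64, 32, 32, out_channels]
        blocks = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                       nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                       nn.LeakyReLU()]
        blocks[-1] = nn.Sigmoid()          # pixel intensities in [0, 1]
        self.deconv = nn.Sequential(*blocks)

    def forward(self, z):                  # z: (B, 256)
        h = self.from_code(z).view(-1, 64, 4, 4)
        return self.deconv(h)              # (B, C, 64, 64)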
B.2.2 RSSM MODELS
For the RSSM model we follow the architecture parameters described in Hafner et al. (2018) and Saxena et al. (2021). For training we use sequences of 100 frames and a batch size of 100. All models were trained for 300 epochs with a learning rate of 10⁻³ and an Adam optimizer (β₁ = 0.9, β₂ = 0.999). During testing the model uses 50 frames as context (input). The parameters we tune appear in Table 7.
C PHASE SPACE RESULTS
[Plot panels for Figure 12: phase portraits of the Pendulum, Lotka-Volterra (Prey vs. Predator) and 3-body system (x vs. y); legend: Ground truth, MLP, VAE, VAE-SD, Noisy input.]
Figure 12: Model predictions in phase space. Trajectories are taken from the OOD Test-Set Hard of each system. The model input is noisy. The circle and bold ‘×’ markers denote the start and end of the ground truth trajectories respectively.
D VIDEO PENDULUM RESULTS
LPIPS | 1. What is the focus of the paper regarding dynamical systems and OOD evaluations?
2. What are the strengths of the paper, particularly in its presentation and testing?
3. What are the weaknesses of the paper, especially regarding its claims and experiments?
4. How does the reviewer assess the improvement of models with disentanglement in the phase space setting?
5. How can the performance variance in the video prediction metric be assessed?
6. What insights could be gained from analyzing the type or class of errors reduced by disentanglement?
7. Why did the authors not vary the time step, a potentially crucial hyperparameter? | Summary Of The Paper
Review | Summary Of The Paper
The paper studies the performance of dynamical systems learned from data, with a focus on out-of-distribution (OOD) evaluations. The authors consider whether disentangling dynamical system parameters in the latent space can improve the generalization of the models; these parameters are treated as privileged information available from the reference (ground-truth) simulations. The authors carry out experiments on several dynamical systems: the pendulum, the Lotka-Volterra system and the three-body problem. Additionally, an experiment on video prediction of a swinging pendulum is performed. The authors find that additional disentanglement can improve the generalization performance of the models and, in the video prediction setting, leads to better long-term predictions based on structural and perceptual image metrics.
Review
Strengths:
Clear statement of the underlying hypothesis being tested
Clear presentation of the results and supporting information
Extensive sweeps over hyperparameters
Weaknesses:
Improvement of the models with disentanglement in the phase space setting appears marginal; based on the provided visualizations it is not clear that there is a systematic way in which models with disentanglement perform better. A more expressive analysis of the errors might be helpful to assess this aspect (perhaps the distribution of errors across the dataset for several fixed samples?)
It’s hard to assess how much variance in performance is present in the video prediction metric; this is a general challenge with selecting best-performing models, as it completely masks away the error bars. (Providing several model instances would help to evaluate the significance better.)
While marginal improvements are presented in coarse performance metrics, an insight into the type/class of errors that are being reduced would be very interesting.
One potentially important hyperparameter (time step) was not varied, which often significantly affects the prediction accuracy. |
ICLR | Title
Disentangled generative models for robust dynamical system prediction
Abstract
Deep neural networks have become increasingly of interest in dynamical system prediction, but out-of-distribution generalization and long-term stability still remain challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous OOD evaluations.1 Results indicate that disentangled VAEs adapt better to domain parameter spaces that were not present in the training data. At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models in video sequences.2
1 INTRODUCTION
The robust prediction of dynamical systems behaviour remains an open question in machine learning, and engineering in general. The ability to make robust predictions is important not only for forecasting systems of interest like weather (Garg et al., 2021; Ravuri et al., 2021) but even more so because it enables innovations in fields like system control, autonomous planning (Hafner et al., 2018) and computer aided engineering (Brunton et al., 2020). In this context, the use of deep generative models has recently gained significant traction for sequence modelling (Girin et al., 2020).
Robustness of machine learning models can be considered along two axes: long-term prediction and out-of-distribution (OOD) performance. Accurate long-term prediction can be notoriously difficult in many dynamical systems, because error accumulation can diverge in finite time (Zhou et al., 2020; Raissi et al., 2019), a problem that even traditional solvers can suffer from. More importantly, machine learning techniques are known to suffer from poor OOD performance (Goyal & Bengio, 2020), i.e. when they are employed in a setting they had not encountered during the training phase.
Before addressing the OOD problem, we must first define what constitutes OOD in dynamical systems. We start from the observation that even simple dynamical systems, e.g. the swinging pendulum or the n-body system, can have multiple continuous parameters that affect their evolution. These parameters can be manifested as differential equation coefficients, boundary or initial conditions, etc. Our starting point is to consider distinct ranges of those parameters as separate domains. Under this view, it becomes apparent why OOD prediction of dynamical systems can be hard: capturing the whole range of those parameters in a single training set is unrealistic (Fotiadis et al., 2020) and further inductive biases are required (Miladinović et al., 2019; Bird & Williams, 2019; Barber et al., 2021). From a dynamical systems point of view, different parameters can produce widely different trajectories in phase space. A motivating example is bifurcations, which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour.
We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters. Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting (Bengio et al., 2012). System identification can be used to extract parameters, but
1Code for reproducing our experiments at: https://anonymous.4open.science/r/dis-dyn-systems/
2Animated phase-space and video predictions are available at: https://bit.ly/dis-dyn-systems
requires knowledge of the underlying system to be computationally effective (Ayyad et al., 2020). We, instead, leverage advances in Variational Autoencoders (VAEs) (Kingma & Welling, 2014) that enable learning disentangled representations. Disentanglement enables different latent variables to focus on different factors of variation of the data distribution, and has been applied in the context of image generation (Higgins et al., 2017; Kim & Mnih, 2018). This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective: from all the generative models which can have the same marginal distribution, identify the one with the true causal factors. To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation. Recent findings (Locatello et al., 2018) emphasize the vital role of inductive biases from models or data for useful disentanglement. Unsupervised disentanglement, based on the assumption of domain stationarity, is a promising direction (Miladinović et al., 2019; Li & Mandt, 2018). Nevertheless, this leaves a wealth of ground truth domain parameters, which can be cheaply collected in simulated datasets. This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks (Sarafianos et al., 2017; Lee et al., 2018). We thus use supervised disentanglement (Locatello et al., 2019) by leveraging the ground truth domain parameters. To the best of our knowledge, using domain parameters information this way, has not been previously explored in the dynamical system prediction setting.
Contributions While others have treated domain parameters as factors of variation in the data distribution, our work is the first, to the best of our knowledge, that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way. We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum. Disentanglement has, in the past, been mostly applied to VAEs because they are easily amenable to it. We additionally apply disentanglement on a more powerful, hybrid, model with both stochastic and deterministic parts (Hafner et al., 2018). In doing so, we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction (Saxena et al., 2021). In all cases, the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift. To our understanding, this is the first time such a rigorous OOD test is performed. Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD. In modelling dynamics in video sequences, results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction.
Limitations This work focuses on dynamical system prediction. While the results can potentially open up many applications in general time-series modelling, this is out of the scope of this work. We prioritize the empirical study of OOD downstream-task performance; inspection of the disentangled representations with appropriate metrics is left out of scope here.
2 RELATED WORK
VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components. This can be seen as finding the true causal model of the data. While supervised disentanglement in generative models is a long-standing idea (Mathieu et al., 2016), information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs (Higgins et al., 2017; Kim & Mnih, 2018). The impossibility result from (Locatello et al., 2018) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data. Hence, the focus shifted back to semi- or weaklysupervised disentanglement approaches (Locatello et al., 2019; 2020). While most of these methods focus on disentanglement metrics, we opt to directly assess using a downstream prediction task.
Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting, there is a growing interest in applying it to sequence dynamics. Using a bottleneck based on physical knowledge, Iten et al. (2018) learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions in OOD data (Barber et al., 2021). Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics (Fraccaro et al., 2017; Li & Mandt, 2018), but, focus mostly on modelling variations in the content, failing to take dynamics into account. In hierarchical approaches (Karl et al., 2017), different layers of latent variables correspond to different timescales: for example,
in speech analysis for separating voice characteristics and phoneme-level attributes (Hsu et al., 2017). In an approach similar to our work, Miladinović et al. (2019) separate the dynamics from sequence-wide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way.
Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model (Krishnan et al., 2015; Karl et al., 2017). Feed-forward models, with necessary inductive biases, have been used for sequence modelling in dynamical systems (Greydanus et al., 2019; Fotiadis et al., 2020). In works like Hamiltonian Neural Networks Greydanus et al. (2019) the domain is fixed; together with Barber et al. (2021), our work is an attempt in tackling domain variability.
Privileged information for domain adaptation. Using privileged information during training has been shown to help with domain adaptation in computer vision tasks. Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain (Lee et al., 2018), while clip art data can help with domain transfer in an action recognition task (Sarafianos et al., 2017).
3 METHODS
3.1 VARIATIONAL AUTOENCODERS
Variational autoencoders (VAEs) (Kingma & Welling, 2014) offer a principled approach to latent variable modeling by combining a variational inference model q_φ(z|x) with a generative model p_θ(x|z). As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:

\mathcal{L}_{\phi,\theta}(x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x)\,\|\,p(z)\right) \tag{1}
The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior.
Design choices for the model We use an isotropic unit Gaussian prior p(z) = N(z | 0, I), which helps to disentangle the learned representation (Higgins et al., 2017). The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance q_φ(z | x) = N(z | μ_z, Σ_z), allowing a closed-form KL-divergence, while the decoder has a Laplace distribution p_θ(x | z) = Laplace(x | μ_x, γI) with constant diagonal covariance γ > 0, which is tuned empirically. This leads to an L1 loss that provides improved results in some problems (Mathieu et al., 2018) and empirically works better in our case. The parameters μ_z ≡ μ_z(x; φ), Σ_z ≡ diag[σ_z(x; φ)]², and μ_x ≡ μ_x(z; θ) are computed via feed-forward neural networks.
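A minimal sketch of the resulting per-sample objective (diagonal-Gaussian posterior, unit-Gaussian prior and a Laplace decoder with fixed scale γ, which reduces the reconstruction term to an L1 loss; tensor names and the omission of additive constants are assumptions):

import torch

def reparameterize(mu_z, logvar_z):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. the encoder
    return mu_z + (0.5 * logvar_z).exp() * torch.randn_like(mu_z)

def vae_elbo_loss(x_target, mu_x, mu_z, logvar_z, gamma):
    # Laplace likelihood with constant scale gamma -> L1 reconstruction (up to constants)
    recon = (x_target - mu_x).abs().sum(dim=-1) / gamma
    # closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = 0.5 * (mu_z.pow(2) + logvar_z.exp() - logvar_z - 1.0).sum(dim=-1)
    return (recon + kl).mean()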
3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE
Apart from the disentanglement that stems from the choice of prior p(z), we explicitly disentangle part of the latent space so that it corresponds to the domain parameters of each input sequence. We achieve this by using a regression loss term L_ξ(z_{1:k}, ξ) between the ground truth domain parameters ξ ∈ R^k and the output of the corresponding latents, z_{1:k}. We empirically opted for an L1 loss, corresponding to a Laplacian prior with mean ξ and unitary covariance. Previous methods have reported that binary cross-entropy works better than L2 (Locatello et al., 2019), but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function g(μ_{z_i}) which linearly scales μ_{z_i} between the min and max values of the corresponding factor of variation:

g(\mu_{z_i}) = \mu_{z_i}\cdot\big(\max(\xi_i) - \min(\xi_i)\big) + \min(\xi_i) \tag{2}
where ξ_i are the domain parameters and min(ξ_i), max(ξ_i) their corresponding minimum and maximum values in the training set. In all cases, the regression term is weighted by a parameter δ which is empirically tuned. Plugging these choices in results in the following loss function:
\mathcal{L}_{\phi,\theta}(x) = \mathbb{E}_{q_\phi(z \mid x_{1:n})}\!\left[\tfrac{1}{\gamma}\,\lVert x_{n+1:n+o} - \mu_x(z;\theta)\rVert_1\right] + d\log\gamma \quad\text{(prediction loss)}
\qquad + \lVert\sigma_z(x_{1:n};\phi)\rVert_2^2 - \log\,\mathrm{diag}[\sigma_z(x_{1:n};\phi)]^2 + \lVert\mu_z(x_{1:n};\phi)\rVert_2^2 \quad\text{(KL-divergence)} \tag{3}
\qquad + \delta\,\lVert\xi - g(\mu_{z_{1:k}}(x_{1:n};\phi))\rVert_1 \quad\text{(supervised disentanglement loss)}
Using the reparameterization trick (Kingma & Welling, 2014), the loss is amenable to optimization by stochastic gradient descent with mini-batches. The model architecture can be seen in Figure 1 (left).
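A sketch of the extra supervised term, including the linear scaling g of Eq. (2), could look as follows (variable names are assumptions; min/max are per-factor statistics computed on the training set):

import torch

def scale_to_factor_range(mu_z_head, xi_min, xi_max):
    # g(mu_z_i): map the first k latent means linearly onto each factor's training range
    return mu_z_head * (xi_max - xi_min) + xi_min

def supervised_disentanglement_term(mu_z, xi, xi_min, xi_max, delta, use_scaling=True):
    # mu_z: (B, latent_dim); xi: (B, k) ground-truth domain parameters of each sequence
    k = xi.shape[-1]
    pred = mu_z[:, :k]
    if use_scaling:                              # scaled variant; the plain variant skips g
        pred = scale_to_factor_range(pred, xi_min, xi_max)
    return delta * (xi - pred).abs().sum(dim=-1).mean()   # L1 regression weighted by delta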
3.3 DISENTANGLEMENT FOR VIDEO DYNAMICS
We further investigate the effect of disentanglement on video sequence dynamics. To this end, two generative models are used. The first is derived from the VAE formulation of the previous section, is called CNN-VAE, and is similar to the VAE with the addition of a convolutional encoder and decoder. The encoder projects the input frames down to a low-dimensional space which can be thought of as equivalent to the phase space of the system. A VAE is applied to this projection to predict future coordinates in the "phase space". The decoder then maps the predictions of the VAE back to pixel space. The schematic of the model can be seen in Figure 1 (right).
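A sketch of how the three components fit together at prediction time (encode frames into a low-dimensional code, roll out the inner VAE in that space, then decode back to pixels; module interfaces and tensor shapes are assumptions):

import torch

def cnn_vae_rollout(frames, frame_encoder, inner_vae, frame_decoder, horizon):
    # frames: (B, T_in, C, H, W) context frames
    B, T, C, H, W = frames.shape
    codes = frame_encoder(frames.reshape(B * T, C, H, W)).reshape(B, T, -1)  # to "phase space"
    window, preds = codes, []
    for _ in range(horizon):
        next_code = inner_vae(window)                          # (B, code_dim) predicted next code
        preds.append(next_code)
        window = torch.cat([window[:, 1:], next_code[:, None]], dim=1)  # re-feed the prediction
    preds = torch.stack(preds, dim=1)                           # (B, horizon, code_dim)
    frames_out = frame_decoder(preds.reshape(B * horizon, -1))
    return frames_out.reshape(B, horizon, C, H, W)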
The second model we use is the Recurrent State Space Model (RSSM), which has been successfully used for planning (Hafner et al., 2018). Since RSSM is a hybrid model combining deterministic and variational components, it allows us to assess disentanglement outside the limited scope of VAEs. Furthermore, using a state-of-the-art model in long-term video prediction (Saxena et al., 2021) allows us to identify the limits of applying disentanglement in competitive models. The loss function we use shares the same formulation as in the original work of Hafner et al. (2018), with the addition of the supervised disentanglement loss. Since in the RSSM formulation there are latent variables for each time step, we apply a disentanglement loss to all of them, which is empirically set to be an L2 loss:
\mathcal{L}_{\mathrm{RSSM\text{-}SD}} = \sum_{t=1}^{T}\Big(\underbrace{\mathbb{E}_{q(s_t \mid o_{\le t})}[\ln p(o_t \mid s_t)]}_{\text{reconstruction}} - \underbrace{\mathbb{E}_{q(s_{t-1} \mid o_{\le t-1})}\big[\mathrm{KL}[q(s_t \mid o_{\le t})\,\|\,p(s_t \mid s_{t-1})]\big]}_{\text{prediction}} + \underbrace{\delta\,\mathbb{E}_{q(s_t \mid o_{\le t})}\big[\lVert\xi - s_t^{(1:k)}\rVert_2\big]}_{\text{supervised disentanglement loss}}\Big) \tag{4}

where o_t are the observations, s_t the stochastic latent variables at time t, ξ the k-dimensional domain parameters, and δ tunes the supervised disentanglement strength.
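As a sketch, the added term can sit on top of an existing RSSM step loss as follows (the layout of the stochastic state and the names of the precomputed reconstruction/KL terms are assumptions; only the added term corresponds to Eq. (4)):

import torch

def rssm_sd_loss(recon_term, kl_term, stoch_states, xi, delta):
    # stoch_states: (B, T, stoch_dim) posterior samples, one per time step
    # xi: (B, k) sequence-wide domain parameters, broadcast over time
    k = xi.shape[-1]
    sd = ((stoch_states[:, :, :k] - xi[:, None, :]) ** 2).sum(dim=-1).mean()
    return recon_term + kl_term + delta * sd   # L2 regression added to the usual RSSM objective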
4 EXPERIMENT - ODE PHASE SPACE DYNAMICS
4.1 DATASETS
In the phase-space experiments we compare the models on three well-studied dynamical systems: the swinging pendulum, the Lotka-Volterra equations used to model prey-predator populations, and the planar 3-body system.
[Figure panels: phase-space trajectories for the Pendulum, Lotka-Volterra (Prey vs. Predator) and 3-body system (x vs. y); legend: Ground truth, MLP, VAE, VAE-SD, Noisy input.]
The systems were chosen for varied complexity in terms of degrees of freedom, number of ODE equations and factors of variation. For the pendulum we consider one factor of variation, its length l; Lotka-Volterra has 4 factors of variation α, β, γ, δ; and the 3-body system also has 4 factors of variation (the three masses m1, m2, m3 and a fourth parameter). Factors are drawn uniformly from a predetermined range which is the same between the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and Hard, representing a smaller and a bigger shift from the original range. As a visual example, the distribution of the factors of variation for the Lotka-Volterra system is illustrated in Figure 9 of the Appendix. The data were additionally corrupted with Gaussian noise. Dataset details can be found in Table 1 of the Appendix.
4.2 MODELS AND TRAINING
The main goal of these experiments is to assess whether OOD prediction can be improved by disentangling dynamical system parameters in the latent space of VAEs. We opt to use simple models to allow more experiments and comparisons. Our main baseline is the VAE, upon which we propose two enhancements that leverage supervised disentanglement. The first, termed VAE-SD, uses supervised disentanglement without a scaling function, while the second, termed VAE-SSD, uses the additional linear scaling function g(μ_{z_i}) applied to the latent variable mean vector μ_z, as described in Section 3.2. Another baseline is a multilayer perceptron (MLP) autoencoder, which allows comparison with a deterministic counterpart of the VAE. We also use supervised disentanglement on the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess whether the privileged information can improve deterministic models. Lastly, we include an LSTM model, a popular choice for low-dimensional sequence modelling (Yu et al., 2019), as a representative recurrent method.
Early experiments revealed significant variance in the performance of the models, depending on hyperparameters. Under these conditions, we took various steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some of which are shared while others are model-specific. Third, we conduct a thorough grid search on the hyperparameters to avoid undermining any model (details can be found in Tables 3, 4 and 5 of the Appendix). Lastly, we run the same number of experiments for all models, which amounts to 1440 trained models in total, as summarized in Table 2 of the Appendix.
[Figure panels: SSIM and PSNR as a function of prediction timestep (0–800) on the Test-set, OOD Test-set Easy and OOD Test-set Hard, for RSSM, RSSM-SD, CNN-VAE and CNN-VAE-SD.]
4.3 RESULTS
For each dynamical system we focus on the performance on the three test-sets, the in-distribution test set, which shares the same parameter distribution with the training set, and the two OOD test-sets (Easy and Hard), which represent an increasing parameter shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth for the first 200 time-steps. We consider this to be sufficiently long-term as it is at least 20 times longer than the prediction horizon used during training. Long predictions are obtained by re-feeding the model predictions back as input. This approach has been shown to work well in systems where the dynamics are locally deterministic (Fotiadis et al., 2020). A summary of the quantitative results can be found in Figures 3 & 4 and Table 8. To account for the variability in the results, we present a summary of the best 5 runs of each model, selected by validation MAE. We generally observe that model performance is correlated to the distribution shift of test-sets, and this is consistent for all systems and models. The MAE is increasing as we move from the in-distribution test-set to the OOD Easy and Hard test-sets. Nevertheless, not all models suffer equally from the OOD performance drop.
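A sketch of this evaluation loop (autoregressive rollout by re-feeding predictions, then the cumulative MAE over the first 200 steps; the model interface and tensor shapes are assumptions):

import torch

@torch.no_grad()
def rollout_mae(model, context, ground_truth, horizon=200):
    # context: (B, T_in, D) noisy input states; ground_truth: (B, horizon, D)
    window, preds, total = context, [], 0
    while total < horizon:
        out = model(window)                                       # (B, T_out, D) next states
        preds.append(out)
        total += out.shape[1]
        window = torch.cat([window[:, out.shape[1]:], out], dim=1)   # re-feed predictions as input
    preds = torch.cat(preds, dim=1)[:, :horizon]
    return (preds - ground_truth).abs().mean()                    # cumulative MAE over the horizon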
Comparing the VAEs (Figure 3), we see that disentangled VAE models offer a substantial and consistent improvement over the VAE across all 3 dynamical systems. The improvement is more pronounced for the OOD test-sets where the distribution shift is greater, a strong indication that disentanglement of domain parameters is an inductive bias that can lead to better generalization. We also observe that VAE-SSD models the in-distribution data better than VAE-SD. This seems to come at a slight overfitting cost, however, since VAE-SD provides better OOD extrapolation in most cases. This could be explained by the scaling function depending on the min and max factor values of the training set: the extra information allows the model to better capture the training data but sacrifices some generalization capacity.
On the other hand, disentanglement results for the MLP are mixed. While in-distribution MLP-SD offers better results than the plain MLP, on the OOD test-sets MLP-SD only performs favourably on the pendulum data. Furthermore, in Lotka-Volterra the MLP-SD models are very unstable, a drawback that also affects some VAE-SD models (see Table 9 of the Appendix). Probabilistic models seem better suited to capture the variation in the data. The contrast between VAE-SD and MLP-SD illustrates that making use of privileged information and latent space disentanglement are not trivial
and more work is needed to help us understand what works in practice and why. Lastly, the LSTM (Figure 11 & Table 8 of the Appendix) is only comparable in the pendulum dataset and only for small OOD shifts. Qualitative predictions can be found in Figure 5.
5 EXPERIMENT - VIDEO SEQUENCE DYNAMICS
In the first experiment we assessed supervised disentanglement for phase space prediction, where the states of the input trajectories are fully observable and only the domain parameters are unknown. This experiment extends the idea of supervised disentanglement to pixel-space input and output, where the physical states have to be inferred by the model.
5.1 DATASETS
The dynamical system we use in this experiment is the swinging pendulum, a common benchmark for modelling dynamics in video sequences (Brunton et al., 2020; Barber et al., 2021). We consider 4 factors of variation: the length l, the gravity g, the initial angle θ and the angular velocity ω. Factors are drawn uniformly from a predetermined range. As before, we create a test-set and two additional OOD test-sets (Easy and Hard). The OOD sets have length and gravity values outside of the original range, while the initial conditions θ, ω are drawn from the same distribution. The distribution of the factors of variation for the test-sets is illustrated in Figure 2. The trajectories are first computed in phase space using a numerical simulator and then rendered as video frames of 64 × 64 pixels. More details about the dataset can be found in Section A.2 of the Appendix.
5.2 MODELS AND TRAINING
In this experiment we use two different models, CNN-VAE and RSSM. CNN-VAE is described in Section 3.3 and architectural details can be found in Section B.2.1. During training of the CNN-VAE, the inner VAE is used recursively to predict, with the number of recursions being a hyperparameter (Table 6 of the Appendix). We found that this type of training leads to more stable long-term predictions. In total, 48 CNN-VAE models are trained, half of which use supervised disentanglement (CNN-VAE-SD). The RSSM model is a generative model including both a stochastic and a deterministic component. We only use supervised disentanglement on the stochastic part, and term that model RSSM-SD. Disentanglement is applied to all four factors of variation of the domain, despite only length and gravity varying between datasets. Detailed architectural and training details can be found in Section B.2.2 of the Appendix.
5.3 RESULTS
Figure 7 shows the quality of predictions on video pendulum on two metrics: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) as a function of predicted time distance. We select the models which have the best cumulative metrics over the first 800 timesteps on a validation set.
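The per-timestep curves can be computed with a short loop such as the one below (using scikit-image's metric functions; frame layout and value range are assumptions):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def per_timestep_quality(pred_frames, true_frames):
    # pred_frames, true_frames: (T, H, W) arrays with values in [0, 1]
    ssim_curve, psnr_curve = [], []
    for pred, true in zip(pred_frames, true_frames):
        ssim_curve.append(structural_similarity(true, pred, data_range=1.0))
        psnr_curve.append(peak_signal_noise_ratio(true, pred, data_range=1.0))
    return np.array(ssim_curve), np.array(psnr_curve)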
For the CNN-VAE, effects of disentanglement are more subtle. We observe that, in-distribution, the disentangled CNN-VAE-SD has very similar quality when compared to the CNN-VAE. For the OOD
datasets, though, disentanglement offers improved long-term predictions. The improvement is more noticeable on the OOD test-sets, indicating that disentanglement can help with OOD robustness. For RSSM, we first note that both models perform significantly better than the CNN-VAE models, which is expected since they are considered competitive in long-term video prediction. Disentanglement in RSSM seems to produce a trade-off. The plain RSSM model performs better in short-term prediction, but its performance deteriorates with time, reaching CNN-VAE levels in all metrics. On the other hand, the RSSM-SD model provides the best long-term scores in all metrics and all datasets. Qualitative results in Figure 5 show that almost all models produce accurate short-term predictions (approximately up to 200 time-steps). This further stresses the importance of disentanglement for long-term performance. In terms of OOD robustness, disentanglement also appears to be helping. While the RSSM-SD model lags in short-term prediction quality on the in-distribution test-set, this performance gap closes as the OOD test-sets get harder. More specifically, on the in-distribution test-set RSSM-SD overtakes RSSM in SSIM after around 400 frames, while in the OOD Easy and Hard test sets this happens around 350 and 250 time-steps respectively. This narrowing gap indicates that robustness improves with increasing distribution shift. The above findings are corroborated by LPIPS (Zhang et al., 2018) comparisons (Figure 13 and Table 10 of the Appendix). Furthermore, the qualitative results show that all models accurately capture the appearance of the pendulum even long-term; where they differ is in how well they capture the dynamics of the pendulum movement. This could explain why disentangling the domain from the dynamics is important and why it offers better long-term and out-of-distribution performance in practice.
Overall, experiments suggest that supervised disentanglement can be used to model dynamical systems in video sequences, resulting in improved long-term and OOD performance.
6 CONCLUSIONS
Using supervised disentanglement of domain parameters in generative models is a promising avenue for improving robustness. Our experiments show that it can improve both OOD generalization and long-term prediction of dynamical systems. This was demonstrated in phase-space with VAEs and also in video sequence modelling with state-of-the-art RSSMs.
By treating the domain parameters as factors of variation of the data and applying supervised disentanglement, several inductive biases are potentially enforced. First, in addition to prediction the model also performs a “soft” system identification, which acts as a regularizer. Second, it creates an implicit hierarchy in which some latent variables correspond to sequence-wide domain parameters while the rest capture the instantaneous dynamics. We speculate that this could also make the latent space more interpretable. Third, if the model can correctly extract the parameters, its predictions are conditioned on them, which is closer to how numerical integrators work, where the domain is known. All of these could lead the model to learn the correct causal structure of the data. Nevertheless, using privileged information for OOD robustness is not always straightforward and requires further exploration, as evidenced by the results of the MLP autoencoders, which do not yield equally consistent improvements. A criticism of our method could be that cheap privileged information is not always available and/or depends on using simulated data. However, training on simulations is an increasingly appealing option precisely because it is a cheap way to generate data, as demonstrated by the many advances in techniques like sim2real (Peng et al., 2017) that aim to transfer models trained on simulated data to the real world. There is therefore no reason not to use the privileged information that comes with simulated data. In that light, supervised disentanglement can provide a pathway towards real-world applications where robustness of dynamical system prediction is critical. Applying the method to datasets with more complex dynamics would further increase its relevance, and sequence-wide parameters could also be exploited through self-supervision.
REPRODUCIBILITY STATEMENT
We provide all the necessary code to reproduce our experiments at the anonymous repository https://anonymous.4open.science/r/dis-dyn-systems (to be de-anonymized after the review process). The repository contains code for generating all the datasets from scratch as well as for training all the models presented in this work; the README includes instructions on how to train them. The hyperparameters we used are presented clearly and thoroughly in the Appendix. These steps should significantly help others reproduce our experiments. For any further clarifications, readers are encouraged to contact the corresponding author(s).
A DATASETS
A.1 PHASE SPACE
For simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.01 seconds. Each simulated sequence has a different combination of factors of variation. Simulation of the pendulum uses an initial angle θ drawn randomly between 10° and 170°, while the angular velocity ω is 0. For the other two systems the initial conditions are always the same, to avoid pathological configurations.
A.2 VIDEO PENDULUM
This data set contains image sequences of a moving pendulum under different conditions. The positions of the pendulum are first computed by a numerical simulator and then rendered in pixel space as frames of dimension 64 × 64. An example image sequence is shown in Figure 10. For the simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.05 seconds. The length of the pendulum, the strength of gravity and the initial conditions (position, momentum) are set to different values so that each trajectory slightly differs from the others. The initial angle and initial velocity are drawn from the same uniform distribution for all data sets. The initial angle ranges from 30° to 170° and the initial velocity ranges from −2 rad/s to 2 rad/s. For the training, validation and in-distribution testing sets, the gravity ranges from 8.0 m/s² to 12.0 m/s² and the pendulum length ranges from 1.20 m to 1.40 m. In the easy out-of-distribution testing set, the gravity is between 12.0–12.5 m/s² and the pendulum length between 1.40–1.45 m, while in the hard out-of-distribution testing set, the gravity is 12.5–13.0 m/s² and the pendulum length 1.45–1.50 m. The distributions of these parameters are shown in Figure 2.
B TRAINING AND HYPERPARAMETERS
B.1 PHASE SPACE
During training, back-propagation is applied after a single forward pass. The input and output of the models are shorter than the sequence length, so to cover the whole sequence we use random starting points per batch, both during training and testing. Both the VAE and the MLP AE have an encoder with two hidden layers of size 400 and 200, and a mirrored decoder. The LSTM model has two stacked LSTM cells with a hidden size of 100, which results in an equivalent number of neurons. We used the Adam optimizer with β₁ = 0.9 and β₂ = 0.999. A learning-rate scheduler was applied, whose patience and scaling factor are hyperparameters. The maximum number of epochs was set to 2000, but early stopping on a validation set typically led to significantly fewer epochs.
Table 5: 3-body system hyperparameter grid (one column per model: MLP, MLP-SD, VAE, VAE-SD, LSTM)
Shared settings: input size 50; output size 10; hidden layers [400, 200] (LSTM: two stacked cells with hidden size 50 or 100); latent size 8, 16, 32 (not applicable to the LSTM); nonlinearity Leaky ReLU (LSTM: Sigmoid); no gradient clipping; no layer norm on the latent.
Learning rate: 10^-3, 10^-4 (LSTM: 10^-3).
Batch size: 16, 32 (MLP); 16 (MLP-SD, VAE, VAE-SD); 16, 64, 128 (LSTM).
Scheduler patience: 30, 40, 50, 60 (LSTM: 20, 30). Scheduler factor: 0.3, 0.4 (MLP, VAE, VAE-SD); 0.3 (MLP-SD, LSTM).
Decoder γ: 10^-5, 10^-6 (VAE and VAE-SD only).
Supervision scaling: linear (MLP-SD and VAE-SD only). Supervision δ: 0.05, 0.1, 0.2, 0.3 (MLP-SD); 0.1, 0.2 (VAE-SD).
Number of experiments: 96 per model.
B.2 VIDEO PENDULUM
B.2.1 CNN-VAE MODEL
The encoder has 4 convolutional layers with 32, 32, 64 and 64 maps respectively. The filter size is 3, the padding is 1 and the stride is 2. The last convolutional layer is flattened into a 256-dimensional vector that becomes the inner VAE input. The decoder has 4 convolutional layers (64, 64, 32, 32) with bilinear upsampling. The model input and output are 20 frames each. For the models without supervised disentanglement, a grid search is performed over the β value, the size of the latent space and the roll-out length during training. For the models with supervised disentanglement, the grid search covers the β value, the size of the latent space, the time step of the data set, the roll-out length during training and the supervision multiplier. The detailed search grid is summarised in Table 6. The learning rate was 10⁻³ and an Adam optimizer (β₁ = 0.9, β₂ = 0.999) was used. We also applied early stopping on the cumulative reconstruction loss over the first 200 steps on a validation set, with a maximum of 1000 epochs.
B.2.2 RSSM MODELS
For the RSSM model we follow the architecture parameters described in Hafner et al. (2018) and Saxena et al. (2021). For training we use sequences of 100 frames and a batch size of 100. All models were trained for 300 epochs with a learning rate of 10⁻³ and an Adam optimizer (β₁ = 0.9, β₂ = 0.999). During testing the model uses 50 frames as context (input). The parameters we tune appear in Table 7.
C PHASE SPACE RESULTS
[Plot panels for Figure 12: phase portraits of the Pendulum, Lotka-Volterra (Prey vs. Predator) and 3-body system (x vs. y); legend: Ground truth, MLP, VAE, VAE-SD, Noisy input.]
Figure 12: Model predictions in phase space. Trajectories are taken from the OOD Test-Set Hard of each system. The model input is noisy. The circle and bold ‘×’ markers denote the start and end of the ground truth trajectories respectively.
D VIDEO PENDULUM RESULTS
LPIPS | 1. What is the main contribution of the paper, and how does it address the issues in developing dynamical VAEs?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other works in the field?
3. How does the paper demonstrate the benefit of disentanglement, and what added gain comes from supervised disentanglement?
4. Can supervised loss ensure the domain components are fully disentangled from dynamics, and how can this be demonstrated?
5. What is the choice of loss in supervised disentanglement, and why was it chosen?
6. How does the interval of domain parameters relate to easy or hard problems, and how would the authors explain this from a dynamical system perspective?
7. What are some minor comments and technical inconsistencies in the paper that could be improved upon? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors propose a supervised approach to disentangle domain parameters from the dynamics in a deep latent variable model like VAE. Extending VAEs to dynamical systems is a relevant problem and has been a focus of interest in many recent works [1,2,3,4,5,6,7]. This paper identifies two issues for developing dynamical VAEs,
i) out of distribution generalisation
ii) long term trajectory prediction
The main contribution is to address the aforementioned issues using a supervised loss defined between latent variables and domain parameters. The authors present empirical experiments to support the idea.
Review
Pros:
A relevant problem in dynamical VAEs that is sufficiently motivated in the paper.
Empirical experiments on three problems: LV, video pendulum and the three-body problem. Demonstrate long term trajectory prediction and OOD on an easy and a hard task.
Have done a hyperparameter search and presented some ablation studies.
I find it is an interesting work. However, I strongly feel the authors have not adequately demonstrated the benefit of disentanglement and are missing comparison. My main concerns are below:
Lack of evidence on whether supervised loss disentangles dynamic parameters from domain parameters. The authors mention evaluation of disentanglement is beyond the scope. I beg to differ for two reasons:
the long term trajectory prediction doesn’t necessarily benefit from the disentanglement of latent factors. There are several methods that achieve good performance in long-term trajectory prediction without any explicit form of disentanglement, for example, Hamiltonian Neural Networks, Hamiltonian generative network (HGN), Symplectic RNN [4], Physics-as-Inverse-Graphics [7], Lagrangian Neural Networks [6], etc. In the related work section, the authors refer to HNNs and say disentanglement is not successfully addressed in such models. It would help if the authors could elaborate the sentence here. The HNNs learn Hamiltonians in a data-driven way and can make long term predictions. So why do they need to address disentanglement in the first place? It is unclear what added gain comes from supervised disentanglement and how it is advantageous over other state-of-the-art methods of long term trajectory predictions or OOD generalisation.
Can supervised loss ensure the domain components are fully disentangled from dynamics? I think it is critical to demonstrate whether domain variables are disentangled from the dynamics in any meaningful way? Could, for instance, fix the domain variables and draw samples by slowly changing the dynamic variable and vice-versa. Or better report disentanglement metrics.
A weak baseline. As referenced above, several works on extending VAEs to dynamical models have shown empirical and theoretical (symplectic structure) arguments for long-term trajectory prediction. It would be worth comparing those methods and demonstrating any benefit in using a supervised approach. In addition, models like SINDY [5] can discover dynamical parameters in an unsupervised way and have demonstrated benefits on long-term trajectory prediction. Without comparison, it is not evident what the immediate benefits of the supervised setup are? If it is OOD generalisation, authors should at least show this as a limitation in existing approaches.
It is not apparent what makes the interval of domain parameters an easy or hard problem. It would be beneficial to discuss from the dynamical system perspective.
The choice of loss in supervised disentanglement needs more explanation. In Section 3.2, it is L1 and in Section 3.3 is L2.
In Table 9, the results of VAE-SD and VAE-SSD are unstable in some cases. But this is not the case with LSTM or VAE. The authors should provide some discussion here and a potential explanation of the effect.
Minor comments:
In Table 1, the domain parameters of the train/val/test set are in the same range. It is likely for a model to perform well on val/test if it has seen sequences of the same parameters in training. Shouldn’t the two be selected differently?
Please number the equations.
Technical inconsistencies:
-In Section 3.2, the loss function is inconsistent with Figure 1. According to Figure 1, the input to VAE is x_n, and the prediction is x_{n+1}. The loss is a typical VAE plus a supervised disentangled term. There are no dynamics there. If the reconstruction term is supposed to be a prediction of x_{n+1} please add appropriate suffixes on x or z in μ_x(z; θ). If this is not the case, please provide details on how dynamics are taken into account.
Please use consistent scripts. In Section 3.2, the k components of latent variable z are written as under script z_{1:k} and in Section 3.3 for latent variables as superscripts s^{1:k}.
In the loss formulation of Section 3.2, the domain parameters ξ_i are associated with sample x_i. As far as I understand, the time steps in a sequence share the domain parameters. It would be helpful to use a suitable script to express it consistently.
In the loss formulation of Section 3.3, the prediction model is in the state space. The domain parameters are shared over T; why use the prediction model on s_t instead of the d − k components of s_t?
References:
[1] Chang MB, Ullman T, Torralba A, Tenenbaum JB. A compositional object-based approach to learning physical dynamics.
[2] Sanchez-Gonzalez A, Bapst V, Cranmer K, Battaglia P. Hamiltonian graph networks with ode integrators.
[3] Toth P, Rezende DJ, Jaegle A, Racanière S, Botev A, Higgins I. Hamiltonian generative networks.
[4] Chen Z, Zhang J, Arjovsky M, Bottou L. Symplectic recurrent neural networks.
[5] Champion K, Lusch B, Kutz JN, Brunton SL. Data-driven discovery of coordinates and governing equations.
[6] Cranmer M, Greydanus S, Hoyer S, Battaglia P, Spergel D, Ho S. Lagrangian neural networks.
[7] Jaques M, Burke M, Hospedales T. Physics-as-inverse-graphics: Joint unsupervised learning of objects and physics from video. |
ICLR | Title
Disentangled generative models for robust dynamical system prediction
Abstract
Deep neural networks have become increasingly of interest in dynamical system prediction, but out-of-distribution generalization and long-term stability still remain challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous OOD evaluations.1 Results indicate that disentangled VAEs adapt better to domain parameter spaces that were not present in the training data. At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models in video sequences.2
1 INTRODUCTION
The robust prediction of dynamical systems behaviour remains an open question in machine learning, and engineering in general. The ability to make robust predictions is important not only for forecasting systems of interest like weather (Garg et al., 2021; Ravuri et al., 2021) but even more so because it enables innovations in fields like system control, autonomous planning (Hafner et al., 2018) and computer aided engineering (Brunton et al., 2020). In this context, the use of deep generative models has recently gained significant traction for sequence modelling (Girin et al., 2020).
Robustness of machine learning models can be considered along two axes: long-term prediction and out-of-distribution (OOD) performance. Accurate long-term prediction can be notoriously difficult in many dynamical systems, because error accumulation can diverge in finite time (Zhou et al., 2020; Raissi et al., 2019), a problem that even traditional solvers can suffer from. More importantly, machine learning techniques are known to suffer from poor OOD performance (Goyal & Bengio, 2020), i.e. when they are employed in a setting they had not encountered during the training phase.
Before addressing the OOD problem, we must first define what constitutes OOD in dynamical systems. We start from the observation that even simple dynamical systems, e.g. the swinging pendulum or the n-body system, can have multiple continuous parameters that affect their evolution. These parameters can be manifested as differential equation coefficients, boundary or initial conditions, etc. Our starting point is to consider distinct ranges of those parameters as separate domains. Under this view, it becomes apparent why OOD prediction of dynamical systems can be hard: capturing the whole range of those parameters in a single training set is unrealistic (Fotiadis et al., 2020) and further inductive biases are required (Miladinović et al., 2019; Bird & Williams, 2019; Barber et al., 2021). From a dynamical systems point of view, different parameters can produce widely different trajectories in phase space. A motivating example is bifurcations, which occur when a small change in the parameters of a system causes a sudden qualitative change in its behaviour.
We focus on the inductive bias of disentangled representations for which the dynamics are separated from the domain parameters. Many approaches based on the use of neural networks try to jointly learn the dynamics and the physical parameters, which results in convoluted representations and usually leads to overfitting (Bengio et al., 2012). System identification can be used to extract parameters, but
1Code for reproducing our experiments at: https://anonymous.4open.science/r/dis-dyn-systems/
2Animated phase-space and video predictions are available at: https://bit.ly/dis-dyn-systems
requires knowledge of the underlying system to be computationally effective (Ayyad et al., 2020). We, instead, leverage advances in Variational Autoencoders (VAEs) (Kingma & Welling, 2014) that enable learning disentangled representations. Disentanglement enables different latent variables to focus on different factors of variation of the data distribution, and has been applied in the context of image generation (Higgins et al., 2017; Kim & Mnih, 2018). This can be extended to modelling dynamical systems by looking at disentanglement from a causal perspective: from all the generative models which can have the same marginal distribution, identify the one with the true causal factors. To map this idea to sequence modelling we treat the domain parameters of a dynamical system as factors of variation. Recent findings (Locatello et al., 2018) emphasize the vital role of inductive biases from models or data for useful disentanglement. Unsupervised disentanglement, based on the assumption of domain stationarity, is a promising direction (Miladinović et al., 2019; Li & Mandt, 2018). Nevertheless, this leaves a wealth of ground truth domain parameters, which can be cheaply collected in simulated datasets. This type of privileged information originating from simulations has been shown to be effective for domain adaptation in computer vision tasks (Sarafianos et al., 2017; Lee et al., 2018). We thus use supervised disentanglement (Locatello et al., 2019) by leveraging the ground truth domain parameters. To the best of our knowledge, using domain parameters information this way, has not been previously explored in the dynamical system prediction setting.
Contributions While others have treated domain parameters as factors of variation in the data distribution, our work is the first, to the best of our knowledge, that explicitly uses privileged information from simulated data to disentangle those domain parameters from dynamics in a supervised way. We furthermore conduct experiments both in the low-dimensional phase space of 3 dynamical systems and the high-dimensional video rendering of a swinging pendulum. Disentanglement has, in the past, been mostly applied to VAEs because they are easily amenable to it. We additionally apply disentanglement on a more powerful, hybrid, model with both stochastic and deterministic parts (Hafner et al., 2018). In doing so, we not only assess disentanglement on a generative model outside boundaries of VAEs but furthermore we do it on a model which is considered state-of-the-art in long-term video prediction (Saxena et al., 2021). In all cases, the prediction performance is assessed both in-distribution and also in OOD settings of increasing degrees of distribution shift. To our understanding, this is the first time such a rigorous OOD test is performed. Our results in phase-space demonstrate that disentangled models can better capture the variability of dynamical systems compared to baseline models both in-distribution and OOD. In modelling dynamics in video sequences, results indicate that disentanglement is beneficial both for long-term prediction and OOD prediction.
Limitations This work focuses on dynamical system prediction. While the results can potentially open up many applications in general time-series modelling, this is out of the scope of this work. We prioritize the empirical study of OOD downstream-task performance; inspection of the disentangled representations with appropriate metrics is left out of scope here.
2 RELATED WORK
VAEs and disentanglement Disentanglement aims to produce representations where separate factors of variation in the data are encoded into independent latent components. This can be seen as finding the true causal model of the data. While supervised disentanglement in generative models is a long-standing idea (Mathieu et al., 2016), information-theoretic properties can be leveraged to allow unsupervised disentanglement in VAEs (Higgins et al., 2017; Kim & Mnih, 2018). The impossibility result from (Locatello et al., 2018) suggested that disentangled learning is only possible by inductive biases coming either from the model or the data. Hence, the focus shifted back to semi- or weaklysupervised disentanglement approaches (Locatello et al., 2019; 2020). While most of these methods focus on disentanglement metrics, we opt to directly assess using a downstream prediction task.
Disentanglement in sequence modelling While disentanglement techniques are mainly tested in a static setting, there is a growing interest in applying it to sequence dynamics. Using a bottleneck based on physical knowledge, Iten et al. (2018) learn an interpretable representation that requires conditioning the decoder on time, but it can return physically inconsistent predictions in OOD data (Barber et al., 2021). Deep state-space models (SSMs) have also employed techniques for disentangling content from dynamics (Fraccaro et al., 2017; Li & Mandt, 2018), but, focus mostly on modelling variations in the content, failing to take dynamics into account. In hierarchical approaches (Karl et al., 2017), different layers of latent variables correspond to different timescales: for example,
in speech analysis for separating voice characteristics and phoneme-level attributes (Hsu et al., 2017). In an approach similar to our work, Miladinović et al. (2019) separate the dynamics from sequencewide properties in dynamical systems like Lotka-Volterra, but do so in an unsupervised way which dismisses a wealth of cheap information and only assesses OOD generalization in a limited way.
Feed-forward models for sequence modelling Deep SSM models are difficult to train as they require non-trivial inference schemes and a careful design of the dynamic model (Krishnan et al., 2015; Karl et al., 2017). Feed-forward models, with the necessary inductive biases, have been used for sequence modelling in dynamical systems (Greydanus et al., 2019; Fotiadis et al., 2020). In works like Hamiltonian Neural Networks (Greydanus et al., 2019), the domain is fixed; together with Barber et al. (2021), our work is an attempt at tackling domain variability.
Privileged information for domain adaptation. Using privileged information during training has been shown to help with domain adaptation in computer vision tasks. Using segmentation masks of simulated urban scenery can improve semantic segmentation on the target domain (Lee et al., 2018), while clip art data can help with domain transfer in an action recognition task (Sarafianos et al., 2017).
3 METHODS
3.1 VARIATIONAL AUTOENCODERS
Variational autoencoders (VAEs) (Kingma & Welling, 2014) offer a principled approach to latent variable modeling by combining a variational inference model q_φ(z | x) with a generative model p_θ(x | z). As in other approximate inference methods, the goal is to maximize the evidence lower bound (ELBO) over the data:
L_{φ,θ}(x) = E_{q_φ(z|x)}[log p_θ(x | z)] − KL(q_φ(z | x) || p(z)) (1)
The first part of the ELBO is the reconstruction loss (in our case the prediction loss) and the second part is the Kullback-Leibler divergence that quantifies how close the posterior is to the prior.
Design choices for the model We use an isotropic unit Gaussian prior p(z) = N(z | 0, I) which helps to disentangle the learned representation (Higgins et al., 2017). The approximate posterior (encoder) distribution is a Gaussian with diagonal covariance, q_φ(z | x) = N(z | µ_z, Σ_z), allowing a closed-form KL-divergence, while the decoder has a Laplace distribution p_θ(x | z) = Laplace(x | µ_x, γI) with constant diagonal covariance γ > 0, which is tuned empirically. This leads to an L1 loss that provides improved results in some problems (Mathieu et al., 2018) and empirically works better in our case. The parameters µ_z ≡ µ_z(x; φ), Σ_z ≡ diag[σ_z(x; φ)]², and µ_x ≡ µ_x(z; θ) are computed via feed-forward neural networks.
3.2 DISENTANGLEMENT OF DOMAIN PARAMETERS IN LATENT SPACE
Apart from the disentanglement that stems from the choice of prior p(z), we explicitly disentangle part of the latent space so that it corresponds to the domain parameters of each input sequence. We achieve this by using a regression loss term L_ξ(z_{1:k}, ξ) between the ground truth factors of the domain parameters ξ ∈ R^k and the output of the corresponding latents, z_{1:k}. We empirically opted for an L1 loss, corresponding to a Laplacian prior with mean ξ and unitary covariance. Previous methods have reported that binary cross-entropy works better than L2 (Locatello et al., 2019), but this does not fit well in a setting like ours. We hypothesize that BCE works better because of the implicit scaling. To address this, we propose applying a function s(µ_{z_i}) which linearly rescales µ_{z_i} between the min and max values of the corresponding factor of variation:
s(µ_{z_i}) = µ_{z_i} · (max(ξ_i) − min(ξ_i)) + min(ξ_i) (2)
where ξ_i are the domain parameters, with their minimum and maximum values taken from the training set. In all cases, the regression term is weighted by a parameter δ which is empirically tuned. Plugging these choices in results in the following loss function:
L_{φ,θ}(x) = E_{q_φ(z | x_{1:n})}[ (1/γ) ‖x_{n+1:n+o} − µ_x(z; θ)‖_1 ] + d log γ (Prediction loss)
+ ‖σ_z(x_{1:n}; φ)‖²_2 − log diag[σ_z(x_{1:n}; φ)]² + ‖µ_z(x_{1:n}; φ)‖²_2 (KL-Divergence) (3)
+ δ ‖ξ − s(µ_{z_{1:k}}(x_{1:n}; φ))‖_1 (Sup. disentangl. loss)
Using the reparameterization trick (Kingma & Welling, 2014), the loss is amenable to optimization by mini-batch stochastic gradient descent. The model architecture can be seen in Figure 1 (left).
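To make the objective in Equation 3 concrete, the following PyTorch-style sketch shows one way the three terms could be computed; the function and argument names, the default values of γ, δ and k, and the tensor shapes (batch-first, flattened states) are our own illustrative assumptions rather than the authors' implementation.

```python
import torch

def vae_sd_loss(x_target, mu_x, mu_z, log_var_z, xi, xi_min, xi_max,
                gamma=1e-5, delta=0.1, k=4):
    """Minimal sketch of Eq. 3: L1 prediction loss (Laplace decoder with scale
    gamma), closed-form KL to a unit Gaussian prior, and an L1 supervised
    disentanglement term on the first k latent means.
    xi, xi_min, xi_max: (B, k) targets and (k,) training-set ranges."""
    # Prediction loss: L1 error scaled by the (empirically tuned) decoder scale
    pred = (x_target - mu_x).abs().sum(dim=-1).mean() / gamma

    # KL divergence between N(mu_z, diag(sigma_z^2)) and N(0, I)
    kl = 0.5 * (log_var_z.exp() - log_var_z + mu_z.pow(2) - 1.0).sum(dim=-1).mean()

    # Supervised disentanglement: rescale the first k latent means to the
    # training-set range of the domain parameters (the scaling function s)
    scaled = mu_z[:, :k] * (xi_max - xi_min) + xi_min
    sup = (xi - scaled).abs().sum(dim=-1).mean()

    return pred + kl + delta * sup
```

Dropping the rescaling of the latent means would correspond to the VAE-SD variant described later, while keeping it corresponds to VAE-SSD.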
3.3 DISENTANGLEMENT FOR VIDEO DYNAMICS
We further investigate the effect of disentanglement in video sequence dynamics. To this end, two generative models are used. The first, termed CNN-VAE, is derived from the VAE formulation of the previous section, with the addition of a convolutional encoder and decoder. The encoder projects the input frames down to a low-dimensional space which can be thought of as equivalent to the phase space of the system. A VAE is applied to this projection to predict future coordinates in this "phase space". The decoder then maps the predictions of the VAE back to pixel space. The schematic of the model can be seen in Figure 1 (right).
The second model we use is the Recurrent State Space Model (RSSM), which has been successfully used for planning (Hafner et al., 2018). Since RSSM is a hybrid model combining deterministic and variational components, it allows us to assess disentanglement outside the limited scope of VAEs. Furthermore, using a state-of-the-art model in long-term video prediction (Saxena et al., 2021) allows us to identify the limits of applying disentanglement in competitive models. The loss function we use shares the same formulation as in the original work of Hafner et al. (2018) with the addition of the supervised disentanglement loss. Since in the RSSM formulation there are latent variables for each time-step, we apply a disentanglement loss on all of them, which is empirically set to be an L2 loss:
L_RSSM-SD = Σ_{t=1}^{T} ( E_{q(s_t | o_{≤t})}[ln p(o_t | s_t)] (reconstruction)
− E_{q(s_{t−1} | o_{≤t−1})}[ KL[q(s_t | o_{≤t}) ‖ p(s_t | s_{t−1})] ] (prediction)
+ δ E_{q(s_t | o_{≤t})}[ ‖ξ − s_t^{(1:k)}‖² ] (supervised disentanglement loss) ) (4)
where o_t are the observations, s_t the stochastic latent variables at time t, ξ the k-dimensional domain parameters, and δ tunes the supervised disentanglement strength.
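As a minimal sketch of the additional term in Equation 4, the snippet below ties the first k stochastic latents of every timestep to the sequence-wide domain parameters; the time-first tensor layout, the default δ and k, and the function name are illustrative assumptions.

```python
import torch

def rssm_sd_penalty(posterior_samples, xi, delta=1.0, k=4):
    """Supervised-disentanglement term of Eq. 4 as an L2 penalty.

    posterior_samples: (T, B, latent_dim) samples s_t ~ q(s_t | o_<=t)
    xi:                (B, k) ground-truth domain parameters of each sequence
    """
    T = posterior_samples.shape[0]
    # Domain parameters are constant over time, so broadcast them to every step
    target = xi.unsqueeze(0).expand(T, -1, -1)
    sq_err = (posterior_samples[..., :k] - target).pow(2).sum(dim=-1)
    return delta * sq_err.mean()
```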
4 EXPERIMENT - ODE PHASE SPACE DYNAMICS
4.1 DATASETS
In the phase-space experiments we compare the models on three well-studied dynamical systems: the swinging pendulum, the Lotka-Volterra equations used to model prey-predator populations, and the planar 3-body system.
[Figure: qualitative model predictions in phase space for the Pendulum, Lotka-Volterra, and 3-body systems; legend: Ground truth, MLP, VAE, VAE-SD, Noisy input.]
The systems were chosen for varied complexity in terms of degrees of freedom, number of ODE equations and factors of variation. For the pendulum we consider one factor of variation, its length l; Lotka-Volterra has 4 factors of variation α, β, γ, δ; and the 3-body system also has 4 factors of variation b, m_1, m_2, m_3. Factors are drawn uniformly from a predetermined range which is the same between the training, validation and test sets. To further assess the OOD prediction accuracy, we create two additional test sets with factor values outside of the original range. We denote these datasets as OOD Test-set Easy and Hard, representing a smaller and a bigger shift from the original range. As a visual example, the distribution of the factors of variation for the Lotka-Volterra system is illustrated in Figure 9 of the Appendix. The data were additionally corrupted with Gaussian noise. Dataset details can be found in Table 1 of the Appendix.
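As an illustration of how such a dataset can be generated, the sketch below simulates noisy Lotka-Volterra trajectories with per-sequence factors of variation and a shifted OOD range; the concrete parameter ranges, initial conditions and noise level are placeholders, not the values used in the paper (those are listed in Table 1 of the Appendix).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, state, alpha, beta, gamma, delta):
    prey, pred = state
    return [alpha * prey - beta * prey * pred,
            delta * prey * pred - gamma * pred]

def sample_sequence(rng, ranges, t_max=10.0, dt=0.01, noise_std=0.01):
    """Draw one trajectory with factors of variation sampled uniformly from
    `ranges`, then corrupt it with Gaussian noise."""
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
    t_eval = np.arange(0.0, t_max, dt)
    sol = solve_ivp(lotka_volterra, (0.0, t_max), [4.0, 2.0],
                    t_eval=t_eval, args=tuple(params.values()), rtol=1e-6)
    traj = sol.y.T + rng.normal(scale=noise_std, size=sol.y.T.shape)
    return traj, np.array(list(params.values()))

rng = np.random.default_rng(0)
train_ranges = {"alpha": (0.9, 1.1), "beta": (0.4, 0.6),
                "gamma": (0.9, 1.1), "delta": (0.4, 0.6)}        # in-distribution
ood_hard_ranges = {k: (hi, hi + 0.2) for k, (lo, hi) in train_ranges.items()}
x_train, xi_train = sample_sequence(rng, train_ranges)
x_ood, xi_ood = sample_sequence(rng, ood_hard_ranges)
```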
4.2 MODELS AND TRAINING
The main goal of these experiments is to assess whether OOD prediction can be improved by disentangling dynamical system parameters in the latent space of VAEs. We opt to use simple models to allow more experiments and comparisons. Our main baseline is the VAE, upon which we propose two enhancements that leverage supervised disentanglement. The first VAE model, termed VAE-SD, uses supervised disentanglement without a scaling function, while the second model, termed VAE-SSD, uses an additional linear scaling function s(µ_{z_i}) for the latent variable mean vector µ_{z_i}, as described in Section 3.2. Another baseline is a multilayer perceptron (MLP) autoencoder, which allows comparison with a deterministic counterpart of the VAE. We also use supervised disentanglement on the latent neurons of the MLP, a model we refer to as MLP-SD. This enables us to assess if the privileged information can improve deterministic models. Lastly, we include an LSTM model, a popular choice for low-dimensional sequence modelling (Yu et al., 2019), as a representative recurrent method.
Early experiments revealed a significant variance in the performance of the models, depending on hyperparameters. Under these conditions, we took various steps to make model comparisons as fair as possible. Firstly, all models have similar capacity in terms of neuron count. Secondly, we tune various hyperparameter dimensions, some of which are shared, while others are model-specific. Third, we conduct a thorough grid search on the hyperparameters to avoid undermining any model (details can be found in Tables 3, 4 and 5 of the Appendix). Lastly, we run the same number of experiments for all models, which amounts to 1440 trained models in total, as summarized in Table 2 of the Appendix.
[Figure: SSIM and PSNR over 800 prediction timesteps on the in-distribution Test-set and the OOD Test-sets Easy and Hard; models: RSSM, RSSM-SD, CNN-VAE, CNN-VAE-SD.]
4.3 RESULTS
For each dynamical system we focus on the performance on the three test-sets: the in-distribution test-set, which shares the same parameter distribution with the training set, and the two OOD test-sets (Easy and Hard), which represent an increasing parameter shift from the training data. Models are compared on the cumulative Mean Absolute Error (MAE) between prediction and ground truth for the first 200 time-steps. We consider this to be sufficiently long-term as it is at least 20 times longer than the prediction horizon used during training. Long predictions are obtained by re-feeding the model predictions back as input. This approach has been shown to work well in systems where the dynamics are locally deterministic (Fotiadis et al., 2020). A summary of the quantitative results can be found in Figures 3 & 4 and Table 8. To account for the variability in the results, we present a summary of the best 5 runs of each model, selected by validation MAE. We generally observe that model performance is correlated with the distribution shift of the test-sets, and this is consistent for all systems and models. The MAE increases as we move from the in-distribution test-set to the OOD Easy and Hard test-sets. Nevertheless, not all models suffer equally from the OOD performance drop.
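The autoregressive evaluation described above could be implemented along the following lines; the window sizes, the model interface and the function name are illustrative assumptions (the paper's input/output sizes are listed in the Appendix tables).

```python
import torch

@torch.no_grad()
def rollout_mae(model, context, ground_truth, in_len=50, out_len=10, horizon=200):
    """Feed predictions back as inputs until `horizon` steps are produced,
    then report the cumulative MAE against the ground truth.

    context:      (B, >=in_len, state_dim) observed prefix of each sequence
    ground_truth: (B, >=horizon, state_dim) continuation used for scoring
    """
    window = context[:, -in_len:].clone()
    preds, steps = [], 0
    while steps < horizon:
        nxt = model(window)                              # (B, out_len, state_dim)
        preds.append(nxt)
        steps += nxt.shape[1]
        window = torch.cat([window, nxt], dim=1)[:, -in_len:]
    preds = torch.cat(preds, dim=1)[:, :horizon]
    return (preds - ground_truth[:, :horizon]).abs().mean()
```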
Comparing the VAEs (Figure 3), we see that disentangled VAE models offer a substantial and consistent improvement over the VAE across all 3 dynamical systems. The improvement is more pronounced for the OOD test-sets where the distribution shift is greater, a strong indication that disentanglement of domain parameters is an inductive bias that can lead to better generalization. We also observe that VAE-SSD models the in-distribution data better than VAE-SD. This seems to come at a slight overfitting cost, because VAE-SD provides better OOD extrapolation in most cases. This could be explained by the scaling function's dependence on the min and max values of the factors in the training set. The extra information allows the model to better capture the training data but sacrifices some generalization capacity.
On the other hand, disentanglement results for the MLP are mixed. While in-distribution MLP-SD offers better results than the plain MLP, on the OOD test-sets MLP-SD only performs favourably on the pendulum data. Furthermore, in Lotka-Volterra, MLP-SD models are very unstable, a drawback that affects some VAE-SD models too (see Table 9 of the Appendix). Probabilistic models seem better suited to capture the variation in the data. The contrast between VAE-SD and MLP-SD illustrates that making use of privileged information and latent space disentanglement is not trivial, and more work is needed to help us understand what works in practice and why. Lastly, the LSTM (Figure 11 & Table 8 of the Appendix) is only comparable on the pendulum dataset and only for small OOD shifts. Qualitative predictions can be found in Figure 5.
5 EXPERIMENT - VIDEO SEQUENCE DYNAMICS
In the first experiment we assessed supervised disentanglement for phase space prediction, where the states of the input trajectories are fully observable and only the domain parameters are unknown. This experiment extends the idea of supervised disentanglement to pixel-space input and output, where the physical states have to be inferred by the model.
5.1 DATASETS
The dynamical system we use in this experiment is the swinging pendulum, a common benchmark for modelling dynamics in video sequences (Brunton et al., 2020; Barber et al., 2021). We consider 4 factors of variation: the length l, gravity g, initial angle θ, and angular velocity ω. Factors are drawn uniformly from a predetermined range. As before, we create a test-set and two additional OOD test-sets (Easy and Hard). The OOD sets have length and gravity values outside of the original range, while the initial conditions θ, ω are drawn from the same distribution. The distribution of the factors of variation for the test-sets is illustrated in Figure 2. The trajectories are first computed in phase space using a numerical simulator and then rendered as video frames of 64 × 64 pixels. More details about the dataset can be found in Section A.2 of the Appendix.
5.2 MODELS AND TRAINING
In this experiment we use two different models: CNN-VAE and RSSM. CNN-VAE is described in Section 3.3 and architectural details can be found in Section B.2.1. During training of the CNN-VAE, the inner VAE is used recursively for prediction, with the number of recursions being a hyperparameter (Table 6 of the Appendix). We found that this type of training leads to more stable long-term predictions. In total, 48 CNN-VAE models are trained, half of which use supervised disentanglement (CNN-VAE-SD). The RSSM model is a generative model including both a stochastic and a deterministic component. We only use supervised disentanglement on the stochastic part, and term that model RSSM-SD. Disentanglement is applied to all four factors of variation of the domain, despite only length and gravity varying between datasets. Detailed architectural and training details can be found in Section B.2.2 of the Appendix.
5.3 RESULTS
Figure 7 shows the quality of predictions on the video pendulum according to two metrics, structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), as a function of the prediction distance in time. We select the models which have the best cumulative metrics over the first 800 timesteps on a validation set.
For the CNN-VAE, effects of disentanglement are more subtle. We observe that, in-distribution, the disentangled CNN-VAE-SD has very similar quality when compared to the CNN-VAE. For the OOD
datasets, though, disentanglement offers improved long-term predictions. The improvement is more noticeable on the OOD test-sets, indicating that disentanglement can help with OOD robustness. For RSSM, we first note that both models perform significantly better than the CNN-VAE models, which is expected since they are considered competitive in long-term video prediction. Disentanglement in RSSM seems to produce a trade-off. The plain RSSM model performs better in short-term prediction, but its performance deteriorates with time, reaching CNN-VAE levels in all metrics. On the other hand, the RSSM-SD model provides the best long-term scores in all metrics and all datasets. Qualitative results in Figure 5 show that almost all models produce accurate short-term predictions (approximately up to 200 time-steps). This further stresses the importance of disentanglement for long-term performance. In terms of OOD robustness, disentanglement also appears to be helping. While the RSSM-SD model lacks in short-term prediction quality on the in-distribution test-set, this performance gap closes as the OOD test-sets get harder. More specifically, on the in-distribution test-set RSSM-SD overtakes RSSM in SSIM after around 400 frames, while in the OOD Easy and Hard test-sets this happens around 350 and 250 time-steps respectively. This narrowing gap indicates that the benefit of disentanglement grows with increasing distribution shift. The above findings are corroborated by LPIPS (Zhang et al., 2018) comparisons (Figure 13 and Table 10 of the Appendix). Furthermore, the qualitative results show that all models accurately capture the appearance of the pendulum even long-term. Where they differ is in how well they capture the dynamics of the pendulum movement. This could offer an explanation of why disentangling the domain from the dynamics is important, and why in practice it offers better long-term and out-of-distribution performance.
Overall, experiments suggest that supervised disentanglement can be used to model dynamical systems in video sequences, resulting in improved long-term and OOD performance.
6 CONCLUSIONS
Using supervised disentanglement of domain parameters in generative models is a promising avenue for improving robustness. Our experiments show that it can improve both OOD generalization and long-term prediction of dynamical systems. This was demonstrated in phase-space with VAEs and also in video sequence modelling with state-of-the-art RSSMs.
By treating the domain parameters as factors of variation of the data and applying supervised disentanglement, several inductive biases are potentially enforced. First, the model, in addition to prediction, also performs "soft" system identification, which acts as a regularizer. Second, it creates an implicit hierarchy such that some latent variables correspond to sequence-wide domain parameters and the rest capture instant dynamics. We speculate that this could additionally make the latent space more interpretable. Third, if the model can correctly extract the parameters, the prediction is based on both the dynamics and the domain parameters, which is closer to how numerical integrators work, where the domain is known. All of these could lead the model to learn the correct causal structure of the data. Nevertheless, using privileged information for OOD robustness is not always straightforward and requires further exploration. This is evident from the results of the MLP autoencoders, which do not yield as consistent improvements. A criticism of our method could be that cheap privileged information is not always available and/or depends on using simulated data. Firstly, training on simulations is an increasingly appealing option because it is a cheap way to generate data to begin with. This is also clearly demonstrated by the many advancements in techniques like sim2real (Peng et al., 2017) that try to bring models trained on simulated data to the real world. So there seems to be no reason not to use the privileged information that comes with simulated data. In that light, supervised disentanglement can provide a pathway to real-world applications where robustness in dynamical system prediction is critical. Applying the method to other datasets with more complex dynamics could increase its relevance. Sequence-wide parameters could also be exploited through self-supervision.
REPRODUCIBILITY STATEMENT
We provide all the necessary code to reproduce our experiments at the anonymous repo https: //anonymous.4open.science/r/dis-dyn-systems (will be de-anonymized after the review process). The repo contains code for generating all the datasets from scratch and also code for training all the models presented in this work. The README also contains instructions on how to train the models. The hyperparameters we have used are clearly and thoroughly presented in the
Appendix. These steps should significantly help others reproduce our experiments. For any further clarifications, you are encouraged to contact the corresponding author(s).
A DATASETS
A.1 PHASE SPACE
For simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.01 seconds. Each simulated sequence has a different combination of factors of variation. Simulation of the pendulum uses an initial angle θ sampled randomly between 10° and 170°, while the angular velocity ω is 0. For the other two systems the initial conditions are always the same to avoid pathological configurations.
A.2 VIDEO PENDULUM
This data set contains image sequences of a moving pendulum under different conditions. The positions of the pendulum are first computed by a numerical simulator and then rendered in pixel space as frames of dimension 64 × 64. An example image sequence is shown in Figure 10. For the simulations, we use an adaptive Runge-Kutta integrator with a timestep of 0.05 seconds. The length of the pendulum, the strength of gravity and the initial conditions (position, momentum) are set to different values so that each trajectory slightly differs from the others. The initial angle and initial velocity are drawn from the same uniform distribution for all data sets. The initial angle ranges from 30° to 170° and the initial velocity ranges from −2 rad/s to 2 rad/s. For the training, validation and in-distribution testing sets, the gravity ranges from 8.0 m/s² to 12.0 m/s², and the pendulum length ranges from 1.20 m to 1.40 m. In the easy out-of-distribution testing set, the gravity is between 12.0–12.5 m/s² and the pendulum length is between 1.40–1.45 m, while in the hard out-of-distribution testing set, the gravity is 12.5–13.0 m/s² and the pendulum length is 1.45–1.50 m. The distributions of these parameters are shown in Figure 2.
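The parameter ranges above can be summarised in a small sampling helper; the sketch below encodes exactly these ranges, while the function and variable names are our own.

```python
import numpy as np

# Ranges taken from the text above; gravity in m/s^2, length in m,
# angles in degrees and angular velocity in rad/s.
SPLITS = {
    "train_val_test": {"g": (8.0, 12.0),  "length": (1.20, 1.40)},
    "ood_easy":       {"g": (12.0, 12.5), "length": (1.40, 1.45)},
    "ood_hard":       {"g": (12.5, 13.0), "length": (1.45, 1.50)},
}

def sample_pendulum_config(rng, split):
    """Draw one trajectory configuration; initial conditions are shared across
    splits, only gravity and length shift out of distribution."""
    g_lo, g_hi = SPLITS[split]["g"]
    l_lo, l_hi = SPLITS[split]["length"]
    return {
        "g": rng.uniform(g_lo, g_hi),
        "length": rng.uniform(l_lo, l_hi),
        "theta0": np.deg2rad(rng.uniform(30.0, 170.0)),
        "omega0": rng.uniform(-2.0, 2.0),
    }

cfg = sample_pendulum_config(np.random.default_rng(0), "ood_hard")
```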
B TRAINING AND HYPERPARAMETERS
B.1 PHASE SPACE
During training, backpropagation is applied after a single forward pass. The input and output of the models are shorter than the full sequence, so to cover the whole sequence we use random starting points per batch, both during training and testing. Both the VAE and the MLP AE have an encoder with two hidden layers of sizes 400 and 200 and a decoder with the reverse architecture. The LSTM model has two stacked LSTM cells with a hidden size of 100, which results in an equivalent number of neurons. We used the Adam optimizer with β₁ = 0.9 and β₂ = 0.999. A learning-rate scheduler was applied, whose patience and scaling factor are hyperparameters. The maximum number of epochs was set to 2000, but we also employed early stopping using a validation set, which led to significantly fewer epochs.
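A minimal sketch of the random-window batching described above is given below, using the input and output sizes listed in Table 5 (50 and 10); the array layout and function name are assumptions.

```python
import numpy as np

def sample_windows(sequences, in_len=50, out_len=10, batch_size=16, rng=None):
    """Pick random starting points per batch so the whole sequence is covered
    over the course of training/testing.

    sequences: array of shape (num_seq, seq_len, state_dim)
    """
    rng = rng or np.random.default_rng()
    num_seq, seq_len, _ = sequences.shape
    seq_idx = rng.integers(0, num_seq, size=batch_size)
    starts = rng.integers(0, seq_len - in_len - out_len + 1, size=batch_size)
    x = np.stack([sequences[s, t:t + in_len] for s, t in zip(seq_idx, starts)])
    y = np.stack([sequences[s, t + in_len:t + in_len + out_len]
                  for s, t in zip(seq_idx, starts)])
    return x, y
```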
Table 5: 3-body system hyperparameters
Models: MLP, MLP-SD, VAE, VAE-SD, LSTM
Input Size: 50 (all models)
Output Size: 10 (all models)
Hidden Layers: [400, 200] (MLP, MLP-SD, VAE, VAE-SD); 50, 100 (LSTM)
Latent Size: 8, 16, 32 (MLP, MLP-SD, VAE, VAE-SD); – (LSTM)
Nonlinearity: Leaky ReLU (MLP, MLP-SD, VAE, VAE-SD); Sigmoid (LSTM)
Learning rate: 10^-3, 10^-4 (MLP, MLP-SD, VAE, VAE-SD); 10^-3 (LSTM)
Batch size: 16, 32 (MLP); 16 (MLP-SD, VAE, VAE-SD); 16, 64, 128 (LSTM)
Sched. patience: 30, 40, 50, 60 (MLP, MLP-SD, VAE, VAE-SD); 20, 30 (LSTM)
Sched. factor: 0.3, 0.4 (MLP, VAE, VAE-SD); 0.3 (MLP-SD, LSTM)
Gradient clipping: No (all models)
Layer norm (latent): No (all models)
Decoder γ: 10^-5, 10^-6 (VAE, VAE-SD); – (others)
Sup. scaling: Linear (MLP-SD, VAE-SD); – (others)
Supervision δ: 0.05, 0.1, 0.2, 0.3 (MLP-SD); 0.1, 0.2 (VAE-SD); – (others)
# of experiments: 96 per model
B.2 VIDEO PENDULUM
B.2.1 CNN-VAE MODEL
The encoder has 4 convolutional layers with 32, 32, 64 and 64 feature maps respectively. The filter size is 3, padding is 1 and stride is 2. The last convolutional layer is flattened into a 256-dimensional vector that becomes the inner VAE input. The decoder has 4 convolutional layers (64, 64, 32, 32) with bi-linear upsampling. The model input and output are 20 frames. For the models without supervised disentanglement, a grid search is performed over the β value, the size of the latent space, and the roll-out length during training. For the models with supervised disentanglement, a grid search is performed over the β value, the size of the latent space, the time step of the data set, the roll-out length during training and the supervision multiplier. The detailed search grid is summarised in Table 6. The learning rate was 10^-3 and an Adam optimizer (β₁ = 0.9 and β₂ = 0.999) was used. We also used early stopping on the cumulative reconstruction loss for the first 200 steps on a validation set, with the maximum number of epochs being 1000.
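A sketch of such an encoder is shown below; treating the 20 input frames as channels and the final linear map from the flattened 4×4 feature grid to 256 dimensions are our assumptions, since the text only fixes the channel widths, kernel size, stride, padding and the 256-dimensional output.

```python
import torch
import torch.nn as nn

class PendulumEncoder(nn.Module):
    """Convolutional encoder sketch: 32-32-64-64 maps, 3x3 kernels, stride 2,
    padding 1, followed by a flatten and a linear layer to 256 dimensions."""
    def __init__(self, in_frames=20, out_dim=256):
        super().__init__()
        chans = [in_frames, 32, 32, 64, 64]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                       nn.LeakyReLU()]
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(64 * 4 * 4, out_dim)   # 64x64 input -> 4x4 feature map

    def forward(self, x):                          # x: (B, 20, 64, 64)
        h = self.conv(x)
        return self.fc(h.flatten(start_dim=1))     # (B, 256)
```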
B.2.2 RSSM MODELS
For the RSSM model we follow the architecture parameters as described in Hafner et al. (2018) & Saxena et al. (2021). For training we use sequences of 100 frames and batch size 100. All models were trained for 300 epochs with a learning rate of 10^-3 and an Adam optimizer (β₁ = 0.9 and β₂ = 0.999). During testing the model uses 50 frames as context (input). The parameters we tune appear in Table 7.
C PHASE SPACE RESULTS
[Figure 12 panels: Pendulum, Lotka-Volterra, and 3-body system; legend: Ground truth, MLP, VAE, VAE-SD, Noisy input.]
Figure 12: Model predictions in phase space. Trajectories are taken from the OOD Test-Set Hard of each system. The model input is noisy. The circle and bold ‘×’ markers denote the start and end of the ground truth trajectories respectively.
D VIDEO PENDULUM RESULTS
LPIPS | 1. What is the focus and contribution of the paper regarding dynamical system prediction?
2. What are the strengths and weaknesses of the proposed VAE-based disentanglement model?
3. Do you have any concerns regarding the comparison with unsupervised disentanglement models or the performance variances in the experiments?
4. How do you assess the significance of the results shown in Figures 3 and 4?
5. What are your suggestions for improving the applicability and representation analysis of the proposed method?
6. Is there a need for further discussion on the disentanglement metrics used in the study?
7. Can you explain the reason behind the choice of using domain parameters for supervision in training the VAE-based disentanglement model?
8. Are there any limitations in applying the proposed method to real-world data? | Summary Of The Paper
Review | Summary Of The Paper
The paper introduces a VAE-based disentanglement model for dynamical system prediction, which was trained under the supervision using domain parameters. The authors conducted experiments on simulated datasets and showed good performance for OOD cases and long-term predictions.
Review
The novelty of this work may not be enough for ICLR acceptance standards, in that the authors applied existing VAEs with minor modifications to known problems.
I think the comparison with unsupervised disentanglement models is quite unfair because the proposed model is trained using strong supervision about data. Furthermore, the results showing that supervised disentanglement methods outperform unsupervised ones are trivial and not particularly impressive. Instead of the baselines designed by the authors, it would be better to add some comparisons with existing papers for dynamical system prediction (particularly in Figures 3 and 4).
I am not sure whether the performance differences in Figures 3 and 4 are statistically significant because the results of some models exhibit quite high variances and the number of examined models (i.e., 5) seems small. It would be better to conduct some statistical tests to show that the differences are meaningful.
The experiments were conducted only on simple simulated datasets. I think some experiments on real-world and/or more complex data are necessary to show the applicability of the proposed method.
I think a deeper analysis on disentangle representations is not out-of-scope and is necessary because the paper heavily relied on VAEs for disentanglement learning. (i) Are the latent dimensions obtained with the supervision (z_1:k) truly disentangled? (ii) What kind of information is encoded in the other features without the supervision (z_k+1:d)? It would be better to add quantitative results based on existing disentanglement metrics and/or visual results (latent traversal, embedding space visualization).
Regarding Figure 7, it would be better to add proper explanation about why RSSM is better for the initial timesteps than RSSM-SD.
It would be better to improve the presentation quality of Figure 5. It is difficult to identify the differences between the lines in the current version because they are largely overlapped. Simply changing linear axis scales into log scales may be helpful.
Is the reconstruction loss on page 4 replaced by the prediction loss as described in the caption of Figure 1? If so, please modify the reconstruction loss on page 4 to accurately show the prediction loss.
ICLR | Title
Gated Domain Units for Multi-source Domain Generalization
Abstract
Distribution shift (DS) is a common problem that deteriorates the performance of learning machines. To tackle this problem, we postulate that real-world distributions are composed of elementary distributions that remain invariant across different environments. We call this an invariant elementary distribution (I.E.D.) assumption. The I.E.D. assumption implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. To exploit this property in domain generalization (DG), we developed a modular neural network layer that consists of Gated Domain Units (GDUs). Each GDU learns an embedding of an individual elementary distribution that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between an observation and each of the corresponding elementary distributions which are then used to form a weighted ensemble of learning machines. Because our layer is trained with backpropagation, it can naturally be integrated into existing deep learning frameworks. Our evaluation on image, text, graph, and time-series data shows a significant improvement in the performance on out-of-training target domains without domain information and any access to data from the target domains. This finding supports the practicality of the I.E.D. assumption and demonstrates that our GDUs can learn to represent these elementary distributions.
1 INTRODUCTION
A fundamental assumption in machine learning is that training and test data are independently and identically distributed (I.I.D.). This assumption ensures consistency results from statistical learning theory, meaning that the learning machine obtained from empirical risk minimization (ERM) attains the lowest achievable risk as sample size grows (Vapnik, 1998; Schölkopf, 2019). Unfortunately, a considerable amount of research and real-world applications in the past decades has provided staggering evidence against this assumption (Zhao et al., 2018; 2020; Ren et al., 2019; Taori et al., 2020) (see D’Amour et al. (2020) for case studies). The violation of the I.I.D. assumption is usually caused by a distribution shift (DS) and can result in inconsistent learning machines (Sugiyama & Kawanabe, 2012), implying the loss of performance guarantees of machine learning models in the real world. Therefore, to tackle DS, recent work advocates for domain generalization (DG) (Blanchard et al., 2011; Muandet et al., 2013; Li et al., 2017; 2018b; Zhou et al., 2021a). This generalization to utterly unseen domains is crucial for robust deployment of the models in practice, especially when new, unforeseeable domains emerge after model deployment. However, the most important question that DG seeks to answer is how to identify the right invariance that allows for generalization.
The contribution of this work is twofold. First, we advocate that real-world distributions are composed of smaller “units” called invariant elementary distributions that remain invariant across different domains; see Section 2.1. Second, we propose to implement this hypothesis through so-called gated domain units (GDUs). Specifically, we developed a modular neural network layer that consists of GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to express the domain similarities during training. For this purpose, we adopt the theoretical framework of reproducing kernel Hilbert space (RKHS) to retrieve a geometrical representation of each distribution in the form of a kernel mean embedding (KME) without information loss (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Sriperumbudur et al., 2010; Muandet et al., 2017). This representation accommodates methods based on analytical geometry to measure similarities between distributions.
We show that these similarity measures can be learned and utilized to improve the generalization capability of deep learning models to previously unseen domains.
The remainder of this paper is organized as follows: Our theoretical framework is laid out in Section 2 with our modular DG layer implementation shown in Section 3. In Section 4, we outline related work. Our experimental evaluations are presented in Section 5. Finally, we discuss potential limitations of our approach and future work in Section 6.
2 DOMAIN GENERALIZATION WITH INVARIANT ELEMENTARY DISTRIBUTIONS
We assume a mixture component shift for the multi-source DG setting. This shift refers to the most common DS, stating that the data is made up of different sources, each with its own characteristics, and their proportions vary between the training and test scenario (Quinonero-Candela et al., 2022). Our work thus differs in its assumption from related work in DG, in which the central assumption is the covariate shift (i.e., the conditional distribution of the source and test data stays the same) (David et al., 2010). In the following, let X and Y be the input and output space, with a joint distribution P. We are given a set of D labeled source datasets {D_{s_i}}_{i=1}^{D} with D_{s_i} ⊆ X × Y. Each of the source datasets is assumed to be I.I.D. generated by a joint distribution P_{s_i} with support on X × Y, henceforth denoted a domain. The set of probability measures with support on X × Y is denoted by P. The multi-source dataset D_s comprises the merged individual source datasets {D_{s_j}}_{j=1}^{D}. We aim to minimize the empirical risk; see Section 3.3 for details. Important notation is summarized in Table 1.
2.1 INVARIANT ELEMENTARY DISTRIBUTIONS
Similar to Mansour et al. (2009; 2012); Hoffman et al. (2018a), we assume that the distribution of the source dataset can be described as a convex combination P_s = Σ_{j=1}^{D} α^s_j P_{s_j}, where α^s = (α^s_1, . . . , α^s_D) is an element of the probability simplex, i.e., α^s ∈ ∆_D := {α ∈ R^D | α_j ≥ 0 ∧ Σ_{j=1}^{D} α_j = 1}. In other words, α_j quantifies the contribution of each individual source domain to the combined source domain.
In contrast, we generalize their problem descriptions: we express the distribution of each domain as a convex combination of K elementary distributions {P_j}_{j=1}^{K} ⊂ P, meaning that P_s = Σ_{j=1}^{K} α_j P_j where α ∈ ∆_K. Our main assumption is that these elementary distributions remain invariant across the domains. The advantage is that we can find an invariant subspace at a more elementary level, as opposed to considering the source domains as some sort of basis for all unseen domains. Figure 1 illustrates this idea.
Theoretically speaking, the I.E.D. assumption is appealing because it implies an invariant structure in the solution space, as shown in the following lemma. The proof is given in Appendix A.1. Lemma 1. Let L : Y × Y → R+ be a non-negative loss function, F a hypothesis space of functions f : X → Y, and P_s(X,Y) a data distribution. Suppose that the I.E.D. assumption holds, i.e., there exist K elementary distributions P_1, . . . , P_K such that any data distribution can be expressed as P_s(X,Y) = Σ_{j=1}^{K} α_j P_j(X,Y) for some α ∈ ∆_K. Then, the corresponding Bayes predictor f* ∈ argmin_{f∈F} E_{(X,Y)∼P_s}[L(Y, f(X))] is Pareto-optimal with respect to a vector of elementary risk functionals (R_1, . . . , R_K) : F → R^K_+ where R_j(f) := E_{(X,Y)∼P_j}[L(Y, f(X))].
Lemma 1 implies that, under the I.E.D. assumption, Bayes predictors must belong to a subspace of F called the Pareto set F_Pareto ⊂ F which consists of Pareto-optimal models. The model f is said to be Pareto-optimal if there exists no g ∈ F such that R_j(g) ≤ R_j(f) for all j ∈ {1, . . . , K} with R_j(g) < R_j(f) for some j; see, e.g., Sener & Koltun (2018, Definition 1). In other words, the I.E.D. assumption allows us to translate the invariance property of data distributions to the solution space. Since Bayes predictors of all future test domains must lie within the Pareto set, which is a
strict subset of the original hypothesis space, it is still possible to identify the optimal predictors of future test domains, even without additional data from the test domains, except the I.E.D. assumption itself. Hence, given data from the training domains, it is sufficient for the purpose of generalization to maintain only solutions within this Pareto set during the training time.
Unfortunately, neither the elementary distributions nor the weights α are known in practice. Motivated by this theoretical insight, our DG layer presented in Section 3 is designed to uncover them from a multi-source training dataset Ds. While Lemma 1 shows the theoretical appeal of the I.E.D. assumption, we discuss below a situation in which it might hold in practice. The limitations will be discussed later in Section 6.
Real-world example. In this work, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution Pj . For a previously unseen individual, we can then determine the coefficients αsj and quantify each factor’s contribution without any information about the individual’s origin.
2.2 KERNEL MEAN EMBEDDING OF DISTRIBUTIONS
We leverage the KME of distributions (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Muandet et al., 2017) to discover the elementary distributions and evaluate similarities between them. Let H be a reproducing kernel Hilbert space (RKHS) of real-valued functions on X with a reproducing kernel k : X × X → R (Schölkopf et al., 2001). The KME of a probability measure P ∈ P in the RKHS H is defined by a mapping ϕ(P) = µ_P := ∫_X k(x, ·) dP(x). We assume that the kernel k is characteristic, i.e., the mapping µ_P is injective (Fukumizu et al., 2004; Sriperumbudur et al., 2008). Theoretically, this essential assumption ensures that there is no information loss when mapping the distribution into H. Given the samples {x_1, . . . , x_n} generated I.I.D. from P, µ_P can be approximated by the empirical KME µ̂_P = (1/n) Σ_{i=1}^{n} k(x_i, ·) = (1/n) Σ_{i=1}^{n} ϕ(x_i). We refer non-expert readers to Muandet et al. (2017) for a thorough review on this topic.
Challenges. Figure 1 depicts two challenges that come with our assumption of elementary distributions. First, since we do not have access to the samples from the hidden elementary distributions, the elementary KME cannot be estimated directly from the samples at hand. To overcome this challenge, we instead seek a proxy KME µ_{V_j} := (1/N) Σ_{k=1}^{N} ϕ(v^j_k) = (1/N) Σ_{k=1}^{N} k(v^j_k, ·) for each elementary KME µ_{P_j} from a domain basis V_j, where V_j = {v^j_1, . . . , v^j_N} ⊆ X for all j ∈ {1, . . . , M}. Hence, the KME µ_{V_j} can be interpreted as the KME of the empirical probability measure P̂_{V_j} = (1/N) Σ_{k=1}^{N} δ_{v^j_k}. Here, we assume that M = K. The sets V_j are referred to as elementary domain bases. Intuitively, the elementary domain bases V_1, . . . , V_M represent each elementary distribution by a set of vectors that mimic samples generated from the corresponding distribution. In Figure 1, V_1 and V_2 as well as their mapping into H visualize this first challenge. The second challenge is the objective of learning the unknown similarity between a single sample x_i and an elementary domain V_j, which we denote by β_ij. Considering the advantage of KMEs, namely that this challenge can be tackled from a geometrical viewpoint, we quantify similarities between KMEs. For example, in Figure 1, the similarity between ϕ(x_i) and µ_{V_1} (β_i1) and µ_{V_2} (β_i2) could be quantified as their distance or angle. These similarity coefficients enable our Domain Generalization Layer to represent a convex combination of elementary domain-specific learning machines, commonly known as an ensemble. We introduce our layer in the following Section 3.
3 DOMAIN GENERALIZATION LAYER
This section aims to transfer the theoretical ideas presented in Section 2 into a deep learning framework. For the purpose of implementation, let x ∈ Rh×w denote the input data point and hξ : Rh×w → Re the feature extractor (FE) that maps the input into a low-dimensional representation x̃ ∈ Re. Then the prediction layer gθ : Re → Y infers the label y. To tackle the DG problem, we introduce a layer module called the gated domain unit (GDU). A GDU consists of three main components: (1) a similarity function γ : H×H → R that is the same for all elementary domains, (2) an elementary basis Vj and (3) a learning machine f(x̃, θj) for each elementary domain j ∈ {1, . . . ,M}. The architecture of the layer proposed herein is depicted in Figure 2.
Essentially, the process is as follows: First, the j-th GDU takes x̃_i as an input and yields β_ij as an output. The KME of each domain basis V_j is required in order to apply γ to compute the similarity between x̃_i and V_j. These KMEs are obtained by ϕ(V_j) := µ_{V_j} = (1/N) Σ_{k=1}^{N} ϕ(v^j_k) = (1/N) Σ_{k=1}^{N} k(v^j_k, ·). The GDU, therefore, has the task of allocating coefficients β_ij for each elementary domain based on a similarity function γ. The function γ outputs the coefficients β_ij = γ(ϕ(x̃_i), µ_{V_j}), which in turn represent similarities between the KMEs of the corresponding domain basis V_j and the input x̃_i. Theoretically speaking, µ_{V_j} and the feature mapping ϕ(x̃_i) are elements of the associated RKHS H, which allows us to evaluate similarities of non-linear features in a higher-dimensional feature space. Each GDU is then connected to a learning machine f(x̃_i, θ_j) that yields an elementary domain-specific inference. The final prediction of the layer is then an ensemble of these learning machines, g_θ(x̃_i) = Σ_{j=1}^{M} β_ij f(x̃_i, θ_j), where θ = (θ_1, . . . , θ_M). In Figure 2, we give an overview of how data is processed and information is stored in the GDU.
In summary, GDUs leverage the invariant elementary distribution (I.E.D.) assumption and represent our algorithmic contribution: the elementary domain bases are stored as weights in the layer. Storing information as a weight matrix (i.e., domain memory) allows the elementary domain bases to be learned efficiently using backpropagation. Hence, we avoid the dependency on problem-adaptive methods (e.g., domain-adversarial training) and domain information (e.g., domain labels).
3.1 DOMAIN SIMILARITY MEASURES
For the similarity function γ, we consider two similarity measures H(ϕ(x̃), µVj ), namely the cosine similarity (CS) (Kim et al., 2019) and maximum mean discrepancy (MMD) (Borgwardt et al., 2006; Gretton et al., 2012). To ensure that the resulting coefficients βi lie on the probability simplex, we apply the kernel softmax function (Gao et al., 2019) and interpret its output as the similarity between an observation x̃ and an elementary domain basis Vi. We get
β_ij = γ(ϕ(x̃_i), µ_{V_j}) = exp(κ H(ϕ(x̃_i), µ_{V_j})) / Σ_{k=1}^{M} exp(κ H(ϕ(x̃_i), µ_{V_k})), (1)
where κ > 0 is a positive softness parameter for the kernel softmax. Geometrically speaking, these similarities correspond to the angle and distance of two KMEs in the RKHS H. The function ϕ maps the observation x̃ and domain basis V_j into H, meaning that ϕ(x̃) = µ_{δ_x̃} = k(x̃, ·) is the KME of a Dirac measure δ_x̃ and ϕ(V_j) = µ_{V_j} = (1/N) Σ_{k=1}^{N} k(v^j_k, ·).
CS. The CS function H(ϕ(x̃_i), µ_{V_j}) = ⟨ϕ(x̃_i), µ_{V_j}⟩_H / (‖ϕ(x̃_i)‖_H ‖µ_{V_j}‖_H) is used as an angle-based similarity.
MMD. We consider the MMD for calculating a distance-based similarity measure. The distance is given as ‖ϕ(x̃_i) − µ_{V_j}‖_H. Subsequently, the similarity function H is the negative MMD: H(ϕ(x̃_i), µ_{V_j}) = −‖ϕ(x̃_i) − µ_{V_j}‖_H. The intuition behind the negative MMD is to put higher weights on samples that are closer to the KME of an elementary domain basis.
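In an RKHS induced by an RBF kernel, both similarities and the kernel softmax of Equation 1 reduce to averages of kernel evaluations. The sketch below illustrates this; the tensor shapes, the kernel bandwidth σ, the softness κ and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def rbf(a, b, sigma=1.0):
    """RBF kernel matrix between the rows of a and the rows of b."""
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def gdu_coefficients(x_tilde, bases, kappa=5.0, sigma=1.0, measure="mmd"):
    """Compute beta_ij (Eq. 1): similarity between phi(x_i) and the KME of each
    elementary domain basis V_j, passed through a kernel softmax.
    x_tilde: (B, e) extracted features; bases: (M, N, e) learnable domain bases."""
    B, (M, N, e) = x_tilde.shape[0], bases.shape
    flat = bases.reshape(M * N, e)
    # <phi(x_i), mu_Vj>_H = (1/N) sum_k k(x_i, v_k^j)
    cross = rbf(x_tilde, flat, sigma).reshape(B, M, N).mean(dim=-1)        # (B, M)
    # ||mu_Vj||_H^2 = (1/N^2) sum_{k,l} k(v_k^j, v_l^j)
    mu_norm_sq = torch.stack([rbf(bases[j], bases[j], sigma).mean() for j in range(M)])
    k_xx = torch.ones(B, 1, device=x_tilde.device)    # k(x, x) = 1 for the RBF kernel
    if measure == "mmd":                               # negative distance in the RKHS
        H = -(k_xx - 2 * cross + mu_norm_sq).clamp(min=0).sqrt()
    else:                                              # cosine similarity (angle)
        H = cross / (k_xx.sqrt() * mu_norm_sq.sqrt())
    return torch.softmax(kappa * H, dim=1)             # beta lies on the simplex
```

The layer output would then be the weighted ensemble of the elementary learning machines, i.e., summing β_ij · f_j(x̃_i) over j.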
3.2 PROJECTION-BASED GENERALIZATION
For classification tasks, we introduce an alternative approach to infer the β_i coefficients that is based on the idea of kernel sparse coding (Gao et al., 2010; 2013). Herein the goal is to find an approximated representation of each feature mapping ϕ(x̃_i) using the elements of a dictionary {µ_{V_j}}_{j=1}^{M}. This approach allows us to approximate the feature mapping with these elements by ϕ(x̃_i) ≈ Σ_{j=1}^{M} β_ij µ_{V_j}. In contrast to the aforementioned approaches, an elementary domain KME µ_{V_j} does not necessarily represent the KME of an elementary distribution µ_{P_j}. Therefore, we present another approach that aims to find a set {µ_{V_j}}_{j=1}^{M} that permits µ_{P_s} to be represented as a linear combination. Since P is assumed to be a convex combination of elementary distributions, we can find a linear combination to represent µ_{P_s} by the domain KMEs µ_{V_j}, as long as µ_{P_s} ∈ H_M := span{µ_{V_j} | j = 1, . . . , M}. The RKHS H_M is a subspace of the actual RKHS H, which allows us to represent elements of H at least approximately in the subspace H_M. By keeping H_M large, we gain more representative power. To make H_M as large as possible, we have to ensure its spanning elements are linearly independent or, even better, orthogonal. Orthogonal KMEs ensure two desirable properties. First, pairwise orthogonal elements in H_M guarantee no redundancy. Second, having orthogonal elements allows us to make use of the orthogonal projection. This projection geometrically yields the best approximation of ϕ(x̃) in H_M. In other words, we can achieve the best possible approximation of the feature mapping by using its orthogonal components (see Proposition 3.1). The orthogonal projection is given by
Π_{H_M} : H → H_M, ϕ(x̃) ↦ Σ_{j=1}^{M} (⟨ϕ(x̃), µ_{V_j}⟩_H / ‖µ_{V_j}‖²_H) µ_{V_j}. (2)
Proposition 3.1. For a KME µ_P of a given mixture distribution P the following holds: µ_P ∈ span{µ_{V_j} | j = 1, . . . , M}, where ⟨µ_{V_i}, µ_{V_j}⟩_H = 0 for all i ≠ j (i.e., the KMEs of the elementary domain bases are pairwise orthogonal). The value of the function Σ_{j=1}^{M} ‖µ_P − β_j µ_{V_j}‖²_H is minimal if the coefficients are set as β*_j = ⟨µ_P, µ_{V_j}⟩_H / ‖µ_{V_j}‖²_H.
Proposition 3.1 can be used to give an approximation of µ_P by projecting it into H_M, i.e., µ_P ≈ Σ_{j=1}^{M} β_j µ_{V_j}, where β_j = ⟨µ_P, µ_{V_j}⟩_H / ‖µ_{V_j}‖²_H. This best-approximation property is the main advantage of our assumption in Proposition 3.1 (i.e., having orthogonal KMEs) and thus a potential advantage of projection-based DG. Appendix A.2 provides the proof of Proposition 3.1.
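For intuition, the projection-based coefficients admit the same kernel-average computation as above; the short sketch below follows the shapes and the RBF kernel of the previous sketch and is, again, only an assumed illustration.

```python
import torch

def projection_coefficients(x_tilde, bases, sigma=1.0):
    """Projection-based betas (Proposition 3.1):
    beta_ij = <phi(x_i), mu_Vj>_H / ||mu_Vj||_H^2, i.e. the coefficients of the
    orthogonal projection of phi(x_i) onto span{mu_V1, ..., mu_VM}."""
    rbf = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    B, (M, N, e) = x_tilde.shape[0], bases.shape
    cross = rbf(x_tilde, bases.reshape(M * N, e)).reshape(B, M, N).mean(dim=-1)
    mu_norm_sq = torch.stack([rbf(bases[j], bases[j]).mean() for j in range(M)])
    return cross / mu_norm_sq      # (B, M); no softmax, sparsity comes from an L1 term
```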
3.3 MODEL TRAINING
For model training, we adapt the domain adaptation (DA) framework from Zhuang et al. (2021). Thus, our learning objective function is formalized as L(g) + λ_D Ω_D(‖g‖_H). The goal of the training can be described in terms of the two components of this function. Consider a batch of training data {x_1, . . . , x_b}, where b is the batch size. During training, we minimize the loss function L(g) = (1/b) Σ_{i=1}^{b} L(ŷ_i, y_i) = (1/b) Σ_{i=1}^{b} L(Σ_{j=1}^{M} γ(ϕ(x̃_i), µ_{V_j}) f_j(x̃_i), y_i) for an underlying task and the respective batch size. In addition, our objective is that the model learns to distinguish between different domains. Thus, the regularization Ω_D is introduced to control the domain bases. In our case, we require the regularization Ω_D to ensure that the KMEs of the elementary domain bases are able to represent the KMEs of the elementary domains. Therefore, we minimize the MMD between the feature mappings ϕ(x̃_i) and the associated representation Σ_{j=1}^{M} β_ij µ_{V_j}. Note that β_ij = γ(ϕ(x̃_i), µ_{V_j}). Hence, the regularization Ω_D = Ω_D^OLS is defined as Ω_D^OLS(‖g‖_H) = (1/b) Σ_{i=1}^{b} ‖ϕ(x̃_i) − Σ_{j=1}^{M} β_ij µ_{V_j}‖²_H (see Appendix B.2 for details). The intuition is the objective to represent each feature mapping ϕ(x̃_i) by the domain KMEs µ_{V_j}. Thus, we try to minimize the MMD between the feature map and a combination of the µ_{V_j}. The minimum of the stated regularization can be interpreted as the ordinary least squares solution of a regression problem of ϕ(x̃_i) on the components of H_M. In other words, we want to ensure that the basis V_j is contained in the feature mappings ϕ(x̃_i). In the particular case of projection, we want the KMEs of the elementary domains to be orthogonal to ensure high expressive power. For this purpose, the additional term Ω_D^⊥ is introduced to ensure the desired orthogonality. Considering a kernel function with k(x, x) = 1, orthogonality would require the Gram matrix K_ij = ⟨µ_{V_i}, µ_{V_j}⟩_H to be close to the identity matrix I. There are a variety of methods for regularizing matrices available (Xie et al., 2017; Bansal et al., 2018). A well-known method to ensure orthogonality is the soft orthogonality (SO) regularization Ω_D^⊥ = λ‖K − I‖²_F (Bansal et al., 2018). As pointed out by Bansal et al. (2018), the spectral restricted isometry property (SRIP) and mutual coherence (MC) regularization can be promising alternatives to SO and are thus additionally implemented in the DG layer. Hence, in the case of projection, the regularization is given by Ω_D(‖g‖_H) = λ_OLS Ω_D^OLS(‖g‖_H) + λ_ORTH Ω_D^⊥(‖g‖_H), with λ_OLS, λ_ORTH ≥ 0.
Lastly, sparse coding is an efficient technique to find the least possible basis to recover the data subject to a reconstruction error (Olshausen & Field, 1997). Several such applications yield strong performances, for example in the field of computer vision (Lee et al., 2007; Yang et al., 2009). Kernel sparse coding transfers the reconstruction problem of sparse coding into H by using the mapping ϕ, and, by applying a kernel function, the reconstruction error is quantified via the inner product (Gao et al., 2010; 2013). To ensure sparsity, we apply the L1-norm on the coefficients β and add Ω_D^{L1}(‖γ‖) := ‖γ(ϕ(x̃_i), µ_{V_j})‖_1 to the regularization term Ω_D with the corresponding coefficient λ_{L1}. Appendix B.3 gives a visual overview of the model training.
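The two regularizers can likewise be expressed with kernel averages; the sketch below computes Ω_OLS and the soft-orthogonality penalty ‖K − I‖²_F under the same assumed shapes and RBF kernel as the earlier sketches.

```python
import torch

def rbf(a, b, sigma=1.0):
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def gdu_regularizers(x_tilde, bases, beta, sigma=1.0):
    """Omega_OLS: squared MMD between phi(x_i) and sum_j beta_ij mu_Vj.
    Omega_orth: soft orthogonality, pushing the Gram matrix of the KMEs to I.
    x_tilde: (B, e), bases: (M, N, e), beta: (B, M)."""
    B, (M, N, e) = beta.shape[0], bases.shape
    flat = bases.reshape(M * N, e)
    cross = rbf(x_tilde, flat, sigma).reshape(B, M, N).mean(dim=-1)       # (B, M)
    gram = rbf(flat, flat, sigma).reshape(M, N, M, N).mean(dim=(1, 3))    # (M, M)
    k_xx = torch.ones(B, device=x_tilde.device)            # k(x, x) = 1 for RBF
    recon = (k_xx - 2 * (beta * cross).sum(dim=1)
             + torch.einsum('bj,jl,bl->b', beta, gram, beta))
    omega_ols = recon.mean()
    omega_orth = (gram - torch.eye(M, device=gram.device)).pow(2).sum()
    return omega_ols, omega_orth
```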
4 RELATED WORK
DG, also known as out-of-distribution (OOD) generalization, is among the hardest problems in machine learning (Blanchard et al., 2011; Muandet et al., 2013; Arjovsky et al., 2019). In contrast, DA, which predates DG and OOD problems, deals with a slightly simpler scenario in which some data from the test distribution are available (Ganin et al., 2015). Hence, based on the available data, the task is to develop learning machines that transfer knowledge learned in a source domain specifically to the target domain. Approaches pursued in DA can be grouped primarily into (1) discrepancy-based
DA (Sun et al., 2016; Peng & Saenko, 2018; Ben-David et al., 2010; Fang et al., 2020; Tzeng et al., 2014; Long et al., 2015; Baktashmotlagh et al., 2016) (2) adversary-based DA (Tzeng et al., 2017; Liu & Tuzel, 2016; Ganin et al., 2015; Long et al., 2018), and (3) reconstruction-based DA (Bousmalis et al., 2016; Hoffman et al., 2018b; Kim et al., 2017; Yi et al., 2017; Zhu et al., 2017; Ghifary et al., 2014). In DA, learning the domain-invariant components requires access to unlabeled data from the target domain. Unlike problems in DA, where the observed data from the test domains can be used to find the most appropriate invariant structures (Ben-David et al., 2010), the lack thereof in DG calls for a postulation of invariant structure that will enable the OOD generalization.
To enable generalization to unseen domains without any access to data from them, researchers have made significant progress in the past decade and developed a broad spectrum of methodologies (Zhou et al., 2021a;c; Li et al., 2019; Blanchard et al., 2011). For thorough review see, e.g., Zhou et al. (2021a); Wang et al. (2021). Existing works can be categorized into methods based on domaininvariant representation learning (Muandet et al., 2013; Li et al., 2018b;d), meta-learning (Li et al., 2018a; Balaji et al., 2018), data augmentation (Zhou et al., 2020), to name a few. Another recent stream of research from a causal perspective includes invariant risk minimization (Arjovsky et al., 2019), invariant causal prediction (Peters et al., 2016), and causal representation learning (Schölkopf et al., 2021). The overall motivation here is to learn the representation that is robust to domain-specific spurious correlations. In other words, it is postulated that “causal” features are the right kind of invariance that will enable OOD generalization. Despite the successful applications, DG remains a challenging research gap.
We differentiate our work from existing ones as follows. First, we postulate the existence of a domain-invariant structure at the distributional level rather than at the level of the data representation, which is a common assumption in DG. This is motivated by theoretical results (Mansour et al., 2009; Hoffman et al., 2018a) stating that a distribution-weighted combination of source hypotheses represents the ideal hypothesis. Furthermore, our distributional assumption, as we argued in Section 2, generalizes previous work that proposes to use domain-specific knowledge, tackling the problem of DG from a more elementary setting. For example, approaches such as Piratla et al. (2020); Monteiro et al. (2021) can be compared to our GDUs as domain-specific predictors in the special case where each elementary domain represents a single source domain. However, GDUs do not assume the existence of a single common classifier for all the domains; instead, they provide a combination of multiple common classifiers shared between different source domains.
Second, we incorporate the I.E.D. assumption directly into our model’s architecture, as shown in Figure 2. Designing effective architectures for DG has been largely neglected (Zhou et al., 2020, Sec. 4.1). Last, we do not assume access to domain information. Although obtaining such information can be difficult in practice, see our short discussion in Appendix C.4 (Niu et al., 2017), DG methods that can deal with their absence (e.g., Huang et al. (2020); Carlucci et al. (2019); Li et al. (2018c)) are yet scarce (Zhou et al., 2020, Sec. 4.2).
5 EXPERIMENTS
Since ERM is one of the strongest baselines in DG (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021), we first compare our approach to ERM and ensemble learning (Table 2 and Appendix C.1). Second, we benchmark our approach against state-of-the-art DG methods (e.g., CORAL, LISA, IRM, FISH, Group DRO), focusing on image, graph, and text data (Table 3 and Appendix C.4). Third, we analyse the GDUs' robustness against DS that occurs in daily clinical practice (Table 12 and Appendix C.3). Finally, in Appendix C.2, we conduct an ablation study focusing on the representation learned during training (Appendix C.2.2). In our experiments, we distinguish two modes of training the DG layer: fine tuning (FT), where we extract features using a pre-trained model, and end-to-end training (E2E), where the FE and the DG layer are jointly trained1.
5.1 PROOF-OF-CONCEPT BASED ON DIGITS CLASSIFICATION
Following Feng et al. (2020) among others, we create a multi-source dataset by combining five publicly available digits image datasets, namely MNIST (Lecun et al., 1998), MNIST-M (Ganin & Lempitsky, 2015), SVHN (Netzer et al., 2011), USPS, and Synthetic Digits (SYN) (Ganin &
1All source code is made available on GitHub.
Lempitsky, 2015). The task is to classify digits between zero and nine. Each of these datasets is in turn considered an out-of-training target domain which is inaccessible during training, and the remaining four are the source domains. Details are given in Appendix C.1. Table 2 summarizes the results for the most challenging out-of-training target domain, namely MNIST-M. In Appendix C.1, we provide the results on the remaining target domains in Table 7 and a discussion of heuristics for choosing hyperparameters for our GDUs. Our method noticeably improves the mean accuracy for all datasets and decreases the standard deviation in comparison to the ERM and ensemble baselines, making the results more stable across the ten iterations reported.
Table 2: Results of the Digits experiment. The mean (standard deviation) accuracy over ten runs is reported. Best results are in bold.

        Method        MNIST-M
  ERM   Single        63.00 (3.20)
        Ensemble      62.87 (1.50)
  FT    CS            68.55 (0.80)
        MMD           68.62 (0.70)
        PROJECTION    68.56 (0.91)
  E2E   CS            69.25 (0.61)
        MMD           69.04 (0.83)
        PROJECTION    68.67 (0.98)
We also compare our methods with related work that uses domain information and data augmentation, based on the results of Li et al. (2021) (Table 9 in Appendix C.1). Although data augmentation is a comparatively strong approach in DG and we do not use domain information, we obtain results comparable to the baselines reported by Li et al. (2021).
Ablation study We chose the digits dataset to analyze each component of our DG layer (first paragraph of Appendix C.1 and Appendix C.2). We (A) vary M and N (Figure 9) and the strength of the regularization terms (Figures 6, 7, and 8) to assess the sensitivity of the DG layer to the choice of hyperparameters, and (B) visualize the output of the FE (Figure 11). Our ablation study in (A) reveals stable results across different sets of hyper-parameters. While the layer is not sensitive to the choice of regularization strength, we recommend not to omit the regularization completely, although the computational expenses decrease without the orthogonal regularization. As an illustration in (B), we project the output of the FE trained with a dense layer (ERM) and with the DG layer by t-SNE (t-distributed stochastic neighbor embedding). The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
5.2 WILDS BENCHMARK
To challenge the I.E.D. assumption and the OOD generalization capabilities of the GDUs, we use WILDS, a curated set of real-world experiments for benchmarking DG methods (Koh et al., 2021). Further, WILDS is a semi-synthetic benchmark that operates under similar assumptions as the source component shift (Koh et al., 2021). We consider the following eight datasets, which represent real-world DG tasks: Camelyon17, FMoW, Amazon, iWildCam, RxRx1, OGB-MolPCBA, CivilComments, and PovertyMap. We closely follow Koh et al. (2021) for the experiments. Details on datasets and benchmark methods are given in Appendix C.4. We present our benchmarking in Table 3. Our results are achieved out-of-the-box (i.e., with default parameters), since hyperparameter optimization has a substantial impact on the generalization performance (Gulrajani & Lopez-Paz, 2020) and we aim to highlight the improvements solely attributable to our GDUs.
First, we observe the strengths and weaknesses of the benchmarks on the different datasets: every baseline falls below ERM at least once. In contrast, the GDUs show consistent behavior across the datasets; they perform very well on some datasets (e.g., FMoW, PovertyMap) and do not fall below ERM in any of the GDU experiments conducted. In addition, the baselines require domain information. Our approach requires less information, yet achieves results comparable to the benchmarks.
5.3 ECG EXPERIMENT
The PhysioNet/Computing in Cardiology Challenge 2020 (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020) aims to identify clinical diagnoses from 12-lead ECG recordings from 6 different databases. This publicly available pooled dataset contains 43,101 recordings sampled with various sampling frequencies and lengths. Each recording is labeled as having one or more of 24 cardiac abnormalities; hence, the task is to perform a multi-label binary classification. For our experiment, we iterate over the databases, taking one at a time as the test domain while utilizing
the remaining five databases for training. The performance was measured according to the original PhysioNet challenge score. This generalized intersection-over-union score assigns partial credit to misdiagnoses that result in similar treatments or outcomes. The score is then adjusted for a solution that always selects the normal/majority class and normalized for the perfect solution. Therefore, the score can take negative values, with a best possible score of 1.
Table 4 reports results for the ECG experiments (see Appendix C.3 for details). For this clinical time-series data, we observe an improvement in mean score and a reduction in standard deviation over the ERM and ERM-ensemble baselines across all DG tasks. We attribute the poorer performance on the PTB dataset to the fact that it contains considerably longer recordings than the other datasets (except for INCART, which, however, contains only 75 samples) and has a higher sampling rate (1000 Hz vs. 500 Hz and 257 Hz). The negative challenge score for the PTB-XL dataset is due to the presence of labels not observed in the other datasets, as well as a considerably smaller amount of training data, since the PTB-XL dataset comprises the majority of all samples (21,837 out of 43,101).
Table 4 (excerpt; fine-tuning rows): challenge score, mean (standard deviation), across the six test databases.
  FT  CS    0.1830 (0.0061)  0.2950 (0.0035)  0.1595 (0.0313)  -8.8802 (0.1069)  -0.1932 (0.0168)  0.1853 (0.0036)
  FT  MMD   0.1877 (0.0077)  0.3011 (0.0035)  0.2100 (0.0413)  -8.8082 (0.1458)  -0.1567 (0.0211)  0.1919 (0.0036)
6 CONCLUSION AND DISCUSSIONS
We introduced the I.E.D. assumption, postulating that real-world distributions are composed of elementary distributions that remain invariant across different domains and showed that it implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. Empirical results based on real-world data support the practicality of the I.E.D. assumption and that we can learn such a representation. Further, we presented a modular neural network layer consisting of Gated
Domain Units (GDUs) that leverage the I.E.D. assumption. Our GDUs can substantially improve the downstream performance of learning machines in real-world DG tasks. Across our experiments, we observed that for some datasets FT is better than E2E and vice versa. In E2E training, the feature extractor (encoder) is jointly trained with GDUs. Hence, the latent representation is stochastic during training, meaning that we have variability in the representation fed into GDUs between epochs. In contrast, in FT, the feature extractor is pretrained and always produces the same embedding. Especially with large feature extractors such as ResNet-50, learning the elementary domains can be more effective when we avoid any stochasticity in the latent representation.
Limitations. A major limitation is that we do not yet provide theoretical evidence that the I.E.D. assumption holds in practice. We aim to expand the scope of the theoretical understanding of the I.E.D. assumption and the GDUs. In addition, the particular theoretical setting of Albuquerque et al. (2019) (i.e., each elementary domain represents a source domain) seems promising for extending their generalization guarantee to cases where our I.E.D. assumption holds. Second, our GDU layer induces additional computational overhead due to the regularization and a model size that increases as a function of the number of elementary domains. Notably, our improvement is achieved with a relatively small number of elementary domains, indicating that the increased complexity is not a necessary consequence of applying the DG layer. Also, as the ensemble baseline shows, the results achieved are not merely a consequence of increased model complexity.
Future work We expect the I.E.D. assumption and GDUs to be adapted in future work, yielding novel applications that tackle DG. For example, we suggest dynamically increasing the number of elementary domains during learning until their distributional variance, as a measure of their heterogeneity, reaches a plateau. Hence, one would learn the number of elementary domains instead of fixing it prior to training.
Appendices
Table of Contents
A Proofs
  A.1 Proof of Lemma 1
  A.2 Proof of Proposition 3.1

B Details on the Gated Domain Units
  B.1 Real-world example: Visualizations
  B.2 Detailed View of the Regularization Term Ω_D^OLS
  B.3 Visualization of DG Layer

C Experiments
  C.1 Digits Experiment
  C.2 Ablation Study
  C.3 ECG Experiment
  C.4 WILDS Benchmarking Experiments
A PROOFS
A.1 PROOF OF LEMMA 1
Proof. The result holds trivially for $K = 1$. For $K \geq 2$ and by the I.E.D. assumption, $\mathbb{P}^s(X,Y) = \sum_{j=1}^{K} \alpha_j \mathbb{P}_j(X,Y)$ for some $\alpha \in \Delta_K$. Then, we can write the risk functional for each $f \in \mathcal{F}$ as
$$R(f) = \int L(y, f(x)) \, \mathrm{d}\mathbb{P}^s(x, y) = \int L(y, f(x)) \, \mathrm{d}\Big(\sum_{j=1}^{K} \alpha_j \mathbb{P}_j(x, y)\Big) = \sum_{j=1}^{K} \alpha_j \int L(y, f(x)) \, \mathrm{d}\mathbb{P}_j(x, y) = \sum_{j=1}^{K} \alpha_j R_j(f),$$
where $R_j : \mathcal{F} \to \mathbb{R}_+$ is the elementary risk functional associated with the elementary distribution $\mathbb{P}_j(X,Y)$. Hence, the Bayes predictors satisfy
$$f^* \in \operatorname*{argmin}_{f \in \mathcal{F}} R(f) = \operatorname*{argmin}_{f \in \mathcal{F}} \sum_{j=1}^{K} \alpha_j R_j(f). \tag{A.3}$$
Since the rhs of Equation (A.3) corresponds to the linear scalarization of a multi-objective function $(R_1, \ldots, R_K)$, its solution (i.e., a stationary point) is Pareto-optimal with respect to these objective functions (Ma et al., 2020, Definition 3.1); see also Hillermeier (2001a;b). That is, the Bayes predictors for a data distribution that satisfies the I.E.D. assumption must belong to the Pareto set $\mathcal{F}_{\mathrm{Pareto}} := \{f^* : f^* = \operatorname*{argmin}_{f \in \mathcal{F}} \sum_{j=1}^{K} \alpha_j R_j(f),\ \alpha \in \Delta_K\} \subset \mathcal{F}$.
A.2 PROOF OF PROPOSITION 3.1
Proof. Suppose we have a representation
$$\mu_{\mathbb{P}} = \sum_{j=1}^{M} \beta_j \mu_{V_j}, \qquad \langle \mu_{V_i}, \mu_{V_j} \rangle_{\mathcal{H}} = 0 \ \ \forall i \neq j, \tag{A.1}$$
i.e., $\{\mu_{V_1}, \ldots, \mu_{V_M}\}$ are pairwise orthogonal. We want to minimize the MMD by minimizing
$$\begin{aligned}
\Big\| \mu_{\mathbb{P}} - \sum_{j=1}^{M} \beta_j \mu_{V_j} \Big\|_{\mathcal{H}}^2
&= \langle \mu_{\mathbb{P}}, \mu_{\mathbb{P}} \rangle_{\mathcal{H}} - 2 \Big\langle \mu_{\mathbb{P}}, \sum_{j=1}^{M} \beta_j \mu_{V_j} \Big\rangle_{\mathcal{H}} + \Big\langle \sum_{i=1}^{M} \beta_i \mu_{V_i}, \sum_{j=1}^{M} \beta_j \mu_{V_j} \Big\rangle_{\mathcal{H}} && \text{(A.2)} \\
&= \| \mu_{\mathbb{P}} \|_{\mathcal{H}}^2 - 2 \sum_{j=1}^{M} \beta_j \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}} + \sum_{i=1}^{M} \sum_{j=1}^{M} \beta_i \beta_j \underbrace{\langle \mu_{V_i}, \mu_{V_j} \rangle_{\mathcal{H}}}_{= \delta_{ij} \langle \mu_{V_i}, \mu_{V_j} \rangle_{\mathcal{H}}} && \text{(A.3)} \\
&= \| \mu_{\mathbb{P}} \|_{\mathcal{H}}^2 - 2 \sum_{j=1}^{M} \beta_j \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}} + \sum_{j=1}^{M} \beta_j^2 \| \mu_{V_j} \|_{\mathcal{H}}^2. && \text{(A.4)}
\end{aligned}$$
By defining
$$\Phi(\beta) := \Big\| \mu_{\mathbb{P}} - \sum_{j=1}^{M} \beta_j \mu_{V_j} \Big\|_{\mathcal{H}}^2, \tag{A.5}$$
we can find the optimal $\beta_j$ by setting the partial derivative to zero:
$$\frac{\partial \Phi}{\partial \beta_j} = -2 \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}} + 2 \beta_j \| \mu_{V_j} \|_{\mathcal{H}}^2 \overset{!}{=} 0
\ \Leftrightarrow\ \beta_j \| \mu_{V_j} \|_{\mathcal{H}}^2 = \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}}
\ \Leftrightarrow\ \beta_j^* = \frac{\langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}}}{\| \mu_{V_j} \|_{\mathcal{H}}^2}. \tag{A.6}$$
Please note that the function Φ is convex.
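As a concrete check of this derivation, the following small numerical sketch (our own illustration, not part of the paper's experiments) verifies that the coefficients $\beta_j^* = \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle / \|\mu_{V_j}\|^2$ minimize $\Phi(\beta)$ when the $\mu_{V_j}$ are pairwise orthogonal. We work in an explicit finite-dimensional feature space so the inner products can be evaluated directly; all names and dimensions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pairwise-orthogonal "KMEs" of M elementary domain bases in a d-dimensional feature space.
d, M = 6, 3
Q, _ = np.linalg.qr(rng.normal(size=(d, M)))      # columns are orthonormal
mu_V = Q * rng.uniform(0.5, 2.0, size=M)          # rescale columns; still pairwise orthogonal

mu_P = rng.normal(size=d)                          # KME of the mixture distribution

# Closed-form coefficients from Proposition 3.1 / Equation (A.6).
beta_star = (mu_V.T @ mu_P) / np.sum(mu_V ** 2, axis=0)

def phi_obj(beta):
    """Phi(beta) = || mu_P - sum_j beta_j * mu_V_j ||^2."""
    return np.sum((mu_P - mu_V @ beta) ** 2)

# Random perturbations of beta_star never decrease the objective (Phi is convex).
assert all(phi_obj(beta_star) <= phi_obj(beta_star + 0.1 * rng.normal(size=M))
           for _ in range(100))
print("Phi(beta*):", phi_obj(beta_star))
```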
B DETAILS ON THE GATED DOMAIN UNITS
B.1 REAL-WORLD EXAMPLE: VISUALIZATIONS
As written in Section 2.1, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution. Figure 3 depicts our assumption and how it differs from existing works 2.
To exploit this assumption in out-of-distribution (OOD) generalization, we developed a modular neural network layer that consists of so-called Gated Domain Units (GDUs). In Figure 4, we visualized the fundamental concept of the GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between observation and each of the corresponding elementary distributions, which are then used to form a weighted ensemble of learning machines. In other words, for a previously unseen individual, we aim to determine the coefficients and quantify each factor’s contribution without any information about the individual’s origin.
B.2 DETAILED VIEW OF THE REGULARIZATION TERM ΩOLSD
First, consider the single term $\|\phi(\tilde{x}_i) - \sum_{j=1}^{M} \beta_{ij} \mu_{V_j}\|_{\mathcal{H}}^2$, which can be expressed as
$$\Big\| \phi(\tilde{x}_i) - \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2 = \underbrace{\| \phi(\tilde{x}_i) \|_{\mathcal{H}}^2}_{(1)} \underbrace{-\, 2 \Big\langle \phi(\tilde{x}_i), \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\rangle_{\mathcal{H}}}_{(2)} + \underbrace{\Big\| \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2}_{(3)}. \tag{B.1}$$

AD (1):

We begin with Term (1) and write $\|\phi(\tilde{x}_i)\|_{\mathcal{H}}^2$ as $\|\phi(\tilde{x}_i)\|_{\mathcal{H}}^2 = \langle \phi(\tilde{x}_i), \phi(\tilde{x}_i) \rangle_{\mathcal{H}} = k(\tilde{x}_i, \tilde{x}_i)$. We could evaluate this term using the kernel function $k$ for each data point in the batch $b$. However, since this term does not depend on the elementary domains $\{V_1, \ldots, V_M\}$, it is unnecessary to compute this value to minimize the penalty. Thus, we obtain a similar result by minimizing the penalty without considering $\|\phi(\tilde{x}_i)\|_{\mathcal{H}}^2$ in the regularization.

2Of note, Figure 3 is a completely fictive example, and we do not want to make medical implications in any way.
AD (2):
Term (2) can be expressed as
$$\Big\langle \phi(\tilde{x}_i), \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\rangle_{\mathcal{H}} = \sum_{j=1}^{M} \beta_{ij} \langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}. \tag{B.2}$$
Implementation-wise, the evaluation of this term requires the calculation of the inner product $\langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}$. Since our CS and projection-based methods involve this inner product to determine the coefficients $\beta_{ij}$, we pre-compute the inner product $\langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}$ once per mini-batch and store this information during training to avoid multiple calculations of the same term.

Moreover, the projection-based method does not apply a softmax and has a linear form. Therefore, Term (2) can be simplified even further:
$$\Big\langle \phi(\tilde{x}_i), \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\rangle_{\mathcal{H}} = \sum_{j=1}^{M} \beta_{ij} \langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}} \tag{B.3}$$
$$= \sum_{j=1}^{M} \frac{\langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}}{\| \mu_{V_j} \|_{\mathcal{H}}^2} \, \langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}} \tag{B.4}$$
$$= \sum_{j=1}^{M} \frac{\langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}^2}{\| \mu_{V_j} \|_{\mathcal{H}}^2}. \tag{B.5}$$
AD (3):
Last, we express Term (3) as
$$\Big\| \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2 = \sum_{j=1}^{M} \sum_{k=1}^{M} \beta_{ij} \beta_{ik} \langle \mu_{V_j}, \mu_{V_k} \rangle_{\mathcal{H}}, \tag{B.6}$$
and calculate the inner product of the domains $\langle \mu_{V_j}, \mu_{V_k} \rangle_{\mathcal{H}}$ by
$$\langle \mu_{V_j}, \mu_{V_k} \rangle_{\mathcal{H}} = \frac{1}{N^2} \sum_{l=1}^{N} \sum_{m=1}^{N} \langle \phi(v_j^l), \phi(v_k^m) \rangle_{\mathcal{H}} \tag{B.7}$$
$$= \frac{1}{N^2} \sum_{l=1}^{N} \sum_{m=1}^{N} k\big(v_j^l, v_k^m\big) =: K_{jk}, \tag{B.8}$$
where $N$ represents the number of vectors per domain basis. Note that this term does not depend on the input data $x_i$; hence, the matrix $K_{jk}$ can be calculated once at the beginning of the optimization step and stored to be re-used for all data points of a batch.
Combining Equation B.6 and Equation B.8 yields
$$\Big\| \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2 = \sum_{j=1}^{M} \sum_{k=1}^{M} \beta_{ij} \beta_{ik} \langle \mu_{V_j}, \mu_{V_k} \rangle_{\mathcal{H}} \tag{B.9}$$
$$= \frac{1}{N^2} \sum_{j=1}^{M} \sum_{k=1}^{M} \beta_{ij} \beta_{ik} \sum_{l=1}^{N} \sum_{m=1}^{N} k\big(v_j^l, v_k^m\big) \tag{B.10}$$
$$= \sum_{j=1}^{M} \sum_{k=1}^{M} \beta_{ij} \beta_{ik} K_{jk} \tag{B.11}$$
$$= \beta_i^{\top} K \beta_i, \quad \text{where } K = (K_{jk})_{j,k=1}^{M}. \tag{B.12}$$
As a final step, we use the results for Terms (1), (2), and (3) to obtain the desired regularization term
$$\Omega_D^{OLS} = \frac{1}{b} \sum_{i=1}^{b} \Big( \Big\| \phi(\tilde{x}_i) - \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2 \Big) \tag{B.13}$$
$$= \frac{1}{b} \sum_{i=1}^{b} \Big( \| \phi(\tilde{x}_i) \|_{\mathcal{H}}^2 - 2 \Big\langle \phi(\tilde{x}_i), \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\rangle_{\mathcal{H}} + \Big\| \sum_{j=1}^{M} \beta_{ij} \mu_{V_j} \Big\|_{\mathcal{H}}^2 \Big). \tag{B.14}$$
As mentioned above, $\|\phi(\tilde{x}_i)\|_{\mathcal{H}}^2$ is independent of the elementary domains and is thus a constant in the regularization. Hence, we can exclude this term, which avoids additional computational effort.
B.3 VISUALIZATION OF DG LAYER
Figure 5 depicts the layout of our DG layer.
C EXPERIMENTS
In this section, we provide a detailed description of the DG experiments presented in Section 5. Our Digits and ECG experiments are implemented using TensorFlow 2.4.1 and TensorFlow Probability 0.12.1. For the WILDS benchmarking, we use PyTorch (version 1.11.0). All source code will be made available on GitHub: https://github.com/ (TensorFlow) and https://github.com/ (PyTorch). Overall, our experiments aim to show the validity of the invariant elementary distribution (I.E.D.) assumption and the Gated Domain Units (GDUs).
For the DG layer, we considered two modes of model training: fine tuning (FT) and end-to-end training (E2E). In the FT scenario, we first pre-train the FE in the ERM-single fashion. Then, we extract features using the pre-trained model and pass them to the DG layer for training the latter. For E2E training, however, the whole model, including the FE and the DG layer, is trained jointly from the very beginning.
C.1 DIGITS EXPERIMENT
Our experimental setup is closely related to Peng et al. (2019); Feng et al. (2020); Zhang et al. (2020); Zhao et al. (2018). Each dataset, except USPS, is split into training and test sets of 25,000 and 9,000 images, respectively. For USPS, we take the whole dataset for the experiment since it contains only 9,298 images3. Our experimental setup regarding datasets, data loader, and FE is based on existing work (Feng et al., 2020; Peng et al., 2019). The structure of the FE is summarized in Table 5, and the subsequent learning machine is a dense layer.
In the Empirical Risk Minimization (ERM) single experiment, we add a dense layer with 10 outputs (activation=tanh) as a classifier to the FE. In the Empirical Risk Minimization (ERM) ensemble experiment, we add M classification heads (dense layers with 10 outputs and tanh activation each) to the FE and average their outputs for the final prediction. This sets a baseline for our DG layer, showing the performance gain against an ERM model with the same number of learning machines.
For training, we resorted to the Adam optimizer with a learning rate of 0.001. We used early stopping and selected the best model weights according to the validation accuracy. For the validation data, we used the combined test splits only of the respective source datasets. The batch size was set to 512. Although the DG layer requires more computation resources than the ERM models, all digits experiments were conducted on a single GPU (NVIDIA GeForce RTX 3090).
Heuristics for the main parameters of the DG layer From a practical perspective, our layer requires choosing two main hyper-parameters: the number of elementary domains M and, since we use the characteristic Gaussian kernel, the corresponding bandwidth parameter σ. The parameter M determines the size of the ensemble of learning machines and thus, for deep learning models, the overall network size. As a heuristic to choose M, we suggest clustering the output of a pre-trained FE. In the following, we provide an example. We pre-trained the FE for the test domain MNIST-M and passed the source data through this FE, clustering the resulting features with the k-means algorithm. Subsequently, we analysed three different metrics (Calinski-Harabasz score, Davies-Bouldin score, and Silhouette score) to select the optimal number of clusters as the basis for choosing M. All scores agreed on four to five clusters. Therefore, we set M to five and observed strong results in Table 2 in Section 5 when generalizing to the unseen test domain MNIST-M.
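A minimal sketch of this heuristic could look as follows (our own illustration; `features` stands in for the pre-trained FE output on the pooled source data, and the candidate range is a placeholder).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score,
                             davies_bouldin_score, silhouette_score)

def suggest_num_elementary_domains(features, candidates=range(2, 11), seed=0):
    """Cluster the FE output and report cluster-validity scores for each candidate M."""
    scores = {}
    for m in candidates:
        labels = KMeans(n_clusters=m, n_init=10, random_state=seed).fit_predict(features)
        scores[m] = {
            "calinski_harabasz": calinski_harabasz_score(features, labels),  # higher is better
            "davies_bouldin": davies_bouldin_score(features, labels),        # lower is better
            "silhouette": silhouette_score(features, labels),                # higher is better
        }
    return scores

# Example with random stand-in features; in practice, use the pre-trained FE embeddings.
features = np.random.default_rng(0).normal(size=(2000, 64))
for m, s in suggest_num_elementary_domains(features).items():
    print(m, s)
```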
3We used the digits data from https://github.com/FengHZ/KD3A [last accessed on 2022-05-17, available under MIT License.] published in Feng et al. (2020).
As for the parameter σ, we resort to the median heuristic proposed in Muandet et al. (2016), that is, σ2 = median{∥x̃i − x̃j∥2 : i, j = 1, . . . , n}. While both heuristics require a pre-trained FE, cross-validation can act as a reasonable alternative. The hyper-parameters relevant for the DG layer are summarized in Table 6. In the FT setting, we applied the median heuristic presented above to estimate σ of the Gaussian kernel function, where the estimator is denoted as σ̂. Since the median heuristic is not applicable in the E2E scenario, σ was fixed to 7.5 for E2E.
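A short sketch of the median heuristic (our own illustration; the subsampling size is a placeholder to keep the pairwise-distance computation cheap):

```python
import numpy as np
from scipy.spatial.distance import pdist

def median_heuristic_sigma2(X, max_points=2000, seed=0):
    """sigma^2 = median{ ||x_i - x_j||^2 : i < j }, estimated on a random subsample."""
    rng = np.random.default_rng(seed)
    if len(X) > max_points:                       # subsample to keep the O(n^2) cost manageable
        X = X[rng.choice(len(X), max_points, replace=False)]
    return np.median(pdist(X, metric="sqeuclidean"))

X = np.random.default_rng(1).normal(size=(5000, 32))   # stand-in for FE embeddings
print(median_heuristic_sigma2(X))
```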
Note that our approach to choosing the relevant parameters was kept very general to show the feasibility of the I.E.D. assumption and the generalization ability of GDUs and, most importantly, to provide easy-to-reproduce results. During training, additional epoch metrics can be subscribed to using our custom DG layer callback, which may help to choose the model parameters. Furthermore, we observed that the elementary domains become naturally orthogonal during the experiments, and thus we set λORTH relatively small. Since the orthogonal regularization adds computational burden, one could omit this term completely to speed up training.
Digit-DG Benchmark In previous research, the aforementioned digits data is used not only for domain adaptation (DA) but also for domain generalization (DG) methods. For the latter, Zhou et al. (2021b) and Li et al. (2021) introduced the Digit-DG dataset and an evaluation protocol to benchmark seven DG methods and ERM4. Unlike the Digits experiment described above, the Digit-DG dataset from Zhou et al. (2021b) and Li et al. (2021) consists of only four datasets (without USPS) and uses a different FE, summarized in Table 8. Therefore, we follow their instructions to conduct a fair comparison and ensure reproducibility. For the hyper-parameters, however, we kept the same values that we used for the Digits experiment (see Table 6).
4Results were reported by Zhou et al. (2021b) and Li et al. (2021). Of note, both authors did not report the standard deviation on their results.
As a first method, we consider CCSA (Classification and Contrastive Semantic Alignment), which learns a domain-invariant representation by utilizing the CCSA loss (Motiian et al., 2017). Second, MMD-AAE (Maximum Mean Discrepancy-based Adversarial Autoencoders) extends adversarial autoencoders by a maximum mean discrepancy regularization to learn a domain-invariant feature representation (Li et al., 2018b). CrossGrad (Cross-Gradient) augments data by perturbing the input space using the cross-gradients of a label and a domain predictor (Shankar et al., 2018). Another augmentation-based DG method is L2A-OT (Learning to Augment by Optimal Transport) (Zhou et al., 2021b). Specifically, a data generator trained to maximize the optimal transport distance between source and pseudo domains is used to augment the source data. All aforementioned methods rely on the availability of domain information such as domain labels. To benchmark our layer against a method for DG without domain information, we resort to the JiGen (Jigsaw puzzle based Generalization) method (Carlucci et al., 2019). JiGen introduces an auxiliary loss for solving a jigsaw task during training. Further, we use the adaptive and non-adaptive stochastic feature augmentation methods (SFA-A and SFA-S, respectively) proposed by Li et al. (2021). In principle, both methods augment the latent feature embedding of a FE using random noise.
Our results are summarized in Table 9. As noted by Li et al. (2021), it is challenging to outperform augmentation-based DG methods. In addition, SFA-A and SFA-S are computationally light (i.e., they only add random noise to the feature embedding) and do not require domain information (Li et al., 2021). Nevertheless, our layer achieves competitive results even against the strongest baselines in all DG tasks without requiring domain information.
C.2 ABLATION STUDY
C.2.1 MAIN COMPONENTS OF THE GATED DOMAIN UNIT
We chose the Digits dataset to conduct an ablation study, which is organized as follows: (1) ablation of the regularization terms presented in Section 3, (2) effect of the orthogonal regularization for projection-based generalization, and (3) effect on the FE's output.
As a reminder, we introduced the regularization to be dependent on the form of generalization (i.e., domain similarity measures or projection-based generalization, Section 3). For the domain similarity measure case, the regularization is
$$\Omega_D\big(\|g\|_{\mathcal{H}}\big) = \lambda_{OLS}\, \Omega_D^{OLS}\big(\|g\|_{\mathcal{H}}\big) + \lambda_{L1}\, \Omega_D^{L1}(\|\gamma\|), \tag{C.1}$$
where $\lambda_{OLS}, \lambda_{L1} \geq 0$. In the case of projection, the regularization is given by
$$\Omega_D\big(\|g\|_{\mathcal{H}}\big) = \lambda_{OLS}\, \Omega_D^{OLS}\big(\|g\|_{\mathcal{H}}\big) + \lambda_{ORTH}\, \Omega_D^{\perp}\big(\|g\|_{\mathcal{H}}\big) \tag{C.2}$$
with $\lambda_{OLS}, \lambda_{ORTH} \geq 0$. Although one can additionally choose the sparse regularization in projection-based generalization, we focus the ablation study on the two main regularization terms, namely the OLS and the orthogonal regularization. For (1), we vary the corresponding weights in Equation C.1 and Equation C.2 over the interval [0, 0.1] and display the mean classification accuracy for the most challenging classification task, MNIST-M, in the form of a heatmap. In Figures 6-8, we see that the classification accuracy remains at an overall similar level, which indicates that the DG layer is not very sensitive to the hyper-parameter change for MNIST-M as the test domain. Nevertheless, we observe that ablating the regularization terms by setting the corresponding weights to zero decreases the classification results, and the peaks in performance occur when the regularization is included during training of the DG layer.
Applying the DG layer comes with additional overhead, especially the regularization that ensures the orthogonality of the elementary domain bases. This additional effort raises the question of whether enforcing the theoretical assumptions outweighs the higher computational cost. Thus, in a second step, we analyze how the orthogonal regularization affects the orthogonality of the elementary domain bases (i.e., the spectral restricted isometry property (SRIP) value) and the loss function (i.e., the categorical cross-entropy).
In Figure 10, we depict the mean and standard deviation of the SRIP value and the loss over five runs for 40 epochs. The SRIP value can be tracked during training with the DG layer's callback functionalities. First, we observe that the elementary domains are almost orthogonal when initialized. Training the layer leads to a decrease in orthogonality during the first epochs. This initial decrease happens because the cross-entropy has a stronger influence on the optimization than the regularization in the first epochs. After five epochs, the cross-entropy decreases to a threshold at which the regularization becomes more effective and the orthogonality of the elementary domain bases increases again. In Figure 10, we also observe that ablating the orthogonal regularization, while leading to better orthogonality of the domains, does not significantly affect the overall cross-entropy during training.
Finally, we project the output of the FE trained with a dense layer (ERM) and with the DG layer by t-SNE (t-distributed stochastic neighbor embedding) in Figure 11. The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
C.2.2 INTERPRETATION OF THE ELEMENTARY DOMAINS
We analyze the learned elementary domains in the digits experiment based on two visualizations, choosing the maximum mean discrepancy (MMD) as the similarity measure and MNIST-M as the test domain. The first visualization depicts the MMD between the datasets (i.e., MNIST, MNIST-M, SVHN, USPS, and Synthetic Digits (SYN)) and the learned elementary domains (i.e., V1-V5) as a heatmap (Figure 12, left). The heatmap indicates that the source and test domains are close to one another in terms of the MMD. Hence, we expect that their closeness is reflected in the learning of the elementary domains. In other words, we expect each elementary domain to contribute similarly to the source and test domains (i.e., the coefficients β are similar for each of these domains). In Section 3.1, we derive the coefficients by applying a kernel softmax function to the negative MMD distances. Since the MMD distances between the source/test domains and the elementary domains are similar, the coefficients will be similar too. We conclude that the learned elementary domains represent the same distributional characteristics that existed among the source and test domains.
In the second visualisation, we show the t-SNE (t-distributed stochastic neighbor embedding) of the feature extractor output for each source and test domain alongside the elementary domains in Figure 12 (right). First, we observe that the learned elementary domain bases form distinctive clusters. We
see these clusters as a validation of our hypothesis that each GDU learns to mimic samples generated from a corresponding elementary distribution, as pointed out in Section 2.2. However, we cannot answer whether and where these elementary distributions occur in the real world. Moreover, these elementary distributions still lack interpretability.
In summary, the MMD heatmap and t-SNE embeddings of the learned elementary and source domains on Figure 12 indicate that the GDUs learn to represent distributional structures in the dataset.
C.3 ECG EXPERIMENT
We adopted the task of multi-label binary classification of 12-lead electrocardiogram (ECG) signals combined from 6 different sources, introduced in the PhysioNet/Computing in Cardiology Challenge 20205 (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020). Each ECG recording is annotated with 24 binary labels indicating whether or not a certain cardiac abnormality is present. The data is aggregated from 6 different databases and contains 43,101 recordings with various sampling frequencies, numbers of subjects, and lengths. Table 10 summarizes the most important details about the data sources for this experiment.

5https://physionetchallenges.org/2020/ [last accessed on 2021-03-10, available under Creative Commons Attribution 4.0 International Public License].
According to the original challenge score, we measure the performance in terms of the generalized Intersection-over-Union (IoU) score where partial credit is assigned to misdiagnoses that result in similar treatments or outcomes. The score is defined as
$$\text{score} := \frac{y^{\top} \cdot W \cdot \hat{y}}{|y \cup \hat{y}|}, \tag{C.3}$$
where y, ŷ ∈ {0, 1}24 represent the actual and predicted labels, and W stands for the partial-credit assignment matrix provided as part of the challenge description. Note that if W is the identity matrix, the score is exactly the Intersection-over-Union (IoU) score. The score is then adjusted for a solution ymajority, which always predicts the normal/majority class, and is moreover normalized
[Figure 12: MMD heatmaps (left) and t-SNE embeddings (right) relating the source/test domains (MNIST, MNIST-M, SVHN, SYN, USPS) to the learned elementary domain bases (V0-V4), shown for the MMD-, CS-, and projection-based variants with MNIST-M as the test domain. The numerical heatmap entries and axis ticks are omitted here.]
for the perfect solution y. Therefore, the final score can take negative values, has a best possible score of 1, and is formalized as
$$\text{adjusted score} := \frac{\text{score}(y, \hat{y}) - \text{score}(y, y_{\text{majority}})}{\text{score}(y, y) - \text{score}(y, y_{\text{majority}})}. \tag{C.4}$$
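To make the scoring concrete, the following sketch (our own reading of Equations C.3-C.4, not the official challenge implementation, which aggregates scores differently) evaluates the partial-credit score per recording and then applies the adjustment; the label arrays, the identity weight matrix, and the index of the normal class are placeholders.

```python
import numpy as np

def generalized_iou(y, y_hat, W):
    """Eq. (C.3) for one recording: partial-credit score normalized by |y ∪ y_hat|."""
    union = np.sum(np.maximum(y, y_hat))
    return (y @ W @ y_hat) / union if union > 0 else 0.0

def adjusted_score(Y, Y_hat, W, normal_idx):
    """Eq. (C.4): adjust for the always-normal solution, normalize by the perfect one."""
    Y_majority = np.zeros_like(Y)
    Y_majority[:, normal_idx] = 1                                  # always predict the normal class
    s      = np.mean([generalized_iou(y, p, W) for y, p in zip(Y, Y_hat)])
    s_maj  = np.mean([generalized_iou(y, p, W) for y, p in zip(Y, Y_majority)])
    s_best = np.mean([generalized_iou(y, y, W) for y in Y])
    return (s - s_maj) / (s_best - s_maj)

# Toy example with 24 classes; W would be the challenge-provided credit matrix (identity here).
rng = np.random.default_rng(0)
Y     = (rng.random((100, 24)) < 0.1).astype(float)
Y_hat = (rng.random((100, 24)) < 0.1).astype(float)
print(adjusted_score(Y, Y_hat, np.eye(24), normal_idx=0))
```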
As a pre-processing step, we down-sampled all signals to 125 Hz and applied Z-score normalization, random amplification, and random stretching according to Vicar et al. (2020). For that, we partially adopted the code provided by the authors6. Additionally, we cropped each signal to its first 15,000 points if the signal was too long (mostly applied to the INCART database). Each dataset was randomly split into train and validation parts with a 3:1 ratio. During each experiment, we used the train splits of 5 databases for training and utilized the validation splits of the training databases for early stopping. The held-out 6th database was used for inference and testing only.
Table 11 describes the architecture of the FE used for this task. Since the provided ECG recordings have different lengths, we used TensorFlow's padded batching, which pads all recordings in a batch to the length of the longest sequence in that batch. Therefore, inputs from different batches can have different lengths, so the spatial dimensions of the 1D-convolutional layers are not predefined and are denoted by *.
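A minimal sketch of this batching strategy (our own illustration; the toy generator stands in for the actual ECG loader, and the shapes follow the 12-lead, 24-label setup described above):

```python
import numpy as np
import tensorflow as tf

def toy_ecg_generator(n=100, n_leads=12, n_labels=24, seed=0):
    """Yields (signal, labels) pairs with variable-length 12-lead signals."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        length = rng.integers(500, 2000)
        yield (rng.normal(size=(length, n_leads)).astype(np.float32),
               rng.integers(0, 2, size=n_labels).astype(np.float32))

dataset = tf.data.Dataset.from_generator(
    toy_ecg_generator,
    output_signature=(tf.TensorSpec(shape=(None, 12), dtype=tf.float32),
                      tf.TensorSpec(shape=(24,), dtype=tf.float32)),
)

# Pad every recording only up to the longest recording of its own batch.
batched = dataset.padded_batch(64, padded_shapes=((None, 12), (24,)))
for signals, labels in batched.take(1):
    print(signals.shape, labels.shape)   # (64, <max length in this batch>, 12) and (64, 24)
```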
[Table 11: feature extractor architecture for the ECG experiment — layer type and output shape per layer.]
We used the Adam optimizer to minimize a weighted binary cross-entropy loss defined as $-\big(w_{\text{pos}} \cdot y \cdot \log \hat{y} + (1 - y) \cdot \log(1 - \hat{y})\big)$. The positive weights wpos are defined per class based on the training-split data, inversely proportional to the frequency of positive labels for each class. The learning rate was initially set to 0.001 and reduced during training by a factor of 0.2 if the training loss had not improved for 10 epochs. We also applied early stopping and restored the model weights to the best model according to the validation accuracy after training ended. Since the input samples in this experiment are larger than in the previous one, we decreased the batch size to 64. Each ECG experiment was performed on a single GPU (Nvidia GTX 1080 Ti). The parameters relevant for the DG layer are summarized in Table 12. We emphasize that we did not perform extensive hyper-parameter tuning since our goal was to show the feasibility of the I.E.D. assumption and GDUs while keeping the experiments reproducible.
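A sketch of this weighted loss (our own illustration; the exact per-class weighting constant is not specified in the text, so the simple inverse-frequency choice below is an assumption, as are the array shapes):

```python
import numpy as np
import tensorflow as tf

def positive_weights(train_labels):
    """w_pos per class, inversely proportional to the frequency of positive labels (one simple choice)."""
    pos_freq = train_labels.mean(axis=0)
    return 1.0 / np.clip(pos_freq, 1e-6, None)

def weighted_bce(y_true, y_pred, w_pos, eps=1e-7):
    """-(w_pos * y * log(p) + (1 - y) * log(1 - p)), averaged over classes and batch."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    loss = -(w_pos * y_true * tf.math.log(y_pred)
             + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(loss)

train_labels = (np.random.default_rng(0).random((1000, 24)) < 0.1).astype(np.float32)
w_pos = tf.constant(positive_weights(train_labels), dtype=tf.float32)
y_true = tf.constant(train_labels[:8])
y_pred = tf.random.uniform((8, 24))
print(float(weighted_bce(y_true, y_pred, w_pos)))
```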
6https://github.com/tomasvicar/BUTTeam [last accessed on 2022-05-17, available under BSD 2-Clause License].
C.4 WILDS BENCHMARKING EXPERIMENTS
For comparison of our approach and benchmarking, we followed the standard procedure of the WILDS experiments described in Koh et al. (2021). As a technical note, all WILDS experiments have been implemented in PyTorch (version >= 1.7.0) based on the specifications made in Koh et al. (2021) and their code published at https://github.com/p-lambda/wilds [last accessed on 2022-05-17, available under MIT License]. The results for the benchmarks were retrieved from the official leaderboard https://wilds.stanford.edu/leaderboard/ [last accessed on 2022-09-26].
Camelyon17 In medical applications, the goal is to apply models trained on a comparatively small set of hospitals to a larger number of hospitals. For this application, we study images of tissue slides under a microscope to determine whether or not a patient has cancer. Shifts in patient populations, slide staining, and image acquisition can impede model accuracy in previously unseen hospitals. Camelyon17 comprises images of tissue patches from five different hospitals. While the first three hospitals are the source domains (302,436 examples), the fourth and fifth are the validation (34,904 examples) and test domains (85,054 examples), respectively.
We deviate from the specifications made in Koh et al. (2021) in terms of the FE. We use the FE from Feng et al. (2020); Peng et al. (2019), since we observed a higher mean accuracy and faster training than with the DenseNet-121 FE originally proposed by Koh et al. (2021) (Huang et al., 2017). We trained the FE from scratch. Both ERM and the DG layer were trained over 250 epochs with early stopping and a learning rate of 0.001, which is reduced by a factor of 0.2 if the cross-entropy loss has not improved after 10 epochs. All results were aggregated over ten runs.
FMoW Analyzing satellite images with machine learning (ML) models may enable novel possibilities in tackling global sustainability and economic challenges such as population density mapping and deforestation tracking. However, satellite imagery changes over time due to human behavior (e.g., infrastructure development), and the extent of change differs between regions. The Functional Map of the World (FMoW) dataset consists of satellite images from different continents and years: training (76,863 images; 2002-2013), validation (19,915 images; 2013-2016), and test (22,108 images; 2016-2017). The objective is to determine one of 62 building or land-use types (e.g., shopping malls).
As instructed in Koh et al. (2021), we used a DenseNet-121 pre-trained on ImageNet without L2-regularization. For the optimization, we use the Adam optimizer with a learning rate of 1e-4, which is decayed by a factor of 0.96 per epoch. The models were trained for 50 epochs with early stopping and a batch size of 64. Additionally, we report the worst-region accuracy, which is a metric specific to FMoW. This worst-region accuracy reports the worst accuracy across the following regions: Asia, Europe, Africa, America, and Oceania (see Koh et al. (2021) for details). Again, we report the results over three runs.
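An abbreviated sketch of this backbone and optimizer setup (our own illustration, not the exact WILDS training script; the 62-way head follows the task description above):

```python
import torch
import torchvision

# DenseNet-121 pre-trained on ImageNet, with a 62-way classification head for FMoW.
model = torchvision.models.densenet121(pretrained=True)
model.classifier = torch.nn.Linear(model.classifier.in_features, 62)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)            # no L2 regularization
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)

for epoch in range(50):
    # ... one training epoch over the FMoW loader goes here ...
    scheduler.step()                                                  # decay the lr by 0.96 per epoch
```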
Amazon Recent research shows that consumer-facing machine learning applications exhibit large performance disparities across different sets of users. To study these performance disparities, WILDS (Koh et al., 2021) leverages a variant of the Amazon Review dataset. The Amazon-WILDS dataset is composed of data from 3,920 domains (reviewers), and the task is multi-class sentiment classification, where the model receives a review text and has to predict the rating from one to five.
To split this dataset, disjoint sets of reviewers are used for training, validation, and test: training (245,502 reviews from 1,252 reviewers), validation (100,050 reviews from 1,334 reviewers), and test (100,050 reviews from 1,334 reviewers).
For the experiments and baseline models | 1. What is the focus and contribution of the paper on domain generalization?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its assumption and evaluation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's methodology or assumptions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Based on an assumption about invariant elementary distributions, the authors proposed a method, namely Gated Domain Units, to recover those elementary distributions and leverage them for the domain generalization problem.
Strengths And Weaknesses
In my opinion, the main weaknesses of the paper are the assumption of I.E.D (which the authors did discuss briefly) and the evaluation of the method. Details are in the section below.
Clarity, Quality, Novelty And Reproducibility
I find it hard to judge the novelty of the method, given that it is mainly based on the I.E.D assumption. We need more discussion in the rebuttal to understand the assumption. |
ICLR | Title
Gated Domain Units for Multi-source Domain Generalization
Abstract
Distribution shift (DS) is a common problem that deteriorates the performance of learning machines. To tackle this problem, we postulate that real-world distributions are composed of elementary distributions that remain invariant across different environments. We call this an invariant elementary distribution (I.E.D.) assumption. The I.E.D. assumption implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. To exploit this property in domain generalization (DG), we developed a modular neural network layer that consists of Gated Domain Units (GDUs). Each GDU learns an embedding of an individual elementary distribution that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between an observation and each of the corresponding elementary distributions which are then used to form a weighted ensemble of learning machines. Because our layer is trained with backpropagation, it can naturally be integrated into existing deep learning frameworks. Our evaluation on image, text, graph, and time-series data shows a significant improvement in the performance on out-of-training target domains without domain information and any access to data from the target domains. This finding supports the practicality of the I.E.D. assumption and demonstrates that our GDUs can learn to represent these elementary distributions.
1 INTRODUCTION
A fundamental assumption in machine learning is that training and test data are independently and identically distributed (I.I.D.). This assumption ensures consistency-results from statistical learning theory, meaning that the learning machine obtained from an empirical risk minimization (ERM) attains the lowest achievable risk as sample size grows (Vapnik, 1998; Schölkopf, 2019). Unfortunately, a considerable amount of research and real-world applications in the past decades has provided a staggering evidence against this assumption (Zhao et al., 2018; 2020; Ren et al., 2019; Taori et al., 2020) (see D’Amour et al. (2020) for case studies). The violation of the I.I.D. assumption is usually caused by a distribution shift (DS) and can result in inconsistent learning machines (Sugiyama & Kawanabe, 2012), implying the loss of performance guarantee of machine learning models in the real world. Therefore, to tackle DS, recent work advocates for domain generalization (DG) (Blanchard et al., 2011; Muandet et al., 2013; Li et al., 2017; 2018b; Zhou et al., 2021a). This generalization to utterly unseen domains is crucial for robust deployment of the models in practice, especially when new, unforeseeable domains emerge after model deployment. However, the most important question that DG seeks to answer is how to identify the right invariance that allows for generalization.
The contribution of this work is twofold. First, we advocate that real-world distributions are composed of smaller “units” called invariant elementary distributions that remain invariant across different domains; see Section 2.1. Second, we propose to implement this hypothesis through so-called gated domain units (GDUs). Specifically, we developed a modular neural network layer that consists of GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to express the domain similarities during training. For this purpose, we adopt the theoretical framework of reproducing kernel Hilbert space (RKHS) to retrieve a geometrical representation of each distribution in the form of a kernel mean embedding (KME) without information loss (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Sriperumbudur et al., 2010; Muandet et al., 2017). This representation accommodates methods based on analytical geometry to measure similarities between distributions.
We show that these similarity measures can be learned and utilized to improve the generalization capability of deep learning models to previously unseen domains.
The remainder of this paper is organized as follows: Our theoretical framework is laid out in Section 2 with our modular DG layer implementation shown in Section 3. In Section 4, we outline related work. Our experimental evaluations are presented in Section 5. Finally, we discuss potential limitations of our approach and future work in Section 6.
2 DOMAIN GENERALIZATION WITH INVARIANT ELEMENTARY DISTRIBUTIONS
We assume a mixture component shift for the multi-source DG setting. This shift refers to the most common DS stating that the data is made up of different sources, each with its own characteristics, and their proportions vary between the training and test scenario (Quinonero-Candela et al., 2022). Our work thus differs in the assumption from related work in DG, in which the central assumption is the covariate shift (i.e., the conditional distribution of the source and test data stays the same) (David et al., 2010). In the following, let X and Y be the input and output space, with a joint distribution P. We are given a set of D labeled source datasets {Dsi }Di=1 with Dsi ⊆ X × Y . Each of the source datasets is assumed to be I.I.D. generated by a joint distribution Psi with support on X ×Y , henceforth denoted domain. The set of probability measures with support on X × Y is denoted by P . The multi-source dataset Ds comprises the merged individual source datasets {Dsj}Dj=1. We aim to minimize the empirical risk, see Section 3.3 for details. Important notation is summarized in Table 1.
2.1 INVARIANT ELEMENTARY DISTRIBUTIONS
Similar to Mansour et al. (2009; 2012); Hoffman et al. (2018a), we assume that the distribution of the source dataset can be described as a convex combination $\mathbb{P}^s = \sum_{j=1}^{D} \alpha_j^s \mathbb{P}_j^s$, where $\alpha^s = (\alpha_1^s, \ldots, \alpha_D^s)$ is an element of the probability simplex, i.e., $\alpha^s \in \Delta_D := \{\alpha \in \mathbb{R}^D \mid \alpha_j \geq 0 \wedge \sum_{j=1}^{D} \alpha_j = 1\}$. In other words, $\alpha_j$ quantifies the contribution of each individual source domain to the combined source domain.
In contrast, we generalize their problem descriptions: We express the distribution of each domain as a convex combination of K elementary distributions {Pj}Kj=1 ⊂ P , meaning that Ps = ∑K j=1 αjPj where α ∈ ∆K . Our main assumption is that these elementary distributions remain invariant across the domains. The advantage is that we can find an invariant subspace at a more elementary level, as opposed to when we consider the source domains as some sort of basis for all unseen domain. Figure 1 illustrates this idea.
Theoretically speaking, the I.E.D assumption is appealing because it implies the invariant structure in the solution space, as shown in the following lemma. The proof is given in Appendix A.1. Lemma 1. Let L : Y × Y → R+ be a non-negative loss function, F a hypothesis space of functions f : X → Y , and Ps(X,Y ) a data distribution. Suppose that the I.E.D assumption holds, i.e., there exist K elementary distributions P1, . . . ,PK such that any data distribution can be expressed as Ps(X,Y ) = ∑K j=1 αjPj(X,Y ) for some α ∈ ∆K . Then, the corresponding Bayes predictor f∗ ∈ argminf∈F E(X,Y )∼P[L(Y, f(X))] is Pareto-optimal with respect to a vector of elementary risk functionals (R1, . . . , RK) : F → RK+ where Rj(f) := E(X,Y )∼Pj [L(Y, f(X))].
Lemma 1 implies that, under the I.E.D assumption, Bayes predictors must belong to a subspace of F called the Pareto set FPareto ⊂ F which consists of Pareto-optimal models. The model f is said to be Pareto-optimal if there exists no g ∈ F such that Rj(g) ≥ Rj(f) for all j ∈ {1, . . . ,K} with Rj(g) > Rj(f) for some j; see, e.g., Sener & Koltun (2018, Definition 1). In other words, the I.E.D assumption allows us to translate the invariance property of data distributions to the solution space. Since Bayes predictors of all future test domains must lie within the Pareto set, which is a
strict subset of the original hypothesis space, it is still possible to identify the optimal predictors of future test domains, even without additional data from the test domains, except the I.E.D. assumption itself. Hence, given data from the training domains, it is sufficient for the purpose of generalization to maintain only solutions within this Pareto set during the training time.
Unfortunately, neither the elementary distributions nor the weights α are known in practice. Motivated by this theoretical insight, our DG layer presented in Section 3 is designed to uncover them from a multi-source training dataset Ds. While Lemma 1 shows the theoretical appeal of the I.E.D. assumption, we discuss below a situation in which it might hold in practice. The limitations will be discussed later in Section 6.
Real-world example. In this work, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution Pj . For a previously unseen individual, we can then determine the coefficients αsj and quantify each factor’s contribution without any information about the individual’s origin.
2.2 KERNEL MEAN EMBEDDING OF DISTRIBUTIONS
We leverage the KME of distributions (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Muandet et al., 2017) to discover the elementary distributions and evaluate similarities between them. Let H be a reproducing kernel Hilbert space (RKHS) of real-valued functions on X with a reproducing kernel k : X × X → R (Schölkopf et al., 2001). The KME of a probability measure P ∈ P in the RKHS H is defined by a mapping ϕ(P) = µP := ∫ X k(x, ·) dP(x). We assume that the kernel k is characteristic, i.e., the mapping µP is injective (Fukumizu et al., 2004; Sriperumbudur et al., 2008). Theoretically, this essential assumption ensures that there is no information loss when mapping the distribution into H. Given the samples {x1, . . . , xn} generated I.I.D. from P, µP can be approximated by the empirical KME µ̂P = (1/n) ∑n i=1 k(xi, ·) = (1/n) ∑n i=1 ϕ(xi). We refer non-expert readers to Muandet et al. (2017) for a thorough review on this topic.
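To make the empirical KME concrete, the following sketch (our own illustration with a Gaussian kernel; all names and sample sizes are placeholders) evaluates the empirical embedding at arbitrary points and uses the same kernel sums to estimate the squared MMD between two samples — the quantity that later serves as a distance between an observation and an elementary domain basis.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all row pairs of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def empirical_kme(X, sigma=1.0):
    """Returns the function t -> (1/n) sum_i k(x_i, t), i.e. the empirical KME evaluated at t."""
    return lambda T: gaussian_kernel(np.atleast_2d(T), X, sigma).mean(axis=1)

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of || mu_X - mu_Y ||_H^2 from two samples."""
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(1.5, 1.0, size=(500, 2))          # a shifted sample
print(empirical_kme(X)(np.zeros(2)), mmd2(X, X), mmd2(X, Y))
```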
Challenges. Figure 1 depicts two challenges that come with our assumption of elementary distributions. First, since we do not have access to the samples from the hidden elementary distributions, the elementary KME cannot be estimated directly from the samples at hand. To overcome this challenge, we instead seek a proxy KME $\mu_{V_j} := \frac{1}{N}\sum_{k=1}^{N} \phi(v_k^j) = \frac{1}{N}\sum_{k=1}^{N} k(v_k^j, \cdot)$ for each elementary KME $\mu_{\mathbb{P}_j}$ from a domain basis $V_j$, where $V_j = \{v_1^j, \ldots, v_N^j\} \subseteq \mathcal{X}$ for all $j \in \{1, \ldots, M\}$. Hence, the KME $\mu_{V_j}$ can be interpreted as the KME of the empirical probability measure $\hat{\mathbb{P}}_{V_j} = \frac{1}{N}\sum_{k=1}^{N} \delta_{v_k^j}$. Here, we assume that M = K. The sets $V_j$ are referred to as elementary domain bases. Intuitively, the elementary domain bases $V_1, \ldots, V_M$ represent each elementary distribution by a set of vectors that mimic samples generated from the corresponding distribution. In Figure 1, $V_1$ and $V_2$ as well as their mapping in $\mathcal{H}$ visualize this first challenge. The second challenge is the objective of learning the unknown similarity between a single sample $x_i$ and an elementary domain $V_j$, which we denote by $\beta_{ij}$. Considering the advantage of KMEs, namely that we can tackle this challenge from a geometrical viewpoint, we quantify similarities between KMEs. For example, in Figure 1, the similarity between $\phi(x_i)$ and $\mu_{V_1}$ ($\beta_{i1}$) and $\mu_{V_2}$ ($\beta_{i2}$) could be quantified as their distance or angle. These similarity coefficients enable our Domain Generalization Layer to represent a convex combination of elementary domain-specific learning machines, commonly known as ensembles. We introduce our layer in the following Section 3.
3 DOMAIN GENERALIZATION LAYER
This section aims to transfer the theoretical ideas presented in Section 2 into a deep learning framework. For the purpose of implementation, let x ∈ Rh×w denote the input data point and hξ : Rh×w → Re the feature extractor (FE) that maps the input into a low-dimensional representation x̃ ∈ Re. Then the prediction layer gθ : Re → Y infers the label y. To tackle the DG problem, we introduce a layer module called the gated domain unit (GDU). A GDU consists of three main components: (1) a similarity function γ : H×H → R that is the same for all elementary domains, (2) an elementary basis Vj and (3) a learning machine f(x̃, θj) for each elementary domain j ∈ {1, . . . ,M}. The architecture of the layer proposed herein is depicted in Figure 2.
Essentially, the process is as follows: First, the j-th GDU takes $\tilde{x}_i$ as an input and yields $\beta_{ij}$ as an output. The KME of each domain basis $V_j$ is required in order to apply $\gamma$ to compute the similarity between $\tilde{x}_i$ and $V_j$. These KMEs are obtained by $\phi(V_j) := \mu_{V_j} = \frac{1}{N}\sum_{k=1}^{N} \phi(v_k^j) = \frac{1}{N}\sum_{k=1}^{N} k(v_k^j, \cdot)$. The GDU, therefore, has the task of allocating coefficients $\beta_{ij}$ for each elementary domain based on a similarity function $\gamma$. The function $\gamma$ outputs the coefficients $\beta_{ij} = \gamma(\phi(\tilde{x}_i), \mu_{V_j})$, which in turn represent the similarity between the KMEs of the corresponding domain basis $V_j$ and the input $\tilde{x}_i$. Theoretically speaking, $\mu_{V_j}$ and the feature mapping $\phi(\tilde{x}_i)$ are elements of the associated RKHS $\mathcal{H}$, which allows us to evaluate similarities of non-linear features in a higher-dimensional feature space. Each GDU is then connected to a learning machine $f(\tilde{x}_i, \theta_j)$ that yields an elementary domain-specific inference. The final prediction of the layer is then an ensemble of these learning machines, $g_{\theta}(\tilde{x}_i) = \sum_{j=1}^{M} \beta_{ij} f(\tilde{x}_i, \theta_j)$, where $\theta = (\theta_1, \ldots, \theta_M)$. In Figure 2, we give an overview of how data is processed and information is stored in the GDU.
In summary, GDUs leverage the invariant elementary distribution (I.E.D.) assumption and represent our algorithmic contribution: The elementary domain bases are stored as weights in the layer. Storing information as a weight matrix (i.e., domain memory) allows to learn the elementary domain bases efficiently using backpropagation. Hence, we avoid the dependency on problem-adaptive methods (e.g., domain-adversarial training) and domain information (e.g., domain labels).
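To summarize the data flow, the following condensed sketch (our own, in plain NumPy with linear per-domain learning machines and a Gaussian kernel; all shapes and names are illustrative, not the released implementation) runs one forward pass of the layer: it computes the gating coefficients β from the similarities between the feature embedding and the stored elementary domain bases, and combines the per-domain predictions into an ensemble.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def gdu_forward(x_tilde, V, theta, sigma=1.0, kappa=1.0):
    """x_tilde: (b, e) FE output; V: (M, N, e) domain bases; theta: (M, e, c) per-domain weights."""
    b, M = x_tilde.shape[0], V.shape[0]
    # Similarity H = -||phi(x_i) - mu_Vj||_H (negative MMD) for every sample and domain.
    H = np.empty((b, M))
    for j in range(M):
        k_xV = gaussian_kernel(x_tilde, V[j], sigma).mean(axis=1)   # <phi(x_i), mu_Vj>
        k_VV = gaussian_kernel(V[j], V[j], sigma).mean()            # ||mu_Vj||^2
        H[:, j] = -np.sqrt(np.maximum(1.0 - 2.0 * k_xV + k_VV, 0.0))  # k(x, x) = 1 for this kernel
    # Kernel softmax (Eq. 1) turns the similarities into simplex weights beta.
    beta = np.exp(kappa * H - np.max(kappa * H, axis=1, keepdims=True))
    beta /= beta.sum(axis=1, keepdims=True)
    # Weighted ensemble of the elementary-domain learning machines f(x, theta_j).
    preds = np.einsum('be,mec->bmc', x_tilde, theta)                # per-domain predictions
    return np.einsum('bm,bmc->bc', beta, preds), beta

rng = np.random.default_rng(0)
out, beta = gdu_forward(rng.normal(size=(4, 8)), rng.normal(size=(5, 10, 8)),
                        rng.normal(size=(5, 8, 3)))
print(out.shape, beta.sum(axis=1))   # (4, 3) and all ones
```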
3.1 DOMAIN SIMILARITY MEASURES
For the similarity function $\gamma$, we consider two similarity measures $H(\phi(\tilde{x}), \mu_{V_j})$, namely the cosine similarity (CS) (Kim et al., 2019) and the maximum mean discrepancy (MMD) (Borgwardt et al., 2006; Gretton et al., 2012). To ensure that the resulting coefficients $\beta_i$ lie on the probability simplex, we apply the kernel softmax function (Gao et al., 2019) and interpret its output as the similarity between an observation $\tilde{x}$ and an elementary domain basis $V_i$. We get
$$\beta_{ij} = \gamma(\phi(\tilde{x}_i), \mu_{V_j}) = \frac{\exp\big(\kappa\, H(\phi(\tilde{x}_i), \mu_{V_j})\big)}{\sum_{k=1}^{M} \exp\big(\kappa\, H(\phi(\tilde{x}_i), \mu_{V_k})\big)}, \tag{1}$$
where $\kappa > 0$ is a positive softness parameter for the kernel softmax. Geometrically speaking, these similarities correspond to the angle and distance of two KMEs in the RKHS $\mathcal{H}$. The function $\phi$ maps the observation $\tilde{x}$ and domain basis $V_j$ into $\mathcal{H}$, meaning that $\phi(\tilde{x}) = \mu_{\delta_{\tilde{x}}} = k(\tilde{x}, \cdot)$ is the KME of a Dirac measure $\delta_{\tilde{x}}$ and $\phi(V_j) = \mu_{V_j} = \frac{1}{N}\sum_{k=1}^{N} k(v_k^j, \cdot)$.
CS. The CS function $H(\phi(\tilde{x}_i), \mu_{V_j}) = \dfrac{\langle \phi(\tilde{x}_i), \mu_{V_j} \rangle_{\mathcal{H}}}{\| \phi(\tilde{x}_i) \|_{\mathcal{H}}\, \| \mu_{V_j} \|_{\mathcal{H}}}$ is used as an angle-based similarity.
MMD. We consider the MMD for calculating a distance-based similarity measure. The distance is then given as ∥ϕ(x̃i) − µVj∥H. Subsequently, the similarity function H is the negative MMD: H(ϕ(x̃i), µVj ) = −∥ϕ(x̃i)−µVj∥H. The intuition behind the negative MMD is to put higher weights on samples that are closer to the KME of an elementary domain basis.
3.2 PROJECTION-BASED GENERALIZATION
For classification tasks, we introduce an alternative approach to infer the $\beta_i$ coefficients that is based on the idea of kernel sparse coding (Gao et al., 2010; 2013). Herein the goal is to find an approximated representation of each feature mapping $\phi(\tilde{x}_i)$ using the elements of a dictionary $\{\mu_{V_j}\}_{j=1}^{M}$. This approach allows us to approximate the feature mapping with these elements by $\phi(\tilde{x}_i) \approx \sum_{j=1}^{M} \beta_{ij} \mu_{V_j}$. In contrast to the aforementioned approaches, an elementary domain KME $\mu_{V_j}$ does not necessarily represent the KME of an elementary distribution $\mu_{\mathbb{P}_j}$. Therefore, we present another approach that aims to find a set $\{\mu_{V_j}\}_{j=1}^{M}$ that permits $\mu_{\mathbb{P}^s}$ to be represented as a linear combination. Since $\mathbb{P}$ is assumed to be a convex combination of elementary distributions, we can find a linear combination to represent $\mu_{\mathbb{P}^s}$ by the domain KMEs $\mu_{V_j}$, as long as $\mu_{\mathbb{P}^s} \in \mathcal{H}_M := \mathrm{span}\{\mu_{V_j} \mid j = 1, \ldots, M\}$. The RKHS $\mathcal{H}_M$ is a subspace of the actual RKHS $\mathcal{H}$, which allows us to represent elements of $\mathcal{H}$ at least approximately in the subspace $\mathcal{H}_M$. By keeping $\mathcal{H}_M$ large, we gain more representative power. To make $\mathcal{H}_M$ as large as possible, we have to ensure its spanning elements are linearly independent or, even better, orthogonal. Orthogonal KMEs ensure two desirable properties. First, pairwise orthogonal elements in $\mathcal{H}_M$ guarantee no redundancy. Second, having orthogonal elements allows us to make use of the orthogonal projection. This projection geometrically yields the best approximation of $\phi(\tilde{x})$ in $\mathcal{H}_M$. In other words, we can achieve the best possible approximation of the feature mapping by using its orthogonal components (see Proposition 3.1). The orthogonal projection is given by
$$\Pi_{\mathcal{H}_M} : \mathcal{H} \to \mathcal{H}_M, \qquad \phi(\tilde{x}) \mapsto \sum_{j=1}^{M} \frac{\langle \phi(\tilde{x}), \mu_{V_j} \rangle_{\mathcal{H}}}{\| \mu_{V_j} \|_{\mathcal{H}}^2}\, \mu_{V_j}. \tag{2}$$
Proposition 3.1. For a KME $\mu_{\mathbb{P}}$ of a given mixture distribution $\mathbb{P}$, suppose that $\mu_{\mathbb{P}} \in \mathrm{span}\{\mu_{V_j} \mid j = 1, \ldots, M\}$, where $\langle \mu_{V_i}, \mu_{V_j} \rangle_{\mathcal{H}} = 0$ for all $i \neq j$ (i.e., the KMEs of the elementary domain bases are pairwise orthogonal). Then the value of the function $\sum_{j=1}^{M} \| \mu_{\mathbb{P}} - \beta_j \mu_{V_j} \|_{\mathcal{H}}^2$ is minimal if the coefficients are set as $\beta_j^* = \langle \mu_{\mathbb{P}}, \mu_{V_j} \rangle_{\mathcal{H}} / \| \mu_{V_j} \|_{\mathcal{H}}^2$.
Proposition 3.1 can be used to approximate µP by projecting it into HM , i.e., µP ≈ ∑_{j=1}^M βjµVj with βj = ⟨µP, µVj ⟩H / ∥µVj ∥²H. This best-approximation property is the main advantage of the orthogonality assumption in Proposition 3.1 and thus a potential advantage of projection-based DG. Appendix A.2 provides the proof of Proposition 3.1.
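Under the same assumptions, the projection-based coefficients of Equation 2 reduce to a ratio of kernel inner products. The short sketch below reuses the illustrative gaussian_kernel helper from the previous snippet; it is a sketch, not the released implementation.

```python
# Illustrative sketch of Eq. (2): projection-based coefficients (no softmax).
import torch

def projection_coefficients(x, V, sigma=7.5):
    """x: (b, d) batch of FE outputs; V: (M, N, d) elementary domain bases."""
    M = V.shape[0]
    dots = torch.stack([gaussian_kernel(x, V[j], sigma).mean(dim=1) for j in range(M)], dim=1)  # (b, M)
    norms_sq = torch.stack([gaussian_kernel(V[j], V[j], sigma).mean() for j in range(M)])       # (M,)
    return dots / (norms_sq.unsqueeze(0) + 1e-12)   # weights need not sum to one
```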
3.3 MODEL TRAINING
For model training, we adapt the domain adaptation (DA) framework from Zhuang et al. (2021). Thus, our learning objective is formalized as L(g) + λDΩD(∥g∥H). The goal of the training can be described in terms of the two components of this function. Consider a batch of training data {x1, . . . , xb}, where b is the batch size. During training, we minimize the task loss L(g) = (1/b) ∑_{i=1}^b L(ŷi, yi) = (1/b) ∑_{i=1}^b L(∑_{j=1}^M γ(ϕ(x̃i), µVj )fj(x̃i), yi) for the underlying task and the respective batch. In addition, our objective is that the model learns to distinguish between different domains. Thus, the regularization ΩD is introduced to control the domain bases. In our case, we require ΩD to ensure that the KMEs of the elementary domain bases are able to represent the KMEs of the elementary domains. Therefore, we minimize the MMD between the feature mappings ϕ(x̃i) and the associated representations ∑_{j=1}^M βijµVj , where βij = γ(ϕ(x̃i), µVj ). Hence, the regularization ΩD = Ω^OLS_D is defined as Ω^OLS_D(∥g∥H) = (1/b) ∑_{i=1}^b ∥ϕ(x̃i) − ∑_{j=1}^M βijµVj ∥²H (see Appendix B.2 for details). The intuition is to represent each feature mapping ϕ(x̃i) by the domain KMEs µVj , so we minimize the MMD between the feature map and a combination of the µVj . The minimum of this regularization can be interpreted as the ordinary least squares solution of a regression of ϕ(x̃i) onto the components of HM . In other words, we want to ensure that the bases Vj are contained in the feature mappings ϕ(x̃i). In the particular case of projection, we want the KMEs of the elementary domains to be orthogonal to ensure high expressive power. For this purpose, an additional term Ω⊥D is introduced to ensure the desired orthogonality. Considering a kernel function with k(x, x) = 1, orthogonality requires the Gram matrix Kij = ⟨µVi , µVj ⟩H to be close to the identity matrix I. There is a variety of methods for regularizing matrices available (Xie et al., 2017; Bansal et al., 2018). A well-known method to ensure orthogonality is the soft orthogonality (SO) regularization Ω⊥D = λ∥K − I∥²F (Bansal et al., 2018). As pointed out by Bansal et al. (2018), the spectral restricted isometry property (SRIP) and mutual coherence (MC) regularization can be promising alternatives to SO and are thus additionally implemented in the DG layer. Hence, in the case of projection, the regularization is given by ΩD(∥g∥H) = λOLS Ω^OLS_D(∥g∥H) + λORTH Ω⊥D(∥g∥H), with λOLS , λORTH ≥ 0.
Lastly, sparse coding is an efficient technique to find the least possible basis to recover the data subject to a reconstruction error (Olshausen & Field, 1997). Several such applications yield strong performances, for example in the field of computer vision (Lee et al., 2007; Yang et al., 2009). Kernel sparse coding transfers the reconstruction problem of sparse coding into H by using the mapping ϕ, and, by applying a kernel function, the reconstruction error is quantified as the inner product (Gao et al., 2010; 2013). To ensure sparsity, we apply the L1-norm on the coefficients β and add ΩL1D (∥γ∥) := ∥γ(ϕ(x̃i), µVj )∥1 to the regularization term ΩD with the corresponding coefficient λL1 . Appendix B.3 gives a visual overview of the model training.
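As a hedged illustration of the regularizers described in this section (a sketch of our reading, not a definitive implementation), the snippet below computes the Gram matrix of domain KMEs, the soft orthogonality penalty ∥K − I∥²F, and the L1 sparsity penalty on the coefficients; it reuses the illustrative gaussian_kernel helper from Section 3.1's snippet, and the λ values in the final comment are hypothetical.

```python
# Illustrative sketch of the Omega_D regularizers (soft orthogonality + L1 sparsity).
import torch

def domain_gram_matrix(V, sigma=7.5):
    """K_jk = <mu_{V_j}, mu_{V_k}>_H for elementary domain bases V: (M, N, d)."""
    M, N, d = V.shape
    flat = V.reshape(M * N, d)
    K_full = gaussian_kernel(flat, flat, sigma).reshape(M, N, M, N)
    return K_full.mean(dim=(1, 3))                     # (M, M)

def soft_orthogonality(V, sigma=7.5):
    K = domain_gram_matrix(V, sigma)
    I = torch.eye(K.shape[0], device=K.device)
    return ((K - I) ** 2).sum()                        # ||K - I||_F^2

def sparsity_penalty(beta):
    return beta.abs().sum(dim=1).mean()                # L1 penalty on the coefficients

# Total objective for one batch (lambda_* are hypothetical; the Omega_D^OLS term is
# sketched in Appendix B.2):
# loss = task_loss + lam_ols * omega_ols + lam_orth * soft_orthogonality(V) + lam_l1 * sparsity_penalty(beta)
```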
4 RELATED WORK
DG, also known as out-of-distribution (OOD) generalization, is among the hardest problems in machine learning (Blanchard et al., 2011; Muandet et al., 2013; Arjovsky et al., 2019). In contrast, DA, which predates DG and OOD problems, deals with a slightly simpler scenario in which some data from the test distribution are available (Ganin et al., 2015). Hence, based on the available data, the task is to develop learning machines that transfer knowledge learned in a source domain specifically to the target domain. Approaches pursued in DA can be grouped primarily into (1) discrepancy-based
DA (Sun et al., 2016; Peng & Saenko, 2018; Ben-David et al., 2010; Fang et al., 2020; Tzeng et al., 2014; Long et al., 2015; Baktashmotlagh et al., 2016) (2) adversary-based DA (Tzeng et al., 2017; Liu & Tuzel, 2016; Ganin et al., 2015; Long et al., 2018), and (3) reconstruction-based DA (Bousmalis et al., 2016; Hoffman et al., 2018b; Kim et al., 2017; Yi et al., 2017; Zhu et al., 2017; Ghifary et al., 2014). In DA, learning the domain-invariant components requires access to unlabeled data from the target domain. Unlike problems in DA, where the observed data from the test domains can be used to find the most appropriate invariant structures (Ben-David et al., 2010), the lack thereof in DG calls for a postulation of invariant structure that will enable the OOD generalization.
To enable generalization to unseen domains without any access to data from them, researchers have made significant progress in the past decade and developed a broad spectrum of methodologies (Zhou et al., 2021a;c; Li et al., 2019; Blanchard et al., 2011). For a thorough review see, e.g., Zhou et al. (2021a); Wang et al. (2021). Existing works can be categorized into methods based on domain-invariant representation learning (Muandet et al., 2013; Li et al., 2018b;d), meta-learning (Li et al., 2018a; Balaji et al., 2018), and data augmentation (Zhou et al., 2020), to name a few. Another recent stream of research from a causal perspective includes invariant risk minimization (Arjovsky et al., 2019), invariant causal prediction (Peters et al., 2016), and causal representation learning (Schölkopf et al., 2021). The overall motivation here is to learn representations that are robust to domain-specific spurious correlations. In other words, it is postulated that "causal" features are the right kind of invariance that will enable OOD generalization. Despite these successful applications, DG remains a challenging open problem.
We differentiate our work from existing ones as follows. First, we postulate the existence of a domain-invariant structure at the distributional level rather than at the level of the data representation, which is a common assumption in DG. This is motivated by theoretical results (Mansour et al., 2009; Hoffman et al., 2018a) stating that a distribution-weighted combination of source hypotheses represents the ideal hypothesis. Furthermore, our distributional assumption, as we argued in Section 2, generalizes previous work that proposes to use domain-specific knowledge to tackle the problem of DG from a more elementary setting. For example, approaches such as Piratla et al. (2020); Monteiro et al. (2021) can be compared to our GDUs as domain-specific predictors in the special case where each elementary domain represents a single source domain. However, GDUs do not assume the existence of a single common classifier for all the domains, providing instead a combination of multiple common classifiers shared between different source domains.
Second, we incorporate the I.E.D. assumption directly into our model's architecture, as shown in Figure 2. Designing effective architectures for DG has been largely neglected (Zhou et al., 2020, Sec. 4.1). Last, we do not assume access to domain information. Although obtaining such information can be difficult in practice (see our short discussion in Appendix C.4; Niu et al., 2017), DG methods that can deal with its absence (e.g., Huang et al. (2020); Carlucci et al. (2019); Li et al. (2018c)) are still scarce (Zhou et al., 2020, Sec. 4.2).
5 EXPERIMENTS
Since ERM is one of the strongest baselines in DG (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021), we first compare our approach to ERM and ensemble learning (Table 2 and Appendix C.1). Second, we benchmark our approach against state-of-the-art DG methods (e.g., CORAL, LISA, IRM, FISH, Group DRO) focusing on image, graph, and text data (Table 3 and Appendix C.4). Third, we analyse the GDUs' robustness against DS that occurs in daily clinical practice (Table 12 and Appendix C.3). Finally, in Appendix C.2, we conduct an ablation study focusing on the representation learned during training (Appendix C.2.2). In our experiments, we distinguish two modes of training the DG layer: fine tuning (FT), where we extract features using a pre-trained model, and end-to-end training (E2E), where the FE and the DG layer are jointly trained1.
5.1 PROOF-OF-CONCEPT BASED ON DIGITS CLASSIFICATION
Following Feng et al. (2020) among others, we create a multi-source dataset by combining five publicly available digits image datasets, namely MNIST (Lecun et al., 1998), MNIST-M (Ganin & Lempitsky, 2015), SVHN (Netzer et al., 2011), USPS, and Synthetic Digits (SYN) (Ganin &
1All source code is made available on GitHub.
Lempitsky, 2015). The task is to classify digits between zero and nine. Each of these datasets is in turn considered an out-of-training target domain which is inaccessible during training, and the remaining four are the source domains. Details are given in Appendix C.1. Table 2 summarizes the results for the most challenging out-of-training target domain, namely MNIST-M. In Appendix C.1, we provide the results on the remaining target domains in Table 7 and a discussion of heuristics for choosing hyperparameters for our GDUs. Our method noticeably improves the mean accuracy for all datasets and decreases the standard deviation in comparison to the ERM and ensemble baselines, making the results more stable across the ten iterations reported.
Table 2: Results Digits experiment. The mean (standard deviation) accuracy for ten runs is reported. Best results are bold.

                       MNIST-M
ERM   Single           63.00 (3.20)
ERM   Ensemble         62.87 (1.50)
FT    CS               68.55 (0.80)
FT    MMD              68.62 (0.70)
FT    PROJECTION       68.56 (0.91)
E2E   CS               69.25 (0.61)
E2E   MMD              69.04 (0.83)
E2E   PROJECTION       68.67 (0.98)
We also compare our methods with related work that uses domain information and data augmentation, based on the results of Li et al. (2021) (Table 9 in Appendix C.1). Although data augmentation is a comparatively strong approach in DG and we, at the same time, do not use domain information, we obtain results comparable to the baselines reported by Li et al. (2021).
Ablation study We chose the digits dataset to analyze each component of our DG layer (first paragraph of Appendix C.1 and Appendix C.2). We (A) vary M and N (Figure 9) and the strength of the regularization terms (Figure 6, Figure 7, and Figure 8) to assess the sensitivity of the DG layer to the choice of hyperparameters, and (B) visualize the output of the FE (Figure 11). Our ablation study in (A) reveals stable results across different sets of hyper-parameters. While the layer is not sensitive to the choice of regularization strength, we recommend not to omit the regularization completely, although the computational expenses decrease without the orthogonal regularization. As an illustration in (B), we project the output of the FE trained with a dense layer (ERM) and with the DG layer by t-SNE (t-distributed stochastic neighbor embedding). The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
5.2 WILDS BENCHMARK
To challenge the I.E.D. assumption and the OOD generalization capabilities of the GDUs, we use WILDS, a curated set of real-world experiments for benchmarking DG methods (Koh et al., 2021). Further, WILDS is a semi-synthetic dataset that operates under assumptions similar to the source component shift (Koh et al., 2021). We consider the following eight datasets, which represent real-world DG tasks: Camelyon17, FMoW, Amazon, iWildCam, RxRx1, OGB-MolPCBA, CivilComments, and PovertyMap. We closely follow Koh et al. (2021) for the experiments. Details on datasets and benchmark methods are given in Appendix C.4. We present our benchmarking in Table 3. Our results are achieved out-of-the-box (i.e., with default parameters), since hyperparameter optimization has a substantial impact on the generalization performance (Gulrajani & Lopez-Paz, 2020) and we aim to highlight the improvements solely attributable to our GDUs.
First, we observe strengths and weaknesses of the benchmarks across the different datasets: each of them falls below ERM at least once. In contrast, although GDUs show similar behavior across the datasets, performing very well for some (e.g., FMoW, PovertyMap), they do not fall below ERM in any of the GDU experiments conducted. In addition, the baselines require domain information; our approach requires less information, yet achieves comparable results to the benchmarks.
5.3 ECG EXPERIMENT
The PhysioNet/Computing in Cardiology Challenge 2020 (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020) aims to identify clinical diagnoses from 12-lead ECG recordings from 6 different databases. This publicly available pooled dataset contains 43,101 recordings sampled with various sampling frequencies and lengths. Each recording is labeled as having one or more of 24 cardiac abnormalities; hence, the task is to perform a multi-label binary classification. For our experiment, we iterate over the databases, taking one at a time as the test domain while utilizing
the remaining five databases for training. The performance was measured according to the original PhysioNet challenge score. This generalized intersection-over-union score assigns partial credit to misdiagnoses that result in similar treatments or outcomes. The score is then adjusted for a solution that always selects the normal/majority class and normalized for the perfect solution. Therefore, the score can have negative values and a best possible score of 1.
Table 4 reports results for the ECG experiments (see Appendix C.3 for details). For this clinical time-series data, we observe an improvement in mean score and a reduction in standard deviation over the ERM and ERM ensemble baselines across all DG tasks. We attribute poorer performance for the PTB dataset to the fact that it contains considerably longer recordings than other datasets (except for INCART which, however, contains only 75 samples) and a higher sampling rate (1000Hz vs. 500Hz and 257Hz). The negative challenge score for the PTB-XL dataset is due to the presence of previously unobserved labels in other datasets as well as a considerably smaller amount of data for training since the PTB-XL dataset comprises the majority of all samples (21,837 out of 43,101).
Table 4 (excerpt; challenge score per held-out database, mean (standard deviation)):
FT CS  0.1830 (0.0061)  0.2950 (0.0035)  0.1595 (0.0313)  -8.8802 (0.1069)  -0.1932 (0.0168)  0.1853 (0.0036)
FT MMD 0.1877 (0.0077)  0.3011 (0.0035)  0.2100 (0.0413)  -8.8082 (0.1458)  -0.1567 (0.0211)  0.1919 (0.0036)
6 CONCLUSION AND DISCUSSIONS
We introduced the I.E.D. assumption, postulating that real-world distributions are composed of elementary distributions that remain invariant across different domains and showed that it implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. Empirical results based on real-world data support the practicality of the I.E.D. assumption and that we can learn such a representation. Further, we presented a modular neural network layer consisting of Gated
Domain Units (GDUs) that leverage the I.E.D. assumption. Our GDUs can substantially improve the downstream performance of learning machines in real-world DG tasks. Across our experiments, we observed that for some datasets FT is better than E2E and vice versa. In E2E training, the feature extractor (encoder) is jointly trained with GDUs. Hence, the latent representation is stochastic during training, meaning that we have variability in the representation fed into GDUs between epochs. In contrast, in FT, the feature extractor is pretrained and always produces the same embedding. Especially with large feature extractors such as ResNet-50, learning the elementary domains can be more effective when we avoid any stochasticity in the latent representation.
Limitations. A major limitation of our I.E.D. assumption is the lack of theoretical evidence that it holds in practice. We aim to expand the scope of the theoretical understanding of the I.E.D. assumption and the GDUs. In addition, the particular theoretical setting of Albuquerque et al. (2019) (i.e., each elementary domain represents a source domain) seems promising for extending their generalization guarantee to cases where our I.E.D. assumption holds. Second, our GDU layer induces additional computational overhead due to the regularization and a model size that increases as a function of the number of elementary domains. Notably, our improvement is achieved with a relatively small number of elementary domains, indicating that the increased complexity is not a coercive consequence of applying the DG layer. Also, the results achieved are not a consequence of increased complexity, as the ensemble baseline shows.
Future work. We expect the I.E.D. assumption and GDUs to be adapted, yielding novel applications that tackle DG. For example, we suggest dynamically increasing the number of elementary domains during learning until their distributional variance reaches a plateau as a measure of their heterogeneity. Hence, one would learn the number of elementary domains instead of fixing it prior to training.
Appendices
Table of Contents
A Proofs
  A.1 Proof of Lemma 1
  A.2 Proof of Proposition 3.1
B Details on the Gated Domain Units
  B.1 Real-world example: Visualizations
  B.2 Detailed View of the Regularization Term ΩOLSD
  B.3 Visualization of DG Layer
C Experiments
  C.1 Digits Experiment
  C.2 Ablation Study
  C.3 ECG Experiment
  C.4 WILDS Benchmarking Experiments
A PROOFS
A.1 PROOF OF LEMMA 1
Proof. The result holds trivially for K = 1. For K ≥ 2 and by the I.E.D. assumption, Ps(X,Y ) = ∑_{j=1}^K αjPj(X,Y ) for some α ∈ ∆K . Then, we can write the risk functional for each f ∈ F as
R(f) = ∫ L(y, f(x)) dPs(x, y) = ∫ L(y, f(x)) d(∑_{j=1}^K αjPj(x, y)) = ∑_{j=1}^K αj ∫ L(y, f(x)) dPj(x, y) = ∑_{j=1}^K αjRj(f),
where Rj : F → R+ is the elementary risk functional associated with the elementary distribution Pj(X,Y ). Hence, the Bayes predictors satisfy
f∗ ∈ argmin_{f∈F} R(f) = argmin_{f∈F} ∑_{j=1}^K αjRj(f). (A.3)
Since the right-hand side of Equation A.3 corresponds to the linear scalarization of a multi-objective function (R1, . . . , RK), its solution (i.e., a stationary point) is Pareto-optimal with respect to these objective functions (Ma et al., 2020, Definition 3.1); see also Hillermeier (2001a;b). That is, the Bayes predictors for a data distribution that satisfies the I.E.D. assumption must belong to the Pareto set FPareto := {f∗ : f∗ = argmin_{f∈F} ∑_{j=1}^K αjRj(f), α ∈ ∆K} ⊂ F.
A.2 PROOF OF PROPOSITION 3.1
Proof. Suppose we have a representation,
µP = ∑_{j=1}^M βjµVj with ⟨µVi , µVj ⟩H = 0 for all i ̸= j, (A.1)
i.e., {µV1 , . . . , µVM } are pairwise orthogonal. We want to minimize the MMD by minimizing
∥µP − ∑_{j=1}^M βjµVj ∥²H = ⟨µP, µP⟩H − 2⟨µP, ∑_{j=1}^M βjµVj ⟩H + ⟨∑_{i=1}^M βiµVi , ∑_{j=1}^M βjµVj ⟩H (A.2)
= ∥µP∥²H − 2 ∑_{j=1}^M βj⟨µP, µVj ⟩H + ∑_{i=1}^M ∑_{j=1}^M βiβj⟨µVi , µVj ⟩H (A.3)
= ∥µP∥²H − 2 ∑_{j=1}^M βj⟨µP, µVj ⟩H + ∑_{j=1}^M β²j ∥µVj ∥²H , (A.4)
where the last step uses the pairwise orthogonality, ⟨µVi , µVj ⟩H = δij⟨µVi , µVj ⟩H. By defining
Φ(β) := ∥µP − ∑_{j=1}^M βjµVj ∥²H , (A.5)
we can find the optimal βj by setting the partial derivative to zero:
∂Φ/∂βj = −2⟨µP, µVj ⟩H + 2βj∥µVj ∥²H = 0 (A.6)
⇔ βj∥µVj ∥²H = ⟨µP, µVj ⟩H (A.7)
⇔ β∗j = ⟨µP, µVj ⟩H / ∥µVj ∥²H . (A.8)
Please note that the function Φ is convex.
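For intuition only, a small finite-dimensional numeric check of Proposition 3.1 can be written as follows, where KMEs are ordinary vectors and the RKHS inner product is the Euclidean dot product; all names and dimensions are illustrative assumptions.

```python
# Numeric sanity check of Prop. 3.1 in a finite-dimensional feature space.
import torch

torch.manual_seed(0)
d, M = 16, 4
# build pairwise-orthogonal "elementary KMEs" via a QR decomposition
Q, _ = torch.linalg.qr(torch.randn(d, M))
mu_V = Q.T * torch.rand(M).unsqueeze(1)          # orthogonal rows, arbitrary norms
mu_P = torch.rand(M) @ mu_V                      # a mixture lying in span{mu_V_j}

beta_star = (mu_P @ mu_V.T) / (mu_V ** 2).sum(dim=1)   # coefficients from Prop. 3.1
residual = mu_P - beta_star @ mu_V
print(residual.norm())                            # ~0: the projection recovers mu_P exactly
```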
B DETAILS ON THE GATED DOMAIN UNITS
B.1 REAL-WORLD EXAMPLE: VISUALIZATIONS
As written in Section 2.1, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution. Figure 3 depicts our assumption and how it differs from existing works 2.
To exploit this assumption in out-of-distribution (OOD) generalization, we developed a modular neural network layer that consists of so-called Gated Domain Units (GDUs). In Figure 4, we visualized the fundamental concept of the GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between observation and each of the corresponding elementary distributions, which are then used to form a weighted ensemble of learning machines. In other words, for a previously unseen individual, we aim to determine the coefficients and quantify each factor’s contribution without any information about the individual’s origin.
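A condensed sketch of such a layer is given below, assuming the similarity_coefficients helper from Section 3.1's snippet; this is an illustrative PyTorch re-implementation rather than the released code, and the linear heads merely stand in for arbitrary learning machines fj.

```python
# Illustrative GDU-based DG layer: weighted ensemble of per-domain heads.
import torch
import torch.nn as nn

class GDULayer(nn.Module):
    def __init__(self, feat_dim, out_dim, num_domains=5, basis_size=10, sigma=7.5, kappa=1.0):
        super().__init__()
        self.sigma, self.kappa = sigma, kappa
        # learnable elementary domain bases V: (M, N, feat_dim), i.e. the "domain memory"
        self.V = nn.Parameter(torch.randn(num_domains, basis_size, feat_dim))
        # one learning machine f_j per elementary domain (here: a linear head)
        self.heads = nn.ModuleList([nn.Linear(feat_dim, out_dim) for _ in range(num_domains)])

    def forward(self, x):
        # beta: (b, M) from the kernel softmax over negative MMD (see the earlier sketch)
        beta = similarity_coefficients(x, self.V, self.kappa, self.sigma, measure="mmd")
        preds = torch.stack([head(x) for head in self.heads], dim=1)   # (b, M, out_dim)
        return (beta.unsqueeze(-1) * preds).sum(dim=1)                 # weighted ensemble
```

A model would then be assembled as, e.g., nn.Sequential(feature_extractor, GDULayer(feat_dim=256, out_dim=10)), where the feature dimension and output size are placeholders.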
B.2 DETAILED VIEW OF THE REGULARIZATION TERM ΩOLSD
First, consider the single term ∥ϕ(x̃i) − ∑_{j=1}^M βijµVj ∥²H , which can be expressed as
∥ϕ(x̃i) − ∑_{j=1}^M βijµVj ∥²H = ∥ϕ(x̃i)∥²H − 2⟨ϕ(x̃i), ∑_{j=1}^M βijµVj ⟩H + ∥∑_{j=1}^M βijµVj ∥²H , (B.1)
where we refer to the three summands as Term (1), Term (2), and Term (3).
AD (1):
We begin with Term (1) and write ∥ϕ(x̃i)∥2H as ∥ϕ(x̃i)∥2H = ⟨ϕ(x̃i), ϕ(x̃i)⟩H = k(x̃i, x̃i). We could evaluate this term using the kernel function k for each data point in the batch b. However, since this
2Of note, Figure 3 is a complete fictive example, and we do not want to make medical implications in any way.
term does not depend on the elementary domains {V1, . . . , VM}, it is unnecessary to compute this value to minimize the penalty. Thus, we obtain a similar result by minimizing the penalty without considering ∥ϕ(x̃i)∥²H in the regularization.
AD (2):
Term (2) can be expressed as
⟨ϕ(x̃i), M∑ j=1 βijµVj ⟩H = M∑ j=1 βij⟨ϕ(x̃i), µVj ⟩H (B.2)
Implementation-wise, the evaluation of this term requires the calculation of the inner product ⟨ϕ(x̃i), µVj ⟩H. Since our CS and projection-based methods involve this inner product to determine the coefficients βij , we pre-compute ⟨ϕ(x̃i), µVj ⟩H once per mini-batch and store this information during training to avoid multiple calculations of the same term.
Moreover, the projection-based method does not apply softmax and has a linear form. Therefore, the term (2) can be simplified even further:
⟨ϕ(x̃i), ∑_{j=1}^M βijµVj ⟩H = ∑_{j=1}^M βij⟨ϕ(x̃i), µVj ⟩H (B.3)
= ∑_{j=1}^M (⟨ϕ(x̃i), µVj ⟩H / ∥µVj ∥²H) ⟨ϕ(x̃i), µVj ⟩H (B.4)
= ∑_{j=1}^M ⟨ϕ(x̃i), µVj ⟩²H / ∥µVj ∥²H . (B.5)
AD (3):
Last, we express the term (3) as follows
∥∑_{j=1}^M βijµVj ∥²H = ∑_{j=1}^M ∑_{k=1}^M βijβik⟨µVj , µVk⟩H , (B.6)
and calculate the inner product of the domains ⟨µVj , µVk⟩H by
⟨µVj , µVk⟩H = (1/N²) ∑_{l=1}^N ∑_{m=1}^N ⟨ϕ(v^l_j), ϕ(v^m_k)⟩H (B.7)
= (1/N²) ∑_{l=1}^N ∑_{m=1}^N k(v^l_j , v^m_k) =: Kjk , (B.8)
where N represents the number of vectors per domain basis. Note that this term does not depend on the input data x̃i and, hence, the matrix Kjk can be calculated once at the beginning of the optimization step and stored to be re-used for all the data points of a batch.
Combining Equation B.6 and Equation B.8 yields
∥∑_{j=1}^M βijµVj ∥²H = ∑_{j=1}^M ∑_{k=1}^M βijβik⟨µVj , µVk⟩H (B.9)
= (1/N²) ∑_{j=1}^M ∑_{k=1}^M βijβik ∑_{l=1}^N ∑_{m=1}^N k(v^l_j , v^m_k) (B.10)
= ∑_{j=1}^M ∑_{k=1}^M βijβikKjk (B.11)
= β_i^T K β_i . (B.12)
As a final step, we use the results for Term (1), (2), and (3) to obtain the desired regularization term
Ω^OLS_D = (1/b) ∑_{i=1}^b ∥ϕ(x̃i) − ∑_{j=1}^M βijµVj ∥²H (B.13)
= (1/b) ∑_{i=1}^b (∥ϕ(x̃i)∥²H − 2⟨ϕ(x̃i), ∑_{j=1}^M βijµVj ⟩H + ∥∑_{j=1}^M βijµVj ∥²H). (B.14)
As mentioned above, ∥ϕ(x̃i)∥2H is independent from the elementary domains, and, thus a constant in the regularization. Hence, we can exclude this term, which avoids additional computational effort.
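A hedged sketch of this batch-level computation is given below; it reuses the illustrative gaussian_kernel and domain_gram_matrix helpers from the earlier snippets and omits the constant ∥ϕ(x̃i)∥²H term, as argued above. It is a sketch under these assumptions, not the authors' exact implementation.

```python
# Illustrative batch computation of Omega_D^OLS from Eqs. (B.13)-(B.14),
# with the Gram matrix K precomputed once per optimization step.
import torch

def omega_ols(x, V, beta, sigma=7.5):
    """x: (b, d) FE outputs; V: (M, N, d) domain bases; beta: (b, M) coefficients."""
    M = V.shape[0]
    # Term (2): <phi(x_i), mu_{V_j}>_H, precomputed once per mini-batch     -> (b, M)
    dots = torch.stack([gaussian_kernel(x, V[j], sigma).mean(dim=1) for j in range(M)], dim=1)
    # Term (3): Gram matrix K_jk = <mu_{V_j}, mu_{V_k}>_H                    -> (M, M)
    K = domain_gram_matrix(V, sigma)
    term2 = (beta * dots).sum(dim=1)                                        # (b,)
    term3 = torch.einsum("bi,ij,bj->b", beta, K, beta)                      # beta_i^T K beta_i
    return (-2.0 * term2 + term3).mean()                                    # constant Term (1) omitted
```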
B.3 VISUALIZATION OF DG LAYER
Figure 5 depicts the layout of our DG layer.
C EXPERIMENTS
In this section, we provide a detailed description of the DG experiments presented in Section 5. Our Digits and ECG experiments are implemented using TensorFlow 2.4.1 and TensorFlow Probability 0.12.1. For the WILDS benchmarking we use PyTorch (version 1.11.0). All source code will be made available on GitHub at https://github.com/ (TensorFlow) and https://github.com/ (PyTorch). Overall, our experiments aim to show the validity of the invariant elementary distribution (I.E.D.) assumption and the Gated Domain Units (GDUs).
For the DG layer, we considered two modes of model training: fine tuning (FT) and end-to-end training (E2E). In FT scenario, we first pre-train the FE in the ERM single fashion. Then, we extract features using the pre-trained model and pass them to the DG layer for training the latter. For the E2E training, however, the whole model including the FE and DG layer is trained jointly from the very beginning.
C.1 DIGITS EXPERIMENT
Our experiment setup is closely related to Peng et al. (2019); Feng et al. (2020); Zhang et al. (2020); Zhao et al. (2018). Each dataset, except USPS, is split into training and test sets of 25,000 and 9,000 images, respectively. For USPS, we take the whole dataset for the experiment since it contains only 9,298 images3. Our experimental setup regarding datasets, data loader, and FE are based on existing work (Feng et al., 2020; Peng et al., 2019). The structure of the FE is summarized in Table 5 and the subsequent learning machine is a dense layer.
In the Empirical Risk Minimization (ERM) single experiment, we add a dense layer with 10 outputs (activation=tanh) as a classifier to the FE. In the Empirical Risk Minimization (ERM) ensemble experiment, we add M classification heads (dense layers with 10 outputs and tanh activation each) to the FE and average their outputs for the final prediction. This sets a baseline for our DG layer to show performance gains against an ERM model with the same number of learning machines.
For training, we resorted to the Adam optimizer with a learning rate of 0.001. We used early stopping and selected the best model weights according to the validation accuracy. For the validation data, we used the combined test splits only of the respective source datasets. The batch size was set to 512. Although the DG layer requires more computation resources than the ERM models, all digits experiments were conducted on a single GPU (NVIDIA GeForce RTX 3090).
Heuristics for the main parameters of the DG layer From a practical perspective, our layer requires choosing two main hyper-parameters: the number of elementary domains M and, since we use the characteristic Gaussian kernel, the corresponding bandwidth parameter σ. The parameter M determines the size of the ensemble of learning machines and, thus, for deep learning models, the overall network size. As a heuristic to choose M , we suggest clustering the output of a pre-trained FE. In the following, we provide an example. We pre-trained the FE for the test domain MNIST-M and passed the source data through this FE, clustering the output with the k-means algorithm. Subsequently, we analysed three different metrics (Calinski-Harabasz score, Davies-Bouldin score, and Silhouette score) to select the optimal number of clusters as the basis to choose M . All scores indicated between four and five clusters. Therefore, we set M to five and observed in Table 2 in Section 5 strong results when generalizing to the unseen test domain MNIST-M.
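A sketch of this clustering heuristic is given below; the function name, the candidate range, and the assumption that embeddings is an (n, d) array of pre-trained FE outputs are all illustrative.

```python
# Illustrative heuristic for choosing M: cluster FE embeddings and inspect cluster-quality scores.
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, davies_bouldin_score, silhouette_score

def suggest_num_domains(embeddings, candidates=range(2, 11), seed=0):
    scores = {}
    for m in candidates:
        labels = KMeans(n_clusters=m, random_state=seed, n_init=10).fit_predict(embeddings)
        scores[m] = {
            "calinski_harabasz": calinski_harabasz_score(embeddings, labels),
            "davies_bouldin": davies_bouldin_score(embeddings, labels),   # lower is better
            "silhouette": silhouette_score(embeddings, labels),
        }
    return scores  # pick M where the three scores roughly agree
```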
3We used the digits data from https://github.com/FengHZ/KD3A [last accessed on 2022-05-17, available under MIT License.] published in Feng et al. (2020).
As for the parameter σ, we resort to the median heuristic proposed in Muandet et al. (2016), that is, σ² = median{∥x̃i − x̃j∥² : i, j = 1, . . . , n}. While both heuristics require a pre-trained FE, cross-validation can act as a reasonable alternative. The hyper-parameters relevant for the DG layer are summarized in Table 6. In the FT setting, we applied the median heuristic presented above to estimate σ of the Gaussian kernel function, where the estimator is denoted as σ̂. Since the median heuristic is not applicable in the E2E scenario, σ was fixed to 7.5 for E2E.
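The median heuristic itself can be written in a few lines on (a subsample of) the embedded source data; the sketch below is an illustration under these assumptions rather than the exact code used in the experiments.

```python
# Illustrative median heuristic for the Gaussian-kernel bandwidth: sigma^2 = median ||x_i - x_j||^2.
import torch

def median_heuristic_sigma(x, max_points=2000):
    x = x[:max_points]                                  # subsample for tractability (assumption)
    sq_dists = torch.cdist(x, x) ** 2
    off_diag = sq_dists[~torch.eye(len(x), dtype=torch.bool)]
    return off_diag.median().sqrt()
```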
Note that our approach to choose the relevant parameters was kept very general to show the feasibility of the I.E.D. assumption and the generalization ability of GDUs and, most importantly, to provide easy-to-reproduce results. During training, additional epoch metrics can be subscribed using our custom DG layer callback, which may help to choose the model parameters. Furthermore, we observed that the elementary domains become naturally orthogonal during the experiments, and thus, we set λORTH relatively small. Since the orthogonal regularization puts additional computational burden, one could omit this term completely to speed up training.
Digit-DG Benchmark In previous research, the aforementioned digits data is not only used for domain adaptation (DA), but also for domain generalization (DG) methods. For the latter, Zhou et al. (2021b) and Li et al. (2021) introduced Digit-DG dataset and the evaluation protocol to benchmark seven DG methods and ERM 4. Unlike the Digits experiment described above, Digit-DG dataset from Zhou et al. (2021b) and Li et al. (2021) consists of only four datasets (without USPS) and a different FE summarized in Table 8. Therefore, we follow their instructions to conduct a fair comparison and ensure reproducibility. For the hyper-parameters, however, we kept the same values that we used for the Digits experiment, see Table 6.
4Results were reported by Zhou et al. (2021b) and Li et al. (2021). Of note, both authors did not report the standard deviation on their results.
As a first method, we consider the CCSA (Classification and Contrastive Semantic Alignment) method, which learns a domain-invariant representation by utilizing the CCSA loss (Motiian et al., 2017). Second, MMD-AAE (Maximum Mean Discrepancy-based Adversarial Autoencoders) extends adversarial autoencoders by a maximum mean discrepancy regularization to learn a domain-invariant feature representation (Li et al., 2018b). CrossGrad (Cross-Gradient) augments data by perturbing the input space using the cross-gradients of a label and a domain predictor (Shankar et al., 2018). Another augmentation-based DG method is L2A-OT (Learning to Augment by Optimal Transport) (Zhou et al., 2021b). Specifically, a data generator, trained to maximize the optimal transport distance between source and pseudo domains, is used to augment the source data. All aforementioned methods rely on the availability of domain information such as domain labels. To benchmark our layer against a method for DG without domain information, we resort to the JiGen (Jigsaw puzzle based Generalization) method (Carlucci et al., 2019). JiGen introduces an auxiliary loss for solving a jigsaw task during training. Further, we use the adaptive and non-adaptive stochastic feature augmentation (SFA-A and SFA-S, respectively) methods proposed by Li et al. (2021). In principle, both methods augment the latent feature embedding of a FE using random noise.
Our results are summarized in Table 9. As noted by Li et al. (2021), it is challenging to outperform augmentation-based DG methods. In addition, SFA-A and SF-S are computationally light (i.e., only adding random noise to the feature embedding) and do not require domain information (Li et al., 2021). Nevertheless, our layer achieves competitive results even against the strongest baselines in all DG tasks without requiring domain information.
C.2 ABLATION STUDY
C.2.1 MAIN COMPONENTS OF THE GATED DOMAIN UNIT
We chose the Digits dataset to conduct an ablation study, which is organized as follows: (1) ablation of the regularization terms presented in Section 3, (2) the effect of the orthogonal regularization for projection-based generalization, and (3) the effect on the FE's output.
As a reminder, we introduced the regularization to be dependent on the form of generalization (i.e., domain similarity measures or projection-based generalization in Section 3). For the domain similarity measure case, the regularization is
ΩD(∥g∥H) = λOLS Ω^OLS_D(∥g∥H) + λL1 Ω^L1_D(∥γ∥), (C.1)
where λOLS , λL1 ≥ 0. In the case of projection, the regularization is given by
ΩD(∥g∥H) = λOLS Ω^OLS_D(∥g∥H) + λORTH Ω⊥D(∥g∥H) (C.2)
with λOLS , λORTH ≥ 0. Although one can additionally choose the sparse regularization in projection-based generalization, we set the focus of the ablation study on the two main regularization terms, that is, the OLS and the orthogonal regularization. For (1), we vary the corresponding weights in Equation C.1 and Equation C.2 in the interval [0, 0.1] and display the mean classification accuracy for the most challenging classification task, MNIST-M, in the form of a heatmap. In
Figures 6-8, we see that the classification accuracy remains on an overall similar level which indicates that the DG layer is not very sensitive to the hyper-parameter change for MNIST-M as the test domain. Nevertheless, we observe that ablating the regularization terms by setting the corresponding weights to zero decreases the classification results and the peaks in performance occur when the regularization is included during training of the DG layer.
Applying the DG layer comes with additional overhead, especially the regularization that ensures the orthogonality of the elementary domain bases. This additional effort raises the question of whether ensuring the theoretical assumptions outweighs the much higher computational effort. Thus, in a second step, we analyze how the orthogonal regularization affects the orthogonality of the elementary domain bases (i.e., the spectral restricted isometry property (SRIP) value) and the loss function (i.e., the categorical cross-entropy).
In Figure 10, we depict the mean and standard deviation of the SRIP value and loss over five runs for 40 epochs. The SRIP value can be tracked during training with the DG layer’s callback functionalities. First, we observe that the elementary domains are almost orthogonal when initialized. Training the layer leads in the first epochs to a decrease in orthogonality. This initial decrease happens because
cross-entropy has a stronger influence on the optimization than the regularization in the first epochs. After five epochs, the cross-entropy decreases to a threshold at which the regularization becomes more effective and the orthogonality of the elementary domain bases increases again. In Figure 10, we also observe that ablating the orthogonal regularization, while leading to better orthogonality of the domains, does not significantly affect the overall cross-entropy during training.
Finally, we project the output of the FE trained with a dense layer (ERM) and with the DG layer by t-SNE (t-distributed stochastic neighbor embedding) in Figure 11. The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
C.2.2 INTERPRETATION OF THE ELEMENTARY DOMAINS
We analyze the learned elementary domains in the digits experiment based on two visualizations, choosing the maximum mean discrepancy (MMD) as the similarity measure and MNIST-M as the test domain. The first visualization depicts the MMD between the datasets (i.e., MNIST, MNIST-M, SVHN, USPS, and Synthetic Digits (SYN)) and the learned elementary domains (i.e., V1 − V5) as a heatmap (see Figure 12 (left)). The heatmap indicates that the source and test domains are close to one another in terms of the MMD. Hence, we expect this closeness to be reflected in the learning of the elementary domains. In other words, we expect that each elementary domain contributes similarly to the source and test domains (i.e., the coefficients β are similar for each of these domains). In Section 3.1, we derive the coefficients by applying a kernel softmax function to the negative MMD distances. Since the MMD distances between the source/test domains and the elementary domains are similar, the coefficients will be similar too. We conclude that the learned elementary domains represent the same distributional characteristics that existed among the source and test domains.
In the second visualisation, we show the t-SNE (t-distributed stochastic neighbor embedding) of the feature extractor output for each source and test domain alongside the elementary domains in Figure 12 (right). First, we observe that the learned elementary domain bases form distinctive clusters. We
see these clusters as a validation of our hypothesis that each GDU learns to mimic samples generated from a corresponding elementary distribution as pointed out in Section 2.2. However, we can not answer whether and where these elementary distributions occur in the real world. Moreover, these elementary distributions yet lack interpretability.
In summary, the MMD heatmap and t-SNE embeddings of the learned elementary and source domains on Figure 12 indicate that the GDUs learn to represent distributional structures in the dataset.
C.3 ECG EXPERIMENT
We adopted the task of multi-label binary classification of 12-lead electrocardiogram (ECG) signals combined from 6 different sources introduced in the PhysioNet/Computing in Cardiology Challenge 20205 (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020). Each ECG recording is annotated with 24 binary labels indicating whether or not a certain cardiac abnormality is present. The data is aggregated from 6 different databases and contains 43,101 recordings sampled
5https://physionetchallenges.org/2020/ [last accessed on 2021-03-10, available under Creative Commons Attribution 4.0 International Public License].
with various sampling frequencies, number of subjects, and lengths. Table 10 summarizes most important details about the data sources for this experiment.
According to the original challenge score, we measure the performance in terms of the generalized Intersection-over-Union (IoU) score where partial credit is assigned to misdiagnoses that result in similar treatments or outcomes. The score is defined as
score := (yᵀ · W · ŷ) / (y ∪ ŷ), (C.3)
where y, ŷ ∈ {0, 1}24 represent actual labels and predicted labels and W stands for the partial credit-assignment matrix provided as a part of the challenge description. Note that in case of identity matrix W the score is exactly the Intersection-over-Union (IoU) score. The score is then adjusted for a solution ymajority, which always predicts the normal/majority class, and is moreover normalized
Figure 12: MMD heatmap (left) and t-SNE embedding (right) for the test domain MNIST-M. Panels show the MMD, CS, and projection-based variants over the elementary domains V0–V4 and the datasets MNIST, MNIST-M, SVHN, SYN, and USPS.
for the perfect solution y. Therefore, the final score can have negative values and the best possible score of 1 and is formalized as
adjusted score := (score(y, ŷ) − score(y, ymajority)) / (score(y, y) − score(y, ymajority)). (C.4)
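For illustration, a simplified sketch of the (adjusted) score is given below; the official challenge uses the organizers' evaluation code, so the weight matrix W, the normal-class index, and the exact handling of the union term are assumptions here.

```python
# Simplified sketch of the PhysioNet 2020 challenge score and its adjusted variant.
import numpy as np

def challenge_score(y, y_hat, W):
    """y, y_hat: binary arrays of shape (n, 24); W: (24, 24) partial-credit matrix."""
    numerator = np.sum((y @ W) * y_hat)                  # sum_i y_i^T W y_hat_i
    union = np.sum(np.logical_or(y, y_hat))              # size of the label union
    return numerator / max(union, 1)

def adjusted_score(y, y_hat, W, normal_idx):
    y_majority = np.zeros_like(y)
    y_majority[:, normal_idx] = 1                        # always predict the normal class
    s, s_maj, s_best = (challenge_score(y, p, W) for p in (y_hat, y_majority, y))
    return (s - s_maj) / (s_best - s_maj)
```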
As a pre-processing step, we down-sampled all the signals to 125 Hz and applied Z-score normalization, random amplification, and random stretching according to Vicar et al. (2020). For that we partially adopted the code provided by the authors6. Additionally, we cropped each signal to its first 15,000 points if the signal was too long (mostly applied to the INCART database). Each dataset was randomly split into train and validation parts with a 3:1 ratio. During each experiment, we used the train splits of 5 databases for training and utilized the validation splits of the training databases for early stopping. The held-out 6th database was used for inference and testing only.
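A simplified sketch of this preprocessing (our reading of the described pipeline, not the authors' exact code; the random amplification and stretching augmentations are omitted) is shown below.

```python
# Simplified ECG preprocessing sketch: resample to 125 Hz, crop, per-lead Z-score.
import numpy as np
from scipy.signal import resample_poly

def preprocess_recording(signal, fs, target_fs=125, max_len=15000, eps=1e-8):
    """signal: (12, T) raw ECG leads sampled at fs Hz."""
    resampled = resample_poly(signal, up=target_fs, down=int(fs), axis=1)
    cropped = resampled[:, :max_len]
    mean = cropped.mean(axis=1, keepdims=True)
    std = cropped.std(axis=1, keepdims=True)
    return (cropped - mean) / (std + eps)
```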
Table 11 describes the architecture of the FE used for the task. Since the provided ECG recordings have different lengths, we used TensorFlow padded batching, which pads all the recordings in a batch to the length of the longest sequence in the batch. Therefore, inputs from different batches can have different lengths, so the spatial dimensions of the 1D-convolutional layers are not predefined and are presented as *.
We used the Adam optimizer to optimize a weighted binary cross-entropy loss defined as −(wpos · y · log ŷ + (1 − y) · log (1 − ŷ)). Positive weights wpos are defined per class based on the training split data, inversely proportional to the frequency of positive labels for each class. The learning rate was initially set to 0.001 and during training reduced by a factor of 0.2 if the training loss was not improving for 10 epochs. We also applied early stopping and restored the weights of the best model according to the validation accuracy after training ended. Since the input samples for this experiment are larger than in the previous one, we decreased the batch size to 64. Each ECG experiment was performed on a single GPU (Nvidia GTX 1080 Ti). The parameters relevant for the DG layer are summarized in Table 12. We have to emphasize that we did not perform extensive hyper-parameter tuning since our goal was to show the feasibility of the I.E.D. assumption and GDUs while keeping the experiments reproducible.
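For clarity, a minimal sketch of the weighted binary cross-entropy described above is given below; it assumes predicted probabilities and per-class positive weights computed on the training split, and is an illustration rather than the exact training code.

```python
# Illustrative weighted binary cross-entropy with per-class positive weights.
import torch

def weighted_bce(y_hat, y, w_pos, eps=1e-7):
    """y_hat, y: (b, 24) predicted probabilities and binary labels; w_pos: (24,)."""
    y_hat = y_hat.clamp(eps, 1.0 - eps)
    loss = -(w_pos * y * torch.log(y_hat) + (1.0 - y) * torch.log(1.0 - y_hat))
    return loss.mean()
```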
6https://github.com/tomasvicar/BUTTeam [last accessed on 2022-05-17, available under BSD 2-Clause License].
C.4 WILDS BENCHMARKING EXPERIMENTS
For comparison of our approach and benchmarking, we followed the standard procedure of the WILDS experiments described in Koh et al. (2021). As a technical note, all WILDS experiments have been implemented in PyTorch (version >= 1.7.0) based on the specifications made in Koh et al. (2021) and their code published on https://github.com/p-lambda/wilds [last accessed on 2022-05-17, available under MIT License]. The results for the benchmarks were retrieved from the official leaderboard https://wilds.stanford.edu/leaderboard/ [last accessed on 2022-09-26].
Camelyon17 In medical applications, the goal is to apply models trained on a comparatively small set of hospitals to a larger number of hospitals. For this application, we study images of tissue slides under a microscope to determine whether a patient has cancer or not. Shifts in patient populations, slide staining, and image acquisition can impede model accuracy in previously unseen hospitals. Camelyon17 comprises images of tissue patches from five different hospitals. While the first three hospitals are the source domains (302,436 examples), the fourth and fifth are the validation (34,904 examples) and test domain (85,054 examples), respectively.
We deviate from the specifications made in Koh et al. (2021) in terms of the FE. We use the FE from Feng et al. (2020); Peng et al. (2019), since we observed a higher mean accuracy and faster training than with the DenseNet-121 FE originally proposed by Koh et al. (2021) (Huang et al., 2017). We trained the FE from scratch. Both ERM and the DG layer were trained over 250 epochs with early stopping and a learning rate of 0.001, which is reduced by a factor of 0.2 if the cross-entropy loss has not improved after 10 epochs. All results were aggregated over ten runs.
FMoW Analyzing satellite images with machine learning (ML) models may enable novel possibilities in tackling global sustainability and economic challenges such as population density mapping and deforestation tracking. However, satellite imagery changes over time due to human behavior (e.g., infrastructure development), and the extent of change differs between regions. The Functional Map of the World (FMoW) dataset consists of satellite images from different continents and years: training (76,863 images; between 2002 and 2013), validation (19,915 images; between 2013 and 2016), and test (22,108 images; between 2016 and 2017). The objective is to classify each image into one of 62 building or land-use categories (e.g., shopping mall).
As instructed in Koh et al. (2021), we used the DenseNet-121 pre-trained on ImageNet without L2-regularization. For the optimization, we use the Adam optimizer with a learning rate of 1e-4, which is decayed by a factor of 0.96 per epoch. The models were trained for 50 epochs with early stopping and a batch size of 64. Additionally, we report the worst-region accuracy, which is a specific metric used for FMoW. This worst-region accuracy reports the worst accuracy across the following regions: Asia, Europe, Africa, America, and Oceania (see Koh et al. (2021) for the details). Again, we report the results over three runs.
Amazon Recent research shows that consumer-facing machine learning applications exhibit large performance disparities across different sets of users. To study these performance disparities, WILDS (Koh et al., 2021) leverages a variant of the Amazon Review dataset. The Amazon-WILDS dataset is composed of data from 3,920 domains (reviewers), and the task is multi-class sentiment classification, where the model receives a review text and has to predict the rating from one to five.
To split this dataset, disjoint sets of reviewers are used for training, validation, and test: training (245,502 reviews from 1,252 reviewers), validation (100,050 reviews from 1,334 reviewers), and test (100,050 reviews from 1,334 reviewers).
For the experiments and baseline models | 1. What is the focus and contribution of the paper on domain generalization?
2. What are the strengths of the proposed approach, particularly in its theoretical foundation and versatility?
3. What are the weaknesses of the paper regarding its assumptions and limitations in extrapolating beyond the source domains?
4. Do you have any concerns or suggestions regarding the paper's citations and comparisons with related works?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a method for domain generalization. The core assumption is that every domain can be decomposed into elementary domains (something like base domains) which are invariant. Every domain is a linear combination of these base domains. With this assumption, we can have elementary domain prediction functions and we just need to figure out how to linearly combine their predictions. For a given input example, this is done by measuring the similarity of the input example with each base domain.
Strengths And Weaknesses
The idea is theoretically interesting and it certainly is versatile within the convex hull of the base domains. It relates to some of the existing work on recycling neural networks. I suggest the authors cite those papers and, if possible, compare with those baselines.
Some questions for the authors:
In Figure 7 of the supplementary material, how come adding elementary domains does not improve results?
The base domains are based on the domains observed from source data. Therefore we can only assume the generalizability of this approach extends to test domains which can be considered an interpolation of source domains. How can the generalizability of the approach extrapolate outside the convex hull of the source domains?
Clarity, Quality, Novelty And Reproducibility
The paper is easy to read. Ablation studies are good. The extensive supplementary material and the model diagrams help reproducibility.
ICLR | Title
Gated Domain Units for Multi-source Domain Generalization
Abstract
Distribution shift (DS) is a common problem that deteriorates the performance of learning machines. To tackle this problem, we postulate that real-world distributions are composed of elementary distributions that remain invariant across different environments. We call this an invariant elementary distribution (I.E.D.) assumption. The I.E.D. assumption implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. To exploit this property in domain generalization (DG), we developed a modular neural network layer that consists of Gated Domain Units (GDUs). Each GDU learns an embedding of an individual elementary distribution that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between an observation and each of the corresponding elementary distributions which are then used to form a weighted ensemble of learning machines. Because our layer is trained with backpropagation, it can naturally be integrated into existing deep learning frameworks. Our evaluation on image, text, graph, and time-series data shows a significant improvement in the performance on out-of-training target domains without domain information and any access to data from the target domains. This finding supports the practicality of the I.E.D. assumption and demonstrates that our GDUs can learn to represent these elementary distributions.
1 INTRODUCTION
A fundamental assumption in machine learning is that training and test data are independently and identically distributed (I.I.D.). This assumption ensures consistency results from statistical learning theory, meaning that the learning machine obtained from empirical risk minimization (ERM) attains the lowest achievable risk as the sample size grows (Vapnik, 1998; Schölkopf, 2019). Unfortunately, a considerable amount of research and real-world applications in the past decades has provided staggering evidence against this assumption (Zhao et al., 2018; 2020; Ren et al., 2019; Taori et al., 2020) (see D'Amour et al. (2020) for case studies). The violation of the I.I.D. assumption is usually caused by a distribution shift (DS) and can result in inconsistent learning machines (Sugiyama & Kawanabe, 2012), implying the loss of performance guarantees of machine learning models in the real world. Therefore, to tackle DS, recent work advocates for domain generalization (DG) (Blanchard et al., 2011; Muandet et al., 2013; Li et al., 2017; 2018b; Zhou et al., 2021a). This generalization to utterly unseen domains is crucial for robust deployment of the models in practice, especially when new, unforeseeable domains emerge after model deployment. However, the most important question that DG seeks to answer is how to identify the right invariance that allows for generalization.
The contribution of this work is twofold. First, we advocate that real-world distributions are composed of smaller “units” called invariant elementary distributions that remain invariant across different domains; see Section 2.1. Second, we propose to implement this hypothesis through so-called gated domain units (GDUs). Specifically, we developed a modular neural network layer that consists of GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to express the domain similarities during training. For this purpose, we adopt the theoretical framework of reproducing kernel Hilbert space (RKHS) to retrieve a geometrical representation of each distribution in the form of a kernel mean embedding (KME) without information loss (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Sriperumbudur et al., 2010; Muandet et al., 2017). This representation accommodates methods based on analytical geometry to measure similarities between distributions.
We show that these similarity measures can be learned and utilized to improve the generalization capability of deep learning models to previously unseen domains.
The remainder of this paper is organized as follows: Our theoretical framework is laid out in Section 2 with our modular DG layer implementation shown in Section 3. In Section 4, we outline related work. Our experimental evaluations are presented in Section 5. Finally, we discuss potential limitations of our approach and future work in Section 6.
2 DOMAIN GENERALIZATION WITH INVARIANT ELEMENTARY DISTRIBUTIONS
We assume a mixture component shift for the multi-source DG setting. This shift refers to the most common DS stating that the data is made up of different sources, each with its own characteristics, and their proportions vary between the training and test scenario (Quinonero-Candela et al., 2022). Our work thus differs in the assumption from related work in DG, in which the central assumption is the covariate shift (i.e., the conditional distribution of the source and test data stays the same) (David et al., 2010). In the following, let X and Y be the input and output space, with a joint distribution P. We are given a set of D labeled source datasets {Dsi }Di=1 with Dsi ⊆ X × Y . Each of the source datasets is assumed to be I.I.D. generated by a joint distribution Psi with support on X ×Y , henceforth denoted domain. The set of probability measures with support on X × Y is denoted by P . The multi-source dataset Ds comprises the merged individual source datasets {Dsj}Dj=1. We aim to minimize the empirical risk, see Section 3.3 for details. Important notation is summarized in Table 1.
2.1 INVARIANT ELEMENTARY DISTRIBUTIONS
Similar to Mansour et al. (2009; 2012); Hoffman et al. (2018a), we assume that the distribution of the source dataset can be described as a convex combination Ps = ∑_{j=1}^D αsj Psj , where αs = (αs1, . . . , αsD) is an element of the probability simplex, i.e., αs ∈ ∆D := {α ∈ R^D | αj ≥ 0 ∧ ∑_{j=1}^D αj = 1}. In other words, αj quantifies the contribution of each individual source domain to the combined source domain.
In contrast, we generalize their problem descriptions: We express the distribution of each domain as a convex combination of K elementary distributions {Pj}Kj=1 ⊂ P , meaning that Ps = ∑K j=1 αjPj where α ∈ ∆K . Our main assumption is that these elementary distributions remain invariant across the domains. The advantage is that we can find an invariant subspace at a more elementary level, as opposed to when we consider the source domains as some sort of basis for all unseen domain. Figure 1 illustrates this idea.
Theoretically speaking, the I.E.D assumption is appealing because it implies the invariant structure in the solution space, as shown in the following lemma. The proof is given in Appendix A.1. Lemma 1. Let L : Y × Y → R+ be a non-negative loss function, F a hypothesis space of functions f : X → Y , and Ps(X,Y ) a data distribution. Suppose that the I.E.D assumption holds, i.e., there exist K elementary distributions P1, . . . ,PK such that any data distribution can be expressed as Ps(X,Y ) = ∑K j=1 αjPj(X,Y ) for some α ∈ ∆K . Then, the corresponding Bayes predictor f∗ ∈ argminf∈F E(X,Y )∼P[L(Y, f(X))] is Pareto-optimal with respect to a vector of elementary risk functionals (R1, . . . , RK) : F → RK+ where Rj(f) := E(X,Y )∼Pj [L(Y, f(X))].
Lemma 1 implies that, under the I.E.D. assumption, Bayes predictors must belong to a subspace of F called the Pareto set F_Pareto ⊂ F, which consists of Pareto-optimal models. A model f is said to be Pareto-optimal if there exists no g ∈ F such that R_j(g) ≤ R_j(f) for all j ∈ {1, ..., K} with R_j(g) < R_j(f) for some j; see, e.g., Sener & Koltun (2018, Definition 1). In other words, the I.E.D. assumption allows us to translate the invariance property of data distributions to the solution space. Since the Bayes predictors of all future test domains must lie within the Pareto set, which is a strict subset of the original hypothesis space, it remains possible to identify the optimal predictors of future test domains even without additional data from those domains, relying only on the I.E.D. assumption itself. Hence, given data from the training domains, it is sufficient for the purpose of generalization to maintain only solutions within this Pareto set during training.
Unfortunately, neither the elementary distributions nor the weights α are known in practice. Motivated by this theoretical insight, our DG layer presented in Section 3 is designed to uncover them from a multi-source training dataset Ds. While Lemma 1 shows the theoretical appeal of the I.E.D. assumption, we discuss below a situation in which it might hold in practice. The limitations will be discussed later in Section 6.
Real-world example. In this work, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution Pj . For a previously unseen individual, we can then determine the coefficients αsj and quantify each factor’s contribution without any information about the individual’s origin.
2.2 KERNEL MEAN EMBEDDING OF DISTRIBUTIONS
We leverage the KME of distributions (Berlinet & Thomas-Agnan, 2004; Smola et al., 2007; Muandet et al., 2017) to discover the elementary distributions and evaluate similarities between them. Let H be a reproducing kernel Hilbert space (RKHS) of real-valued functions on X with a reproducing kernel k : X × X → R (Schölkopf et al., 2001). The KME of a probability measure P ∈ P in the RKHS H is defined by the mapping ϕ(P) = µ_P := ∫_X k(x, ·) dP(x). We assume that the kernel k is characteristic, i.e., the mapping P ↦ µ_P is injective (Fukumizu et al., 2004; Sriperumbudur et al., 2008). This essential assumption ensures that there is no information loss when mapping the distribution into H. Given samples {x_1, ..., x_n} generated I.I.D. from P, µ_P can be approximated by the empirical KME µ̂_P = (1/n) ∑_{i=1}^{n} k(x_i, ·) = (1/n) ∑_{i=1}^{n} ϕ(x_i). We refer non-expert readers to Muandet et al. (2017) for a thorough review of this topic.
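As an illustration of how these quantities are estimated in practice, the sketch below computes empirical kernel mean embeddings with a Gaussian kernel and evaluates the squared MMD between two samples entirely through kernel evaluations. The kernel choice and variable names are illustrative, not prescribed by the method.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix with k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def squared_mmd(X, Y, sigma=1.0):
    """Squared MMD between the empirical KMEs of the samples X and Y."""
    k_xx = gaussian_kernel(X, X, sigma).mean()
    k_yy = gaussian_kernel(Y, Y, sigma).mean()
    k_xy = gaussian_kernel(X, Y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(1.0, 1.0, size=(200, 2))
print(squared_mmd(X, Y))  # strictly positive for samples from different distributions
```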
Challenges. Figure 1 depicts two challenges that come with our assumption of elementary distributions. First, since we do not have access to samples from the hidden elementary distributions, the elementary KMEs cannot be estimated directly from the samples at hand. To overcome this challenge, we instead seek a proxy KME µ_{V_j} := (1/N) ∑_{k=1}^{N} ϕ(v^j_k) = (1/N) ∑_{k=1}^{N} k(v^j_k, ·) for each elementary KME µ_{P_j} from a domain basis V_j, where V_j = {v^j_1, ..., v^j_N} ⊆ X for all j ∈ {1, ..., M}. Hence, the KME µ_{V_j} can be interpreted as the KME of the empirical probability measure P̂_{V_j} = (1/N) ∑_{k=1}^{N} δ_{v^j_k}. Here, we assume that M = K. The sets V_j are referred to as elementary domain bases. Intuitively, the elementary domain bases V_1, ..., V_M represent each elementary distribution by a set of vectors that mimic samples generated from the corresponding distribution. In Figure 1, V_1 and V_2 as well as their mappings in H visualize this first challenge. The second challenge is to learn the unknown similarity between a single sample x_i and an elementary domain V_j, which we denote by β_{ij}. Exploiting the advantage of KMEs, namely that this challenge can be tackled from a geometrical viewpoint, we quantify similarities between KMEs. For example, in Figure 1, the similarity between ϕ(x_i) and µ_{V_1} (β_{i1}) and µ_{V_2} (β_{i2}) could be quantified as their distance or angle. These similarity coefficients enable our Domain Generalization Layer to represent a convex combination of elementary domain-specific learning machines, commonly known as an ensemble. We introduce our layer in the following Section 3.
3 DOMAIN GENERALIZATION LAYER
This section aims to transfer the theoretical ideas presented in Section 2 into a deep learning framework. For the purpose of implementation, let x ∈ Rh×w denote the input data point and hξ : Rh×w → Re the feature extractor (FE) that maps the input into a low-dimensional representation x̃ ∈ Re. Then the prediction layer gθ : Re → Y infers the label y. To tackle the DG problem, we introduce a layer module called the gated domain unit (GDU). A GDU consists of three main components: (1) a similarity function γ : H×H → R that is the same for all elementary domains, (2) an elementary basis Vj and (3) a learning machine f(x̃, θj) for each elementary domain j ∈ {1, . . . ,M}. The architecture of the layer proposed herein is depicted in Figure 2.
Essentially, the process is as follows: First, the j-th GDU takes x̃_i as input and yields β_{ij} as output. The KME of each domain basis V_j is required in order to apply γ and compute the similarity between x̃_i and V_j. These KMEs are obtained by ϕ(V_j) := µ_{V_j} = (1/N) ∑_{k=1}^{N} ϕ(v^j_k) = (1/N) ∑_{k=1}^{N} k(v^j_k, ·). The GDU therefore has the task of allocating a coefficient β_{ij} for each elementary domain based on a similarity function γ. The function γ outputs the coefficients β_{ij} = γ(ϕ(x̃_i), µ_{V_j}), which represent similarities between the KMEs of the corresponding domain basis V_j and the input x̃_i. Theoretically speaking, µ_{V_j} and the feature mapping ϕ(x̃_i) are elements of the associated RKHS H, which allows us to evaluate similarities of non-linear features in a higher-dimensional feature space. Each GDU is then connected to a learning machine f(x̃_i, θ_j) that yields an elementary domain-specific inference. The final prediction of the layer is an ensemble of these learning machines, g_θ(x̃_i) = ∑_{j=1}^{M} β_{ij} f(x̃_i, θ_j), where θ = (θ_1, ..., θ_M). In Figure 2, we give an overview of how data is processed and information is stored in the GDU.
In summary, GDUs leverage the invariant elementary distribution (I.E.D.) assumption and represent our algorithmic contribution: The elementary domain bases are stored as weights in the layer. Storing this information as a weight matrix (i.e., a domain memory) allows us to learn the elementary domain bases efficiently using backpropagation. Hence, we avoid the dependency on problem-adaptive methods (e.g., domain-adversarial training) and on domain information (e.g., domain labels).
3.1 DOMAIN SIMILARITY MEASURES
For the similarity function γ, we consider two similarity measures H(ϕ(x̃), µVj ), namely the cosine similarity (CS) (Kim et al., 2019) and maximum mean discrepancy (MMD) (Borgwardt et al., 2006; Gretton et al., 2012). To ensure that the resulting coefficients βi lie on the probability simplex, we apply the kernel softmax function (Gao et al., 2019) and interpret its output as the similarity between an observation x̃ and an elementary domain basis Vi. We get
β_{ij} = γ(ϕ(x̃_i), µ_{V_j}) = exp(κ H(ϕ(x̃_i), µ_{V_j})) / ∑_{k=1}^{M} exp(κ H(ϕ(x̃_i), µ_{V_k})),   (1)

where κ > 0 is a positive softness parameter for the kernel softmax. Geometrically speaking, these similarities correspond to the angle and distance of two KMEs in the RKHS H. The function ϕ maps the observation x̃ and the domain basis V_j into H, meaning that ϕ(x̃) = µ_{δ_x̃} = k(x̃, ·) is the KME of a Dirac measure δ_x̃ and ϕ(V_j) = µ_{V_j} = (1/N) ∑_{k=1}^{N} k(v^j_k, ·).
CS. The CS function H(ϕ(x̃_i), µ_{V_j}) = ⟨ϕ(x̃_i), µ_{V_j}⟩_H / (∥ϕ(x̃_i)∥_H ∥µ_{V_j}∥_H) is used as an angle-based similarity.
MMD. We consider the MMD for calculating a distance-based similarity measure. The distance is then given as ∥ϕ(x̃i) − µVj∥H. Subsequently, the similarity function H is the negative MMD: H(ϕ(x̃i), µVj ) = −∥ϕ(x̃i)−µVj∥H. The intuition behind the negative MMD is to put higher weights on samples that are closer to the KME of an elementary domain basis.
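To make the data flow concrete, the following PyTorch sketch implements a simplified GDU layer: trainable elementary domain bases V_j, a Gaussian kernel, the CS or negative-MMD similarity defined above, the kernel softmax, and an ensemble of linear heads. This is a minimal illustration under our own simplifying choices (linear learning machines, fixed kernel bandwidth, no regularization), not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class GDULayer(nn.Module):
    """Simplified gated-domain-unit layer: a beta-weighted ensemble of M heads."""

    def __init__(self, feat_dim, num_classes, M=5, N=10, sigma=7.5, kappa=1.0,
                 similarity="mmd"):
        super().__init__()
        # Elementary domain bases V_j, stored as trainable weights ("domain memory").
        self.V = nn.Parameter(torch.randn(M, N, feat_dim))
        self.heads = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(M)])
        self.sigma, self.kappa, self.similarity = sigma, kappa, similarity

    def _kernel(self, A, B):
        # Gaussian kernel between two batches of feature vectors.
        d2 = torch.cdist(A, B) ** 2
        return torch.exp(-d2 / (2 * self.sigma ** 2))

    def forward(self, x):
        B, M = x.shape[0], self.V.shape[0]
        # <phi(x_i), mu_Vj>_H = (1/N) sum_k k(x_i, v_k^j)
        cross = torch.stack([self._kernel(x, self.V[j]).mean(dim=1) for j in range(M)], dim=1)
        # ||mu_Vj||_H^2 = (1/N^2) sum_{l,m} k(v_l^j, v_m^j)
        v_norm2 = torch.stack([self._kernel(self.V[j], self.V[j]).mean() for j in range(M)])
        x_norm2 = torch.ones(B, device=x.device)  # k(x, x) = 1 for the Gaussian kernel
        if self.similarity == "cs":
            H = cross / (x_norm2.sqrt()[:, None] * v_norm2.sqrt()[None, :])
        else:  # negative MMD
            H = -(x_norm2[:, None] + v_norm2[None, :] - 2 * cross).clamp(min=0).sqrt()
        beta = torch.softmax(self.kappa * H, dim=1)                   # kernel softmax, Eq. (1)
        preds = torch.stack([head(x) for head in self.heads], dim=1)  # (B, M, num_classes)
        return (beta.unsqueeze(-1) * preds).sum(dim=1), beta
```

In the full model, this layer would sit on top of the feature extractor h_ξ and be trained jointly with the regularizers described in Section 3.3.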
3.2 PROJECTION-BASED GENERALIZATION
For classification tasks, we introduce an alternative approach to infer the βi coefficients that is based on the idea of kernel sparse coding (Gao et al., 2010; 2013). Herein the goal is to find an approximated representation of each feature mapping ϕ(x̃i) using the elements of a dictionary {µVj}Mj=1. This approach allows us to approximate the feature mapping with these elements by ϕ(x̃i) ≈ ∑M j=1 βijµVj . In contrast to the aforementioned approaches, an elementary domain KME µVj does not necessarily represent the KME of an elementary distribution µPj . Therefore, we present another approach that aims to find a set {µVj}Mj=1 that permits µPs to be represented as a linear combination. Since P is assumed to be a convex combination of elementary distributions, we can find a linear combination to represent µPs by the domain KMEs µVj , as long as µPs ∈ HM := span{µVj | j = 1, . . . ,M}. The RKHS HM is a subspace of the actual RKHS H, which allows us to represent elements of H at least approximately in the subspace HM . By keeping the HM large, we gain more representative power. To make HM as large as possible, we have to ensure its spanning elements are linearly independent or, even better, orthogonal. Orthogonal KMEs ensure two desirable properties. First, pairwise orthogonal elements in HM guarantee no redundancy. Second, having orthogonal elements allows us to make use of the orthogonal projection. This projection geometrically yields the best approximation of ϕ(x̃) in HM . In other words, we can achieve the best possible approximation of the feature mapping by using its orthogonal components (see Proposition 3.1). The orthogonal projection is given by
Π_{H_M} : H → H_M,   ϕ(x̃) ↦ ∑_{j=1}^{M} (⟨ϕ(x̃), µ_{V_j}⟩_H / ∥µ_{V_j}∥²_H) µ_{V_j}.   (2)
Proposition 3.1. Let µ_P be the KME of a given mixture distribution P with µ_P ∈ span{µ_{V_j} | j = 1, ..., M}, where ⟨µ_{V_i}, µ_{V_j}⟩_H = 0 for all i ≠ j (i.e., the KMEs of the elementary domain bases are pairwise orthogonal). Then the function ∑_{j=1}^{M} ∥µ_P − β_j µ_{V_j}∥²_H is minimal if the coefficients are set to β*_j = ⟨µ_P, µ_{V_j}⟩_H / ∥µ_{V_j}∥²_H.
Proposition 3.1 can be used to approximate µ_P by projecting it onto H_M, i.e., µ_P ≈ ∑_{j=1}^{M} β_j µ_{V_j} with β_j = ⟨µ_P, µ_{V_j}⟩_H / ∥µ_{V_j}∥²_H. This best-approximation property is the main advantage of the assumption in Proposition 3.1 (i.e., having orthogonal KMEs) and thus a potential advantage of projection-based DG. Appendix A.2 provides the proof of Proposition 3.1.
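The projection coefficients of Proposition 3.1 can be evaluated purely through kernel evaluations, since ⟨ϕ(x̃), µ_{V_j}⟩_H = (1/N) ∑_k k(x̃, v^j_k) and ∥µ_{V_j}∥²_H = (1/N²) ∑_{l,m} k(v^j_l, v^j_m). The following NumPy sketch illustrates this; the Gaussian kernel and all names are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def projection_betas(x_tilde, bases, sigma=1.0):
    """beta_j = <phi(x), mu_Vj>_H / ||mu_Vj||_H^2 for one embedded sample x_tilde.

    bases: list of arrays V_j of shape (N, feat_dim), the elementary domain bases.
    """
    x = x_tilde[None, :]                                 # shape (1, feat_dim)
    betas = []
    for V in bases:
        inner = gaussian_kernel(x, V, sigma).mean()      # <phi(x), mu_Vj>_H
        norm2 = gaussian_kernel(V, V, sigma).mean()      # ||mu_Vj||_H^2
        betas.append(inner / norm2)
    return np.array(betas)
```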
3.3 MODEL TRAINING
For model training, we adapt the domain adaptation (DA) framework from Zhuang et al. (2021). Our learning objective is thus formalized as L(g) + λ_D Ω_D(∥g∥_H), and the goal of training can be described in terms of its two components. Consider a batch of training data {x_1, ..., x_b}, where b is the batch size. During training, we minimize the task loss
L(g) = (1/b) ∑_{i=1}^{b} L(ŷ_i, y_i) = (1/b) ∑_{i=1}^{b} L(∑_{j=1}^{M} γ(ϕ(x̃_i), µ_{V_j}) f_j(x̃_i), y_i)
for the underlying task and the respective batch. In addition, our objective is that the model learns to distinguish between different domains. Thus, the regularization Ω_D is introduced to control the domain bases. In our case, we require Ω_D to ensure that the KMEs of the elementary domain bases are able to represent the KMEs of the elementary domains. Therefore, we minimize the MMD between the feature mappings ϕ(x̃_i) and the associated representations ∑_{j=1}^{M} β_{ij} µ_{V_j}, where β_{ij} = γ(ϕ(x̃_i), µ_{V_j}). Hence, the regularization Ω_D = Ω^OLS_D is defined as
Ω^OLS_D(∥g∥_H) = (1/b) ∑_{i=1}^{b} ∥ϕ(x̃_i) − ∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H
(see Appendix B.2 for details). The intuition is to represent each feature mapping ϕ(x̃_i) by the domain KMEs µ_{V_j}; thus, we minimize the MMD between the feature map and a combination of the µ_{V_j}. The minimum of this regularization can be interpreted as the ordinary least squares solution of a regression of ϕ(x̃_i) onto the components of H_M. In other words, we want to ensure that the bases V_j are contained in the feature mappings ϕ(x̃_i). In the particular case of projection, we additionally want the KMEs of the elementary domains to be orthogonal to ensure high expressive power. For this purpose, the additional term Ω^⊥_D is introduced to enforce the desired orthogonality. Considering a kernel function with k(x, x) = 1, orthogonality requires the Gram matrix K_{ij} = ⟨µ_{V_i}, µ_{V_j}⟩_H to be close to the identity matrix I. A variety of methods for regularizing matrices is available (Xie et al., 2017; Bansal et al., 2018). A well-known method to encourage orthogonality is the soft orthogonality (SO) regularization Ω^⊥_D = λ∥K − I∥²_F (Bansal et al., 2018). As pointed out by Bansal et al. (2018), the spectral restricted isometry property (SRIP) and mutual coherence (MC) regularizations can be promising alternatives to SO and are therefore additionally implemented in the DG layer. Hence, in the case of projection, the regularization is given by Ω_D(∥g∥_H) = λ_OLS Ω^OLS_D(∥g∥_H) + λ_ORTH Ω^⊥_D(∥g∥_H), with λ_OLS, λ_ORTH ≥ 0.
Lastly, sparse coding is an efficient technique to find the least possible basis to recover the data subject to a reconstruction error (Olshausen & Field, 1997). Several such applications yield strong performances, for example in the field of computer vision (Lee et al., 2007; Yang et al., 2009). Kernel sparse coding transfers the reconstruction problem of sparse coding into H by using the mapping ϕ, and, by applying a kernel function, the reconstruction error is quantified as the inner product (Gao et al., 2010; 2013). To ensure sparsity, we apply the L1-norm on the coefficients β and add ΩL1D (∥γ∥) := ∥γ(ϕ(x̃i), µVj )∥1 to the regularization term ΩD with the corresponding coefficient λL1 . Appendix B.3 gives a visual overview of the model training.
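Putting the pieces together, the sketch below evaluates the training objective for one mini-batch: a task loss plus the Ω_OLS term, the soft-orthogonality term Ω_⊥ on the domain Gram matrix, and the L1 sparsity term on the coefficients. It assumes that the kernel quantities ⟨ϕ(x̃_i), µ_{V_j}⟩_H (array `cross`) and K_{jk} = ⟨µ_{V_j}, µ_{V_k}⟩_H (matrix `K`) have already been computed from kernel evaluations (see Appendix B.2); the λ values are purely illustrative defaults.

```python
import numpy as np

def dg_regularizers(beta, cross, K):
    """OLS, orthogonality, and L1 penalties for one mini-batch.

    beta : (b, M) coefficients beta_ij
    cross: (b, M) inner products <phi(x_i), mu_Vj>_H
    K    : (M, M) Gram matrix K_jk = <mu_Vj, mu_Vk>_H
    """
    b, M = beta.shape
    # Omega_OLS, dropping the constant ||phi(x_i)||^2 term (see Appendix B.2).
    omega_ols = np.mean(-2 * (beta * cross).sum(1)
                        + np.einsum("bj,jk,bk->b", beta, K, beta))
    # Soft orthogonality: push the domain Gram matrix towards the identity.
    omega_orth = np.linalg.norm(K - np.eye(M)) ** 2
    # L1 sparsity on the coefficients.
    omega_l1 = np.abs(beta).sum(1).mean()
    return omega_ols, omega_orth, omega_l1

def total_loss(task_loss, beta, cross, K, lam_ols=0.01, lam_orth=0.001, lam_l1=0.0):
    o_ols, o_orth, o_l1 = dg_regularizers(beta, cross, K)
    return task_loss + lam_ols * o_ols + lam_orth * o_orth + lam_l1 * o_l1
```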
4 RELATED WORK
DG, also known as out-of-distribution (OOD) generalization, is among the hardest problems in machine learning (Blanchard et al., 2011; Muandet et al., 2013; Arjovsky et al., 2019). In contrast, DA, which predates DG and OOD problems, deals with a slightly simpler scenario in which some data from the test distribution are available (Ganin et al., 2015). Hence, based on the available data, the task is to develop learning machines that transfer knowledge learned in a source domain specifically to the target domain. Approaches pursued in DA can be grouped primarily into (1) discrepancy-based
DA (Sun et al., 2016; Peng & Saenko, 2018; Ben-David et al., 2010; Fang et al., 2020; Tzeng et al., 2014; Long et al., 2015; Baktashmotlagh et al., 2016) (2) adversary-based DA (Tzeng et al., 2017; Liu & Tuzel, 2016; Ganin et al., 2015; Long et al., 2018), and (3) reconstruction-based DA (Bousmalis et al., 2016; Hoffman et al., 2018b; Kim et al., 2017; Yi et al., 2017; Zhu et al., 2017; Ghifary et al., 2014). In DA, learning the domain-invariant components requires access to unlabeled data from the target domain. Unlike problems in DA, where the observed data from the test domains can be used to find the most appropriate invariant structures (Ben-David et al., 2010), the lack thereof in DG calls for a postulation of invariant structure that will enable the OOD generalization.
To enable generalization to unseen domains without any access to data from them, researchers have made significant progress in the past decade and developed a broad spectrum of methodologies (Zhou et al., 2021a;c; Li et al., 2019; Blanchard et al., 2011). For thorough review see, e.g., Zhou et al. (2021a); Wang et al. (2021). Existing works can be categorized into methods based on domaininvariant representation learning (Muandet et al., 2013; Li et al., 2018b;d), meta-learning (Li et al., 2018a; Balaji et al., 2018), data augmentation (Zhou et al., 2020), to name a few. Another recent stream of research from a causal perspective includes invariant risk minimization (Arjovsky et al., 2019), invariant causal prediction (Peters et al., 2016), and causal representation learning (Schölkopf et al., 2021). The overall motivation here is to learn the representation that is robust to domain-specific spurious correlations. In other words, it is postulated that “causal” features are the right kind of invariance that will enable OOD generalization. Despite the successful applications, DG remains a challenging research gap.
We differentiate our work from existing ones as follows. First, we postulate the existence of domain-invariant structure at the distributional level rather than at the level of the data representation, which is the common assumption in DG. This is motivated by theoretical results (Mansour et al., 2009; Hoffman et al., 2018a) stating that a distribution-weighted combination of source hypotheses represents the ideal hypothesis. Furthermore, our distributional assumption, as we argued in Section 2, generalizes previous work that proposes to use domain-specific knowledge to tackle the problem of DG, starting from a more elementary setting. For example, approaches such as Piratla et al. (2020); Monteiro et al. (2021) can be compared to our GDUs as domain-specific predictors in the special case where each elementary domain represents a single source domain. However, GDUs do not assume the existence of a single common classifier for all domains; instead, they provide a combination of multiple common classifiers shared between different source domains.
Second, we incorporate the I.E.D. assumption directly into our model’s architecture, as shown in Figure 2. Designing effective architectures for DG has been largely neglected (Zhou et al., 2020, Sec. 4.1). Last, we do not assume access to domain information. Although obtaining such information can be difficult in practice, see our short discussion in Appendix C.4 (Niu et al., 2017), DG methods that can deal with their absence (e.g., Huang et al. (2020); Carlucci et al. (2019); Li et al. (2018c)) are yet scarce (Zhou et al., 2020, Sec. 4.2).
5 EXPERIMENTS
Since ERM is one of the strongest baselines in DG (Gulrajani & Lopez-Paz, 2020; Koh et al., 2021), we first compare our approach to ERM and ensemble learning (Table 2 and Appendix C.1). Second, we benchmark our approach against state-of-the-art DG methods (e.g., CORAL, LISA, IRM, FISH, Group DRO) on image, graph, and text data (Table 3 and Appendix C.4). Third, we analyse the robustness of GDUs against distribution shifts that occur in daily clinical practice (Table 12 and Appendix C.3). Finally, in Appendix C.2, we conduct an ablation study focusing on the representation learned during training (Appendix C.2.2). In our experiments, we distinguish two modes of training the DG layer: fine tuning (FT), where we extract features using a pre-trained model, and end-to-end training (E2E), where the FE and the DG layer are jointly trained1.
5.1 PROOF-OF-CONCEPT BASED ON DIGITS CLASSIFICATION
Following Feng et al. (2020) among others, we create a multi-source dataset by combining five publicly available digits image datasets, namely MNIST (Lecun et al., 1998), MNIST-M (Ganin & Lempitsky, 2015), SVHN (Netzer et al., 2011), USPS, and Synthetic Digits (SYN) (Ganin & Lempitsky, 2015). The task is to classify digits between zero and nine. Each of these datasets is considered an out-of-training target domain which is inaccessible during training, and the remaining four are the source domains. Details are given in Appendix C.1. Table 2 summarizes the results for the most challenging out-of-training target domain, namely MNIST-M. In Appendix C.1, we provide the results on the remaining target domains in Table 7 and a discussion of heuristics for choosing hyperparameters for our GDUs. Our method noticeably improves the mean accuracy for all datasets and decreases the standard deviation in comparison to the ERM and ensemble baselines, making the results more stable across the ten iterations reported.

1 All source code is made available on GitHub.
Table 2: Results of the Digits experiment. The mean (standard deviation) accuracy over ten runs on the target domain MNIST-M is reported. Best results are bold.

        Method       MNIST-M
  ERM   Single       63.00 (3.20)
        Ensemble     62.87 (1.50)
  FT    CS           68.55 (0.80)
        MMD          68.62 (0.70)
        Projection   68.56 (0.91)
  E2E   CS           69.25 (0.61)
        MMD          69.04 (0.83)
        Projection   68.67 (0.98)
We also compare our methods with related work that uses domain information and data augmentation, based on the results of Li et al. (2021) (Table 9 in Appendix C.1). Although data augmentation is a comparatively strong approach in DG, and although we do not use domain information, we obtain results comparable to the baselines reported by Li et al. (2021).
Ablation study. We chose the Digits dataset to analyze each component of our DG layer (first paragraph of Appendix C.1 and Appendix C.2). We (A) vary M and N (Figure 9) and the strength of the regularization terms (Figures 6, 7, and 8) to assess the sensitivity of the DG layer to the choice of hyperparameters, and (B) visualize the output of the FE (Figure 11). Our ablation study in (A) reveals stable results across different sets of hyperparameters. While the layer is not sensitive to the choice of regularization strength, we recommend not omitting the regularization completely, although the computational expense decreases without the orthogonal regularization. As an illustration in (B), we project the output of the FE trained with a dense layer (ERM) and with the DG layer using t-SNE (t-distributed stochastic neighbor embedding). The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
5.2 WILDS BENCHMARK
To challenge the I.E.D. assumption and the OOD generalization capabilities of the GDUs, we use WILDS, a curated set of real-world experiments for benchmarking DG methods (Koh et al., 2021). Furthermore, WILDS is a semi-synthetic benchmark that operates under assumptions similar to the source component shift (Koh et al., 2021). We consider the following eight datasets, which represent real-world DG tasks: Camelyon17, FMoW, Amazon, iWildCam, RxRx1, OGB-MolPCBA, CivilComments, and PovertyMap. We closely follow Koh et al. (2021) for the experiments. Details on datasets and benchmark methods are given in Appendix C.4. We present our benchmarking in Table 3. Our results are achieved out-of-the-box (i.e., with default parameters), since hyperparameter optimization has a substantial impact on generalization performance (Gulrajani & Lopez-Paz, 2020) and we aim to highlight the improvements solely attributable to our GDUs.
First, we observe that the benchmark methods have strengths and weaknesses across the different datasets, and each of them falls below ERM at least once. In contrast, although GDUs show similar behavior across the datasets, performing very well on some (e.g., FMoW, PovertyMap), they do not fall below ERM in any of the GDU experiments conducted. In addition, the baselines require domain information. Our approach requires less information, yet achieves results comparable to the benchmarks.
5.3 ECG EXPERIMENT
The PhysioNet/Computing in Cardiology Challenge 2020 (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020) aims to identify clinical diagnoses from 12-lead ECG recordings from 6 different databases. This publicly available pooled dataset contains 43,101 recordings sampled with various sampling frequencies and lengths. Each recording is labeled as having one or more of 24 cardiac abnormalities; hence, the task is to perform a multi-label binary classification. For our experiment, we iterate over the databases, taking one at a time as the test domain while utilizing
the remaining five databases for training. The performance was measured according to the original PhysioNet challenge score. This generalized intersection-over-union score assigns partial credit to misdiagnoses that result in similar treatments or outcomes. The score is then adjusted for a solution that always selects the normal/majority class and normalized for the perfect solution. Therefore, the score can have negative values and a best possible score of 1.
Table 4 reports results for the ECG experiments (see Appendix C.3 for details). For this clinical time-series data, we observe an improvement in mean score and a reduction in standard deviation over the ERM and ERM ensemble baselines across all DG tasks. We attribute poorer performance for the PTB dataset to the fact that it contains considerably longer recordings than other datasets (except for INCART which, however, contains only 75 samples) and a higher sampling rate (1000Hz vs. 500Hz and 257Hz). The negative challenge score for the PTB-XL dataset is due to the presence of previously unobserved labels in other datasets as well as a considerably smaller amount of data for training since the PTB-XL dataset comprises the majority of all samples (21,837 out of 43,101).
Table 4 (excerpt): PhysioNet challenge scores, mean (standard deviation), of the FT variants across the six held-out test databases.
  FT CS    0.1830 (0.0061)   0.2950 (0.0035)   0.1595 (0.0313)   -8.8802 (0.1069)   -0.1932 (0.0168)   0.1853 (0.0036)
  FT MMD   0.1877 (0.0077)   0.3011 (0.0035)   0.2100 (0.0413)   -8.8082 (0.1458)   -0.1567 (0.0211)   0.1919 (0.0036)
6 CONCLUSION AND DISCUSSIONS
We introduced the I.E.D. assumption, postulating that real-world distributions are composed of elementary distributions that remain invariant across different domains and showed that it implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. Empirical results based on real-world data support the practicality of the I.E.D. assumption and that we can learn such a representation. Further, we presented a modular neural network layer consisting of Gated
Domain Units (GDUs) that leverage the I.E.D. assumption. Our GDUs can substantially improve the downstream performance of learning machines in real-world DG tasks. Across our experiments, we observed that for some datasets FT is better than E2E and vice versa. In E2E training, the feature extractor (encoder) is jointly trained with GDUs. Hence, the latent representation is stochastic during training, meaning that we have variability in the representation fed into GDUs between epochs. In contrast, in FT, the feature extractor is pretrained and always produces the same embedding. Especially with large feature extractors such as ResNet-50, learning the elementary domains can be more effective when we avoid any stochasticity in the latent representation.
Limitations. A major limitation is that we do not yet provide theoretical evidence that the I.E.D. assumption holds in practice. We aim to expand the scope of the theoretical understanding of the I.E.D. assumption and the GDUs. In addition, the particular theoretical setting of Albuquerque et al. (2019) (i.e., each elementary domain represents a source domain) seems promising for extending their generalization guarantee to cases where our I.E.D. assumption holds. Second, our GDU layer induces additional computational overhead due to the regularization and a model size that grows with the number of elementary domains. Notably, our improvement is achieved with a relatively small number of elementary domains, indicating that the increased complexity is not a necessary consequence of applying the DG layer. Likewise, the achieved results are not merely a consequence of increased model capacity, as the ensemble baseline shows.
Future work. We expect the I.E.D. assumption and GDUs to be adapted further, yielding novel applications that tackle DG. For example, we suggest dynamically increasing the number of elementary domains during learning until their distributional variance, as a measure of their heterogeneity, reaches a plateau. Hence, one would learn the number of elementary domains instead of fixing it prior to training.
Appendices
Table of Contents
A  Proofs
   A.1  Proof of Lemma 1
   A.2  Proof of Proposition 3.1

B  Details on the Gated Domain Units
   B.1  Real-world example: Visualizations
   B.2  Detailed View of the Regularization Term Ω^OLS_D
   B.3  Visualization of DG Layer

C  Experiments
   C.1  Digits Experiment
   C.2  Ablation Study
   C.3  ECG Experiment
   C.4  WILDS Benchmarking Experiments
A PROOFS
A.1 PROOF OF LEMMA 1
Proof. The result holds trivially for K = 1. For K ≥ 2, the I.E.D. assumption gives P^s(X, Y) = ∑_{j=1}^{K} α_j P_j(X, Y) for some α ∈ ∆_K. Then, we can write the risk functional for each f ∈ F as

R(f) = ∫ L(y, f(x)) dP^s(x, y) = ∫ L(y, f(x)) d(∑_{j=1}^{K} α_j P_j(x, y)) = ∑_{j=1}^{K} α_j ∫ L(y, f(x)) dP_j(x, y) = ∑_{j=1}^{K} α_j R_j(f),

where R_j : F → R_+ is the elementary risk functional associated with the elementary distribution P_j(X, Y). Hence, the Bayes predictors satisfy

f* ∈ argmin_{f∈F} R(f) = argmin_{f∈F} ∑_{j=1}^{K} α_j R_j(f).   (A.3)

Since the right-hand side of Equation (A.3) corresponds to the linear scalarization of a multi-objective function (R_1, ..., R_K), its solution (i.e., a stationary point) is Pareto-optimal with respect to these objective functions (Ma et al., 2020, Definition 3.1); see also Hillermeier (2001a;b). That is, the Bayes predictors for a data distribution that satisfies the I.E.D. assumption must belong to the Pareto set F_Pareto := {f* : f* = argmin_{f∈F} ∑_{j=1}^{K} α_j R_j(f), α ∈ ∆_K} ⊂ F.
A.2 PROOF OF PROPOSITION 3.1
Proof. Suppose we have a representation

µ_P = ∑_{j=1}^{M} β_j µ_{V_j},   with ⟨µ_{V_i}, µ_{V_j}⟩_H = 0 for all i ≠ j,

i.e., {µ_{V_1}, ..., µ_{V_M}} are pairwise orthogonal. We want to minimize the MMD by minimizing

∥µ_P − ∑_{j=1}^{M} β_j µ_{V_j}∥²_H
  = ⟨µ_P, µ_P⟩_H − 2⟨µ_P, ∑_{j=1}^{M} β_j µ_{V_j}⟩_H + ⟨∑_{i=1}^{M} β_i µ_{V_i}, ∑_{j=1}^{M} β_j µ_{V_j}⟩_H
  = ∥µ_P∥²_H − 2 ∑_{j=1}^{M} β_j ⟨µ_P, µ_{V_j}⟩_H + ∑_{i=1}^{M} ∑_{j=1}^{M} β_i β_j ⟨µ_{V_i}, µ_{V_j}⟩_H
  = ∥µ_P∥²_H − 2 ∑_{j=1}^{M} β_j ⟨µ_P, µ_{V_j}⟩_H + ∑_{j=1}^{M} β_j² ∥µ_{V_j}∥²_H,

where the last step uses the pairwise orthogonality ⟨µ_{V_i}, µ_{V_j}⟩_H = δ_{ij} ∥µ_{V_j}∥²_H. Defining

Φ(β) := ∥µ_P − ∑_{j=1}^{M} β_j µ_{V_j}∥²_H,

we can simply find the optimal β_j by setting the partial derivative to zero:

∂Φ/∂β_j = −2⟨µ_P, µ_{V_j}⟩_H + 2β_j ∥µ_{V_j}∥²_H = 0
  ⇔ β_j ∥µ_{V_j}∥²_H = ⟨µ_P, µ_{V_j}⟩_H
  ⇔ β*_j = ⟨µ_P, µ_{V_j}⟩_H / ∥µ_{V_j}∥²_H.

Please note that the function Φ is convex, so this stationary point is the minimizer.
B DETAILS ON THE GATED DOMAIN UNITS
B.1 REAL-WORLD EXAMPLE: VISUALIZATIONS
As written in Section 2.1, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider that we aim to learn to predict the risk of developing Diabetes from laboratory data from Europe and then infer the risk from data from the United States of America. Naturally, factors influencing the data-generating process may change, such as the level of physical activity and nutritional habits. While, to a certain degree, these common factors remain invariant across continents, each of these factors’ contributions may differ. In terms of our assumptions, we model each of these factors with a corresponding elementary distribution. Figure 3 depicts our assumption and how it differs from existing works².
To exploit this assumption in out-of-distribution (OOD) generalization, we developed a modular neural network layer that consists of so-called Gated Domain Units (GDUs). In Figure 4, we visualized the fundamental concept of the GDUs. Each GDU learns an embedding of an individual elementary domain that allows us to encode the domain similarities during the training. During inference, the GDUs compute similarities between observation and each of the corresponding elementary distributions, which are then used to form a weighted ensemble of learning machines. In other words, for a previously unseen individual, we aim to determine the coefficients and quantify each factor’s contribution without any information about the individual’s origin.
B.2 DETAILED VIEW OF THE REGULARIZATION TERM ΩOLSD
First, consider the following single term ∥ϕ(x̃_i) − ∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H, which can be expanded as

∥ϕ(x̃_i) − ∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H = ∥ϕ(x̃_i)∥²_H − 2 ⟨ϕ(x̃_i), ∑_{j=1}^{M} β_{ij} µ_{V_j}⟩_H + ∥∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H,   (B.1)

where we refer to the three summands as terms (1), (2), and (3), respectively.
AD (1):
We begin with term (1) and write ∥ϕ(x̃_i)∥²_H = ⟨ϕ(x̃_i), ϕ(x̃_i)⟩_H = k(x̃_i, x̃_i). We could evaluate this term using the kernel function k for each data point in the batch of size b. However, since this term does not depend on the elementary domains {V_1, ..., V_M}, it is unnecessary to compute this value to minimize the penalty. Thus, we obtain the same minimizer by omitting ∥ϕ(x̃_i)∥²_H from the regularization.

2 Of note, Figure 3 is a completely fictive example, and we do not want to make medical implications in any way.
AD (2):
Term (2) can be expressed as

⟨ϕ(x̃_i), ∑_{j=1}^{M} β_{ij} µ_{V_j}⟩_H = ∑_{j=1}^{M} β_{ij} ⟨ϕ(x̃_i), µ_{V_j}⟩_H.   (B.2)

Implementation-wise, the evaluation of this term requires the calculation of the inner product ⟨ϕ(x̃_i), µ_{V_j}⟩_H. Since our CS and projection-based methods already involve this inner product to determine the coefficients β_{ij}, we pre-compute ⟨ϕ(x̃_i), µ_{V_j}⟩_H once per mini-batch and store it during training to avoid multiple calculations of the same term.

Moreover, the projection-based method does not apply the softmax and has a linear form. Therefore, term (2) can be simplified even further:

⟨ϕ(x̃_i), ∑_{j=1}^{M} β_{ij} µ_{V_j}⟩_H = ∑_{j=1}^{M} β_{ij} ⟨ϕ(x̃_i), µ_{V_j}⟩_H   (B.3)
  = ∑_{j=1}^{M} (⟨ϕ(x̃_i), µ_{V_j}⟩_H / ∥µ_{V_j}∥²_H) ⟨ϕ(x̃_i), µ_{V_j}⟩_H   (B.4)
  = ∑_{j=1}^{M} ⟨ϕ(x̃_i), µ_{V_j}⟩²_H / ∥µ_{V_j}∥²_H.   (B.5)
AD (3):
Last, we express term (3) as

∥∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H = ∑_{j=1}^{M} ∑_{k=1}^{M} β_{ij} β_{ik} ⟨µ_{V_j}, µ_{V_k}⟩_H,   (B.6)

and calculate the inner product of the domains ⟨µ_{V_j}, µ_{V_k}⟩_H by

⟨µ_{V_j}, µ_{V_k}⟩_H = (1/N²) ∑_{l=1}^{N} ∑_{m=1}^{N} ⟨ϕ(v^l_j), ϕ(v^m_k)⟩_H   (B.7)
  = (1/N²) ∑_{l=1}^{N} ∑_{m=1}^{N} k(v^l_j, v^m_k) =: K_{jk},   (B.8)

where N represents the number of vectors per domain basis. Note that this term does not depend on the input data x_i; hence, the matrix K_{jk} can be calculated once at the beginning of an optimization step and stored to be re-used for all the data points of a batch.

Combining Equation (B.6) and Equation (B.8) yields

∥∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H = ∑_{j=1}^{M} ∑_{k=1}^{M} β_{ij} β_{ik} ⟨µ_{V_j}, µ_{V_k}⟩_H   (B.9)
  = (1/N²) ∑_{j=1}^{M} ∑_{k=1}^{M} β_{ij} β_{ik} ∑_{l=1}^{N} ∑_{m=1}^{N} k(v^l_j, v^m_k)   (B.10)
  = ∑_{j=1}^{M} ∑_{k=1}^{M} β_{ij} β_{ik} K_{jk}   (B.11)
  = β_i^T K β_i.   (B.12)
As a final step, we use the results for terms (1), (2), and (3) to obtain the desired regularization term

Ω^OLS_D = (1/b) ∑_{i=1}^{b} ∥ϕ(x̃_i) − ∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H   (B.13)
  = (1/b) ∑_{i=1}^{b} ( ∥ϕ(x̃_i)∥²_H − 2⟨ϕ(x̃_i), ∑_{j=1}^{M} β_{ij} µ_{V_j}⟩_H + ∥∑_{j=1}^{M} β_{ij} µ_{V_j}∥²_H ).   (B.14)

As mentioned above, ∥ϕ(x̃_i)∥²_H is independent of the elementary domains and thus a constant in the regularization. Hence, we can exclude this term, which avoids additional computational effort.
B.3 VISUALIZATION OF DG LAYER
Figure 5 depicts the layout of our DG layer.
C EXPERIMENTS
In this section, we provide a detailed description of the DG experiments presented in Section 5. Our Digits and ECG experiments are implemented using TensorFlow 2.4.1 and TensorFlow Probability 0.12.1. For the WILDS benchmarking we use PyTorch (version 1.11.0). All source code will be made available on GitHub: https://github.com/ (TensorFlow) and https://github.com/ (PyTorch). Overall, our experiments aim to show the validity of the invariant elementary distribution (I.E.D.) assumption and the Gated Domain Units (GDUs).
For the DG layer, we considered two modes of model training: fine tuning (FT) and end-to-end training (E2E). In FT scenario, we first pre-train the FE in the ERM single fashion. Then, we extract features using the pre-trained model and pass them to the DG layer for training the latter. For the E2E training, however, the whole model including the FE and DG layer is trained jointly from the very beginning.
C.1 DIGITS EXPERIMENT
Our experiment setup is closely related to Peng et al. (2019); Feng et al. (2020); Zhang et al. (2020); Zhao et al. (2018). Each dataset, except USPS, is split into training and test sets of 25,000 and 9,000 images, respectively. For USPS, we take the whole dataset for the experiment since it contains only 9,298 images³. Our experimental setup regarding datasets, data loader, and FE is based on existing work (Feng et al., 2020; Peng et al., 2019). The structure of the FE is summarized in Table 5, and the subsequent learning machine is a dense layer.
In the Empirical Risk Minimization (ERM) single experiment, we add a dense layer with 10 outputs (activation=tanh) as a classifier to the FE. In the ERM ensemble experiment, we add M classification heads (dense layers with 10 outputs and tanh activation each) to the FE and average their outputs for the final prediction. This sets a baseline for our DG layer to show the performance gain over an ERM model with the same number of learning machines.
For training, we resorted to the Adam optimizer with a learning rate of 0.001. We used early stopping and selected the best model weights according to the validation accuracy. For the validation data, we used the combined test splits only of the respective source datasets. The batch size was set to 512. Although the DG layer requires more computation resources than the ERM models, all digits experiments were conducted on a single GPU (NVIDIA GeForce RTX 3090).
Heuristics for the main parameters of the DG layer. From a practical perspective, our layer requires choosing two main hyperparameters: the number of elementary domains M and, since we use the characteristic Gaussian kernel, the corresponding bandwidth parameter σ. The parameter M determines the size of the ensemble of learning machines and, thus, for deep learning models, the overall network size. As a heuristic for choosing M, we suggest clustering the output of a pre-trained FE. In the following, we provide an example: we pre-trained the FE for the test domain MNIST-M and passed the source data through this FE, whose output we clustered with the k-means algorithm. Subsequently, we analysed three different metrics (Calinski-Harabasz score, Davies-Bouldin score, and Silhouette score) to select the optimal number of clusters as the basis for choosing M. All scores agreed on four to five clusters. Therefore, we set M to five and, as reported in Table 2 in Section 5, observed strong results when generalizing to the unseen test domain MNIST-M.
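A minimal scikit-learn sketch of this heuristic is given below; the array `features` is assumed to contain precomputed outputs of a pre-trained FE, and all other names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def suggest_num_elementary_domains(features, candidates=range(2, 11), seed=0):
    """Score k-means clusterings of the FE output to guide the choice of M."""
    scores = {}
    for m in candidates:
        labels = KMeans(n_clusters=m, random_state=seed, n_init=10).fit_predict(features)
        scores[m] = {
            "calinski_harabasz": calinski_harabasz_score(features, labels),  # higher is better
            "davies_bouldin": davies_bouldin_score(features, labels),        # lower is better
            "silhouette": silhouette_score(features, labels),                # higher is better
        }
    return scores

# Example: features = pretrained_fe.predict(x_source), shape (n_samples, feat_dim)
```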
3We used the digits data from https://github.com/FengHZ/KD3A [last accessed on 2022-05-17, available under MIT License.] published in Feng et al. (2020).
As for the parameter σ, we resort to the median heuristic proposed in Muandet et al. (2016), that is, σ² = median{∥x̃_i − x̃_j∥² : i, j = 1, ..., n}. While both heuristics require a pre-trained FE, cross-validation can act as a reasonable alternative. The hyperparameters relevant for the DG layer are summarized in Table 6. In the FT setting, we applied the median heuristic presented above to estimate the σ of the Gaussian kernel function, where the estimator is denoted σ̂. Since the median heuristic is not applicable in the E2E scenario, σ was fixed to 7.5 for E2E.
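The median heuristic itself is a short computation over pairwise distances of the embedded source data; a hedged NumPy sketch follows (here the zero diagonal is excluded before taking the median, and for large datasets one would subsample the pairs).

```python
import numpy as np

def median_heuristic_sigma2(features):
    """sigma^2 as the median of squared pairwise distances of the embedded data."""
    sq_dists = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    off_diag = sq_dists[~np.eye(len(features), dtype=bool)]  # drop the zero diagonal
    return np.median(off_diag)
```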
Note that our approach to choose the relevant parameters was kept very general to show the feasibility of the I.E.D. assumption and the generalization ability of GDUs and, most importantly, to provide easy-to-reproduce results. During training, additional epoch metrics can be subscribed using our custom DG layer callback, which may help to choose the model parameters. Furthermore, we observed that the elementary domains become naturally orthogonal during the experiments, and thus, we set λORTH relatively small. Since the orthogonal regularization puts additional computational burden, one could omit this term completely to speed up training.
Digit-DG Benchmark In previous research, the aforementioned digits data is not only used for domain adaptation (DA), but also for domain generalization (DG) methods. For the latter, Zhou et al. (2021b) and Li et al. (2021) introduced Digit-DG dataset and the evaluation protocol to benchmark seven DG methods and ERM 4. Unlike the Digits experiment described above, Digit-DG dataset from Zhou et al. (2021b) and Li et al. (2021) consists of only four datasets (without USPS) and a different FE summarized in Table 8. Therefore, we follow their instructions to conduct a fair comparison and ensure reproducibility. For the hyper-parameters, however, we kept the same values that we used for the Digits experiment, see Table 6.
4Results were reported by Zhou et al. (2021b) and Li et al. (2021). Of note, both authors did not report the standard deviation on their results.
As a first method, we consider the CCSA (Classification and Contrastive Semantic Alignment) method, which learns a domain-invariant representation by utilizing the CCSA loss (Motiian et al., 2017). Second, MMD-AAE (Maximum Mean Discrepancy-based Adversarial Autoencoders) extends adversarial autoencoders with a maximum mean discrepancy regularization to learn a domain-invariant feature representation (Li et al., 2018b). CrossGrad (Cross-Gradient) augments data by perturbing the input space using the cross-gradients of a label and a domain predictor (Shankar et al., 2018). Another augmentation-based DG method is L2A-OT (Learning to Augment by Optimal Transport) (Zhou et al., 2021b); specifically, a data generator trained to maximize the optimal transport distance between source and pseudo domains is used to augment the source data. All aforementioned methods rely on the availability of domain information such as domain labels. To benchmark our layer against a method for DG without domain information, we resort to the JiGen (Jigsaw puzzle based Generalization) method (Carlucci et al., 2019), which introduces an auxiliary loss for solving a jigsaw task during training. Further, we use the adaptive and non-adaptive stochastic feature augmentation methods (SFA-A and SFA-S, respectively) proposed by Li et al. (2021). In principle, both methods augment the latent feature embedding of an FE using random noise.
Our results are summarized in Table 9. As noted by Li et al. (2021), it is challenging to outperform augmentation-based DG methods. In addition, SFA-A and SFA-S are computationally light (i.e., they only add random noise to the feature embedding) and do not require domain information (Li et al., 2021). Nevertheless, our layer achieves competitive results even against the strongest baselines in all DG tasks without requiring domain information.
C.2 ABLATION STUDY
C.2.1 MAIN COMPONENTS OF THE GATED DOMAIN UNIT
We chose the Digits dataset to conduct an ablation study, which is organized as follows: (1) ablation of the regularization terms presented in Section 3, (2) the effect of the orthogonal regularization for projection-based generalization, and (3) the effect on the FE's output.
As a reminder, we introduced the regularization to be dependent on the form of generalization (i.e., domain similarity measures or projection-based generalization in Section 3). For the domain similarity measure case, the regularization is
Ω_D(∥g∥_H) = λ_OLS Ω^OLS_D(∥g∥_H) + λ_L1 Ω^L1_D(∥γ∥),   (C.1)

where λ_OLS, λ_L1 ≥ 0. In the case of projection, the regularization is given by

Ω_D(∥g∥_H) = λ_OLS Ω^OLS_D(∥g∥_H) + λ_ORTH Ω^⊥_D(∥g∥_H),   (C.2)

with λ_OLS, λ_ORTH ≥ 0. Although one can additionally choose the sparse regularization in projection-based generalization, we focus in the ablation study on the two main regularization terms, namely the OLS and orthogonal regularization. For (1), we vary the corresponding weights in Equation C.1 and Equation C.2 in the interval [0, 0.1] and display the mean classification accuracy for the most challenging classification task, MNIST-M, in the form of a heatmap. In
Figures 6-8, we see that the classification accuracy remains on an overall similar level which indicates that the DG layer is not very sensitive to the hyper-parameter change for MNIST-M as the test domain. Nevertheless, we observe that ablating the regularization terms by setting the corresponding weights to zero decreases the classification results and the peaks in performance occur when the regularization is included during training of the DG layer.
Applying the DG layer comes with additional overhead, especially the regularization that ensures the orthogonality of the elementary domain bases. This additional effort raises a question whether ensuring the theoretical assumptions outweigh the much higher computational effort. Thus, in a second step, we analyze how the orthogonal regularization affects the orthogonality of the elementary domain bases (i.e., spectral restricted isometry property (SRIP) value) and the loss function (i.e., categorical cross-entropy).
In Figure 10, we depict the mean and standard deviation of the SRIP value and loss over five runs for 40 epochs. The SRIP value can be tracked during training with the DG layer’s callback functionalities. First, we observe that the elementary domains are almost orthogonal when initialized. Training the layer leads in the first epochs to a decrease in orthogonality. This initial decrease happens because
the cross-entropy has a stronger influence on the optimization than the regularization in the first epochs. After five epochs, the cross-entropy decreases to a threshold at which the regularization becomes more effective and the orthogonality of the elementary domain bases increases again. In Figure 10, we also observe that ablating the orthogonal regularization, while leading to better orthogonality of the domains, does not significantly affect the overall cross-entropy during training.
Finally, we project the output of the FE trained with a dense layer (ERM) and with the DG layer by t-SNE (t-distributed stochastic neighbor embedding) in Figure 11. The GDU-trained FE yields more concentrated and bounded clusters in comparison to the one trained by ERM. Hence, we observe a positive effect on the representation learned by the FE.
C.2.2 INTERPRETATION OF THE ELEMENTARY DOMAINS
We analyze the learned elementary domains in the digits experiment based on two visualizations, and choose the maximum mean discrepancy (MMD) as the similarity measure and MNIST-M as the test domain. The first visualization depicts the MMD between the datasets (i.e., MNIST, MNIST-M, SVHN, USPS, and Synthetic Digits (SYN)) and the learned elementary domains (i.e., V1 − V5) as a heatmap (see Figure 12 (left)). The heatmap indicates that the source and test domains are close to one another in terms of the MMD. Hence, we expect that their closeness reflects in the learning of the elementary domains. In other words, we expect that each elementary domains contributes similarly to the source and test domains (i.e., the coefficients β are similar for each of these domains). In Section 3.1, we derive the coefficients by applying a kernel softmax function to the negative MMD distances. Since the MMD distances between the source / test domains and the elementary domains are similar, the coefficients will be similar too. We conclude that the learned elementary domains represent the same distributional characteristics that existed among the source and test domains.
In the second visualisation, we show the t-SNE (t-distributed stochastic neighbor embedding) of the feature extractor output for each source and test domain alongside the elementary domains in Figure 12 (right). First, we observe that the learned elementary domain bases form distinctive clusters. We
see these clusters as a validation of our hypothesis that each GDU learns to mimic samples generated from a corresponding elementary distribution as pointed out in Section 2.2. However, we can not answer whether and where these elementary distributions occur in the real world. Moreover, these elementary distributions yet lack interpretability.
In summary, the MMD heatmap and t-SNE embeddings of the learned elementary and source domains on Figure 12 indicate that the GDUs learn to represent distributional structures in the dataset.
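The MMD heatmap in Figure 12 can be reproduced by applying an MMD estimator pairwise to the embedded datasets and the learned bases V_1 to V_5. The following self-contained NumPy sketch uses a Gaussian kernel as an illustrative assumption.

```python
import numpy as np

def _gauss(X, Y, sigma):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd(X, Y, sigma=1.0):
    """MMD between the empirical KMEs of two sample sets."""
    val = (_gauss(X, X, sigma).mean() + _gauss(Y, Y, sigma).mean()
           - 2 * _gauss(X, Y, sigma).mean())
    return np.sqrt(max(val, 0.0))

def mmd_heatmap(sample_sets, sigma=1.0):
    """Pairwise MMD between embedded sample sets (source/test datasets and bases V_j)."""
    n = len(sample_sets)
    heat = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            heat[a, b] = heat[b, a] = mmd(sample_sets[a], sample_sets[b], sigma)
    return heat

# e.g. sample_sets = [emb_mnist, emb_mnistm, emb_svhn, emb_syn, emb_usps, V1, ..., V5]
```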
C.3 ECG EXPERIMENT
We adopted the task of multi-label binary classification of 12-lead electrocardiogram (ECG) signals combined from 6 different sources, introduced in the PhysioNet/Computing in Cardiology Challenge 2020⁵ (Perez Alday et al., 2021; Goldberger et al., 2000; Perez Alday et al., 2020). Each ECG recording is annotated with 24 binary labels indicating whether or not a certain cardiac abnormality is present. The data is aggregated from 6 different databases and contains 43,101 recordings sampled with various sampling frequencies, numbers of subjects, and lengths. Table 10 summarizes the most important details about the data sources for this experiment.

5 https://physionetchallenges.org/2020/ [last accessed on 2021-03-10, available under Creative Commons Attribution 4.0 International Public License].
According to the original challenge score, we measure the performance in terms of the generalized Intersection-over-Union (IoU) score where partial credit is assigned to misdiagnoses that result in similar treatments or outcomes. The score is defined as
score := (y^T · W · ŷ) / |y ∪ ŷ|,   (C.3)

where y, ŷ ∈ {0, 1}^24 represent the actual and predicted labels and W is the partial credit-assignment matrix provided as part of the challenge description. Note that in the case of an identity matrix W, the score is exactly the Intersection-over-Union (IoU) score. The score is then adjusted for a solution y_majority, which always predicts the normal/majority class, and is moreover normalized
for the perfect solution y. Therefore, the final score can take negative values, has a best possible value of 1, and is formalized as

adjusted score := (score(y, ŷ) − score(y, y_majority)) / (score(y, y) − score(y, y_majority)).   (C.4)

[Figure 12: MMD heatmaps (left) and t-SNE embeddings (right) of the source/test domains and the learned elementary domain bases V_1 to V_5 for the test domain MNIST-M, shown for the MMD, CS, and projection variants.]
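For illustration, a minimal NumPy sketch of Equations (C.3) and (C.4) for a single recording follows. The weight matrix W is the challenge-provided 24×24 partial-credit matrix and is assumed to be loaded separately; the all-zero stand-in for the majority-class prediction is a hypothetical simplification, since the challenge uses the normal-rhythm label vector.

```python
import numpy as np

def challenge_score(y_true, y_pred, W):
    """Generalized IoU-style score of Eq. (C.3) for binary label vectors in {0,1}^24."""
    union = np.logical_or(y_true, y_pred).sum()
    return float(y_true @ W @ y_pred) / max(union, 1)

def adjusted_score(y_true, y_pred, W):
    """Normalized score of Eq. (C.4): 0 for the majority-class solution, 1 for perfect."""
    y_majority = np.zeros_like(y_true)   # placeholder for the normal/majority-class solution
    raw = challenge_score(y_true, y_pred, W)
    worst = challenge_score(y_true, y_majority, W)
    best = challenge_score(y_true, y_true, W)
    return (raw - worst) / (best - worst)
```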
As a pre-processing step, we down-sampled all the signals to 125 Hz and applied Z-score, random amplification and random stretching according to Vicar et al. (2020). For that we partially adopted the code provided by the authors6. Additionally, we cropped each signal to its first 15,000 points if the signal was too long (mostly applied to INCART database). Each dataset was randomly split into train and validation parts with 3:1 ratio. During each experiment, we used the train splits of 5 databases for training and utilized the validation splits of the training databases for early stopping. The hold-out 6-th database was used for inference and testing only.
Table 11 describes the architecture of FE used for the task. Since the provided ECG recordings have different lengths, we used TensorFlow padded batching, which is padding all the recordings in a batch to the length of the longest sequence in the batch. Therefore, input from different batches can have different lengths so the spatial dimensions of the 1D-Convolutional layers are not predefined and are presented as *.
[Table 11: architecture of the feature extractor (layer types and output shapes).]
We used the Adam optimizer to optimize a weighted binary cross-entropy loss defined as −(w_pos · y · log ŷ + (1 − y) · log(1 − ŷ)). Positive weights w_pos are defined per class based on the training-split data, inversely proportional to the frequency of positive labels for each class. The learning rate was initially set to 0.001 and reduced during training by a factor of 0.2 if the training loss did not improve for 10 epochs. We also applied early stopping and restored the weights of the best model according to the validation accuracy after training ended. Since the input samples in this experiment are larger than in the previous one, we decreased the batch size to 64. Each ECG experiment was performed on a single GPU (Nvidia GTX 1080 Ti). The parameters relevant for the DG layer are summarized in Table 12. We have to emphasize that we did not perform extensive hyperparameter tuning, since our goal was to show the feasibility of the I.E.D. assumption and GDUs while keeping the experiments reproducible.
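A short NumPy sketch of this weighted binary cross-entropy and the per-class positive weights (computed from the training split) is given below; names are illustrative, and the exact weighting constant is only specified in the text as "inversely proportional" to the positive-label frequency.

```python
import numpy as np

def positive_weights(y_train):
    """Per-class weights, inversely proportional to the frequency of positive labels."""
    pos_freq = y_train.mean(axis=0).clip(min=1e-6)   # fraction of positives per class
    return 1.0 / pos_freq

def weighted_bce(y_true, y_prob, w_pos, eps=1e-7):
    """-(w_pos * y * log(p) + (1 - y) * log(1 - p)), averaged over classes and samples."""
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    loss = -(w_pos * y_true * np.log(y_prob) + (1.0 - y_true) * np.log(1.0 - y_prob))
    return loss.mean()
```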
6https://github.com/tomasvicar/BUTTeam [last accessed on 2022-05-17, available under BSD 2-Clause License].
C.4 WILDS BENCHMARKING EXPERIMENTS
For comparison of our approach and benchmarking, we followed the standard procedure of WILDS experiments, described in Koh et al. (2021). As a technical note, all WILDS experiments have been implemented in Pytorch (version >= 1.7.0) based on the specifications made in Koh et al. (2021) and their code published on https://github.com/p-lambda/wilds [last accessed on 2022-05-17, available under MIT License]. The results for the benchmarks were retrieved from the official leaderboard https://wilds.stanford.edu/leaderboard/ [last accessed on 2022-09-26].
Camelyon17 In medical applications, the goal is to apply models trained on a comparatively small set of hospitals to a larger number of hospitals. For this application, we study images of tissue slides under a microscope to determine whether a patient has cancer or not. Shifts in patient populations, slide staining, and image acquisition can impede model accuracy in previously unseen hospitals. Camelyon17 comprises images of tissue patches from five different hospitals. While the first three hospitals are the source domains (302,436 examples), the forth and fifth are the validation (34,904 examples) and test domain (85,054 examples), respectively.
We deviate from the specifications made in Koh et al. (2021) in terms of the FE. We use the FE from Feng et al. (2020); Peng et al. (2019), since we observed a higher mean accuracy and faster training than with the DenseNet-121 FE originally proposed by Koh et al. (2021) (Huang et al., 2017). We trained the FE from scratch. Both ERM and the DG layer were trained over 250 epochs with early stopping and a learning rate of 0.001, which is reduced by a factor of 0.2 if the cross-entropy loss has not improved after 10 epochs. All results were aggregated over ten runs.
FMoW Analyzing satellite images with machine learning (ML) models may open up novel possibilities for tackling global sustainability and economic challenges, such as population density mapping and deforestation tracking. However, satellite imagery changes over time due to human activity (e.g., infrastructure development), and the extent of change differs between regions. The Functional Map of the World (FMoW) dataset consists of satellite images from different continents and years: training (76,863 images; 2002–2013), validation (19,915 images; 2013–2016), and test (22,108 images; 2016–2017). The objective is to determine one of 62 building or land-use types (e.g., shopping mall).
As instructed in Koh et al. (2021), we used a DenseNet-121 pre-trained on ImageNet without L2-regularization. For the optimization, we use the Adam optimizer with a learning rate of 1e-4, which is decayed by a factor of 0.96 per epoch. The models were trained for 50 epochs with early stopping and a batch size of 64. Additionally, we report the worst-region accuracy, a metric specific to FMoW, which is the worst accuracy across the following regions: Asia, Europe, Africa, America, and Oceania (see Koh et al. (2021) for details). Again, we report the results over three runs.
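As a rough sketch (not the WILDS reference code), the optimizer and schedule described above could be set up as follows; the model definition and training loop are placeholders.

```python
import torch
import torchvision

# Placeholder FMoW setup: DenseNet-121 pre-trained on ImageNet, 62 target classes.
model = torchvision.models.densenet121(pretrained=True)
model.classifier = torch.nn.Linear(model.classifier.in_features, 62)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)              # no L2 regularization
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)

for epoch in range(50):
    # ... one pass over the FMoW training loader (batch size 64) would go here ...
    scheduler.step()                                                   # decay the LR once per epoch
```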
Amazon Recent research shows that consumer-facing machine learning applications exhibit large performance disparities across different sets of users. To study these performance disparities, WILDS (Koh et al., 2021) leverages a variant of the Amazon Review dataset. The Amazon-WILDS dataset is composed of data from 3,920 domains (number of reviewers), and the task is multi-class sentiment classification, where the model receives a review text and has to predict the rating from one to five.
To split this dataset, disjoint sets of reviewers are used for training, validation, and test: training (245,502 reviews from 1,252 reviewers), validation (100,050 reviews from 1,334 reviewers), and test (100,050 reviews from 1,334 reviewers).
For the experiments and baseline models

1. What is the main contribution of the paper regarding domain generalization?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical motivation and empirical evaluation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What concerns does the reviewer have regarding the theoretical soundness of the I.E.D. assumption and its impact on out-of-distribution generalization?
5. What suggestions does the reviewer provide for improving the manuscript, including experiments to verify the effectiveness of the proposed approach and comparisons with closely related work?

Summary Of The Paper
This contribution tackles the domain generalization problem by assuming that every real-world distribution is composed of a mixture of elementary distributions, which remain invariant across different domains. This assumption was named I.E.D. (Invariant Elementary Distribution). The authors presented a lemma to support the theoretical soundness of making such an assumption and then introduced a practical way to leverage the I.E.D. assumption when training neural networks to enable generalization on unseen domains containing the same elementary distributions. The proposed approach roughly consists of an ensemble of predictors weighted by the similarity between an instance and each such elementary distribution. The main contribution of this work lies in computing such similarities with the introduced Gated Domain Unit (GDU), a module to be employed along with neural networks, responsible for encoding each elementary distribution into an embedding that can be trained via backpropagation with the parameters of the ensemble. Empirical evaluation of the proposed approach was carried out on multiple benchmarks comprising, for example, computer vision tasks and time-series classification. Results showed the proposed approach outperforms the selected baselines with respect to accuracy on unseen domains at training time.
Strengths And Weaknesses
Strengths:
The paper tackles a relevant and open problem for the machine learning community;
The proposed approach is theoretically grounded and its algorithmic instantiation seems practical. Also, it does not rely on domain labels;
Empirical evaluation was extensive in the sense that multiple datasets and tasks were taken into account;
Results show that the introduced method presents advantages in comparison to the selected baselines in terms of performance on unseen domains at training time.
Weaknesses:
Lack of soundness in the theoretical motivation, more specifically in Lemma 1 (see the following section of the review for more details);
The manuscript contains multiple claims and statements that are not well-supported / clear (see the following section of the review for more details);
It is not clear, from both theoretical and empirical perspectives, what is the impact on out-of-distribution generalization of selecting a much higher / lower number of elementary distributions;
Experimental evaluation is lacking in several aspects:
In the main paper, the proposed approach is only evaluated in terms of performance on unseen domains. This type of evaluation does not explain the actual source of the observed improvements;
The authors did not take into account closely related approaches as baselines in the empirical evaluation.
Clarity, Quality, Novelty And Reproducibility
I found the idea of introducing the I.E.D. assumption (i.e., all domains are a mixture of the same elementary distributions) novel and potentially promising. The algorithmic contribution of leveraging the I.E.D. assumption when training deep neural networks for domain generalization problems is interesting and has a solid motivation. However, as I mentioned in the “Strengths and Weaknesses“ section, I have multiple concerns with the current version of the manuscript, especially regarding the soundness of the theoretical motivation for the I.E.D. assumption and the empirical evaluation of the proposed approach. In the following, I detail my concerns and provide suggestions for the authors:
It is not clear to me why Lemma 1 would show the theoretical appeal of the I.E.D. assumption. As far as I understood, Lemma 1 only shows that the Bayes predictor f^* will be Pareto optimal in case the I.E.D. assumption holds, which basically means that f^* can, for example, be such that only one of the risks is minimized (i.e. f^* corresponds to a solution in one of the extremes of the Pareto front). I find this actually not appealing from the perspective of domain generalization, since solutions in the extremes of the Pareto front could be enforcing the predictor to yield unfair predictions in cases where, for example, each elementary unit corresponds to a subgroup.
A standard assumption in domain generalization is that of covariate shift [1] (in summary, conditional labeling distribution is the same across all domains). However, it is not clear whether this assumption is required for the use of GDUs to make sense.
I disagree with the claim that obtaining domain information is challenging, especially in the practical cases considered in this manuscript where, for most of the datasets taken into account, domain labels come “for free” in the data collection process (i.e. different domains correspond to data collected from different hospitals).
The manuscript focused too much on comparing the introduced approach with other methods rather than deeply understanding the role of each introduced component. Also, even though the experiments may show that GDUs are helpful for improving out-of-distribution generalization, they don't show whether this is happening due to what is claimed in the manuscript (i.e. the I.E.D. assumption indeed holds in practice and GDUs are in fact capable of learning to model the elementary distributions). Therefore, experiments that verify whether the proposed improvements are responsible for the reported increase in performance should be included in the main paper.
The manuscript lacks discussions about how not satisfying the I.E.D. assumption would affect the predictions of models trained with their proposed approach, as well as to which (unseen) distributions it is possible to expect generalization given that the I.E.D. assumption holds. One way to address that could be to include results that shed light on how much the performance of a trained ensemble containing GDUs would decrease in case a test domain presents no common elementary distribution with respect to the training domains.
It is not clear from the experiments (including the ones reported in the Appendix) what the impact of the choice of the number of elementary units is. Since the authors do not provide any insight from theory, i.e. there are no results that tie out-of-distribution generalization to learning the “correct“ number of elementary distributions, at least experiments studying this factor should be provided.
The domain generalization setting requires an ability to generalize in- and out-of-distribution. How well do the models reported perform in-distribution?
Selected baselines are not the approaches more closely related to the proposed algorithm. Why exactly were those baselines selected? Why haven’t the authors compared GDUs with the most related work to their approach [2, 3] which was cited in the Related Work section?
Clarity: overall, I found the manuscript well-written, but the following points need to be addressed in order to improve its clarity:
The manuscript contains too many acronyms, which should be avoided as they make it difficult to understand some sentences in the text.
Some sentences and terms across the manuscript are unclear:
Page 1, paragraph 2: it is not clear what "smaller unit" means in the context of this work. As far as I understood, a "unit" would be a distribution, but what is a "small" distribution? Please clarify.
Page 2, Section 2.1: “The advantage is that we can find an invariant subspace at a more elementary level”. What exactly does “elementary level” mean in this sentence?
Page 2, paragraph 2: “The question arises if and when elementary domains evolve.” Please clarify what is the meaning of “evolve” here.
[1] David, Shai Ben, et al. "Impossibility theorems for domain adaptation", 2010.
[2] Monteiro, Joao, et al. "Domain Conditional Predictors for Domain Adaptation", 2021.
[3] Piratla, Vihari, et al "Efficient domain generalization via common-specific low-rank decomposition", 2020. |
ICLR

Title
Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models
Abstract
We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where both hyperparameters need to be estimated jointly from data. This problem has applications in many domains of imaging, including biomagnetic inverse problems. Crucial factors influencing the accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of noise covariance matrices with full structure. Here, we consider the reconstruction of brain activity from electroencephalography (EEG). This inverse problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from classical sparse Bayesian learning (SBL) models where across-sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using Riemannian geometry, we derive an efficient algorithm for updating both source and noise covariance along the manifold of positive definite matrices. Using the majorization-minimization framework, we demonstrate that our algorithm has guaranteed and fast convergence. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured.
1 INTRODUCTION
Having precise knowledge of the noise distribution is a fundamental requirement for obtaining accurate solutions in many regression problems (Bungert et al., 2020). In many applications, however, it is impossible to separately estimate this noise distribution, as distinct "noise-only" (baseline) measurements are not feasible. An alternative, therefore, is to design estimators that jointly optimize over the regression coefficients as well as over parameters of the noise distribution. This has been pursued both in (penalized) maximum-likelihood settings (here referred to as Type-I approaches) (Petersen & Jung, 2020; Bertrand et al., 2019; Massias et al., 2018) as well as in hierarchical Bayesian settings (referred to as Type-II) (Wipf & Rao, 2007; Zhang & Rao, 2011; Hashemi et al., 2020; Cai et al., 2020a). Most contributions in the literature are, however, limited to the estimation of only a diagonal noise covariance (i.e., independent between different measurements) (Daye et al., 2012; Van de Geer et al., 2013; Dalalyan et al., 2013; Lederer & Muller, 2015). Considering a diagonal noise covariance is a limiting assumption in practice, as the noise interference in many realistic scenarios is highly correlated across measurements and thus has non-trivial off-diagonal elements.
This paper develops an efficient optimization algorithm for jointly estimating the posterior of regression parameters as well as the noise distribution. More specifically, we consider linear regression with Gaussian scale mixture priors on the parameters and a full-structure multivariate Gaussian noise. We cast the problem as a hierarchical Bayesian (type-II maximum-likelihood) regression problem, in which the variance hyperparameters and the noise covariance matrix are optimized by maximizing the Bayesian evidence of the model. Using Riemannian geometry, we derive an efficient algorithm for jointly estimating the source and noise covariances along the manifold of positive definite (P.D.) matrices.
To highlight the benefits of our proposed method in practical scenarios, we consider the problem of electromagnetic brain source imaging (BSI). The goal of BSI is to reconstruct brain activity
from magneto- or electroencephalography (M/EEG), which can be formulated as a sparse Bayesian learning (SBL) problem. Specifically, it can be cast as a linear Bayesian regression model with independent Gaussian scale mixture priors on the parameters and noise. As a departure from the classical SBL approaches, here we specifically consider Gaussian noise with full covariance structure. Prominent sources of correlated noise in this context are, for example, eye blinks, heartbeats, muscular artifacts and line noise. Other realistic examples for the need for such full-structure noise can be found in the areas of array processing (Li & Nehorai, 2010) or direction of arrival (DOA) estimation (Chen et al., 2008). Algorithms that can accurately estimate noise with full covariance structure are expected to achieve more accurate regression models and predictions in this setting.
2 TYPE-II BAYESIAN REGRESSION
We consider the linear model Y = LX + E, in which a forward or design matrix, L ∈ RM×N , is mapped to the measurements, Y, by a set of coefficients or source components, X. Depending on the setting, the problem of estimating X given L and Y is called an inverse problem in physics, a multitask regression problem in machine learning, or a multiple measurement vector (MMV) recovery problem in signal processing (Cotter et al., 2005). Adopting a signal processing terminology, the measurement matrix Y ∈ RM×T captures the activity of M sensors at T time instants, y(t) ∈ RM×1, t = 1, . . . , T , while the source matrix, X ∈ RN×T , consists of the unknown activity of N sources at the same time instants, x(t) ∈ RN×1, t = 1, . . . , T . The matrix E = [e(1), . . . , e(T )] ∈ RM×T represents T time instances of zero-mean Gaussian noise with full covariance Λ, e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T , which is assumed to be independent of the source activations. In this paper, we focus on M/EEG based brain source imaging (BSI) but the proposed algorithm can be used in general regression settings, in particular for sparse signal recovery (Candès et al., 2006; Donoho, 2006) with a wide range of applications (Malioutov et al., 2005). The goal of BSI is to infer the underlying brain activity X from the EEG/MEG measurement Y given a known forward operator, called lead field matrix L. As the number of sensors is typically much smaller than the number of locations of potential brain sources, this inverse problem is highly ill-posed. This problem is addressed by imposing prior distributions on the model parameters and adopting a Bayesian treatment. This can be performed either through Maximum-a-Posteriori (MAP) estimation (Type-I Bayesian learning) (Pascual-Marqui et al., 1994; Gorodnitsky et al., 1995; Haufe et al., 2008; Gramfort et al., 2012; Castaño-Candamil et al., 2015) or, when the model has unknown hyperparameters, through Type-II Maximum-Likelihood estimation (Type-II Bayesian learning) (Mika et al., 2000; Tipping, 2001; Wipf & Nagarajan, 2009; Seeger & Wipf, 2010; Wu et al., 2016).
In this paper, we focus on Type-II Bayesian learning, which assumes a family of prior distributions p(X|Θ) parameterized by a set of hyperparameters Θ. These hyper-parameters can be learned from the data along with the model parameters using a hierarchical Bayesian approach (Tipping, 2001; Wipf & Rao, 2004) through the maximum-likelihood principle:
\Theta_{II} := \arg\max_{\Theta} p(Y|\Theta) = \arg\max_{\Theta} \int p(Y|X,\Theta)\, p(X|\Theta)\, dX \,. \qquad (1)
Here we assume a zero-mean Gaussian prior with full covariance Γ for the underlying source distribution, x(t) ∈ RN×1 ∼ N (0,Γ), t = 1, . . . , T . Just as most other approaches, Type-II Bayesian learning makes the simplifying assumption of statistical independence between time samples. This leads to the following expression for the distribution of the sources and measurements:
p(X|\Gamma) = \prod_{t=1}^{T} p(x(t)|\Gamma) = \prod_{t=1}^{T} \mathcal{N}(0,\Gamma) \qquad (2)

p(Y|X) = \prod_{t=1}^{T} p(y(t)|x(t)) = \prod_{t=1}^{T} \mathcal{N}(L x(t), \Lambda) \,. \qquad (3)
The parameters of the Type-II model, Θ, are the unknown source and noise covariances, i.e., Θ = {Γ,Λ}. The unknown parameters Γ and Λ are optimized based on the current estimates of the source and noise covariances in an alternating iterative process. Given initial estimates of Γ and Λ,
the posterior distribution of the sources is a Gaussian of the form (Sekihara & Nagarajan, 2015)
p(X|Y,\Gamma) = \prod_{t=1}^{T} \mathcal{N}(\mu_x(t), \Sigma_x)\,, \quad \text{where} \qquad (4)
\mu_x(t) = \Gamma L^\top \Sigma_y^{-1} y(t) \qquad (5)
\Sigma_x = \Gamma - \Gamma L^\top \Sigma_y^{-1} L \Gamma \qquad (6)
\Sigma_y = \Lambda + L \Gamma L^\top \,. \qquad (7)

The estimated posterior parameters µx(t) and Σx are then in turn used to update Γ and Λ as the minimizers of the negative log of the marginal likelihood p(Y|Γ,Λ), which is given by (Wipf et al., 2010):

\mathcal{L}_{II}(\Gamma,\Lambda) = -\log p(Y|\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^\top \Sigma_y^{-1} y(t)
= \log|\Lambda + L\Gamma L^\top| + \frac{1}{T}\sum_{t=1}^{T} y(t)^\top \left(\Lambda + L\Gamma L^\top\right)^{-1} y(t) \,, \qquad (8)
where | · | denotes the determinant of a matrix. This process is repeated until convergence. Given the final solution of the hyperparameters ΘII = {ΓII, ΛII}, the posterior source distribution is obtained by plugging these estimates into equations 4 to 7.
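To make these quantities concrete, a small NumPy sketch of equations 4–8 for given (Γ, Λ); this is an illustration, not the authors' implementation, and the toy lead field is arbitrary.

```python
import numpy as np

def type2_posterior_and_nll(Y, L, Gamma, Lambda):
    """Posterior moments (eqs. 4-7) and Type-II negative log-likelihood (eq. 8).

    Y: (M, T) measurements, L: (M, N) lead field,
    Gamma: (N, N) source covariance, Lambda: (M, M) noise covariance.
    """
    T = Y.shape[1]
    Sigma_y = Lambda + L @ Gamma @ L.T                        # eq. 7
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    mu_x = Gamma @ L.T @ Sigma_y_inv @ Y                      # eq. 5, all time points at once
    Sigma_x = Gamma - Gamma @ L.T @ Sigma_y_inv @ L @ Gamma   # eq. 6
    _, logdet = np.linalg.slogdet(Sigma_y)
    nll = logdet + np.trace(Y.T @ Sigma_y_inv @ Y) / T        # eq. 8
    return mu_x, Sigma_x, nll

# Toy example with arbitrary covariances (not a realistic lead field)
rng = np.random.default_rng(0)
M, N, T = 10, 20, 50
L_toy = rng.standard_normal((M, N))
Y_toy = rng.standard_normal((M, T))
mu, Sx, nll = type2_posterior_and_nll(Y_toy, L_toy, np.eye(N), 0.1 * np.eye(M))
```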
3 PROPOSED METHOD: FULL-STRUCTURE NOISE (FUN) LEARNING
Here we propose a novel and efficient algorithm, full-structure noise (FUN) learning, which is able to learn the full covariance structure of the noise jointly within the Bayesian Type-II regression framework. We first formulate the algorithm in its most general form, in which both the noise distribution and the prior have full covariance structure. Later, we make the simplifying assumption of independent source priors, leading to the pruning of the majority of sources. This effect, which has also been referred to as automatic relevance determination (ARD) or sparse Bayesian learning (SBL), is beneficial in our application of interest, namely the reconstruction of parsimonious sets of brain sources underlying experimental EEG measurements.
Note that the Type-II cost function in equation 8 is non-convex and thus non-trivial to optimize. A number of iterative algorithms such as majorization-minimization (MM) (Sun et al., 2017) have been proposed to address this challenge. Following the MM scheme, we first construct convex surrogate functions that majorize LII(Γ,Λ) in each iteration of the optimization algorithm. Then, we show the minimization equivalence between the constructed majorizing functions and equation 8. This result is presented in the following theorem: Theorem 1. Let Λ^k and Σ_y^k be fixed values obtained in the k-th iteration of the optimization algorithm minimizing LII(Γ,Λ). Then, optimizing the non-convex Type-II ML cost function in equation 8, LII(Γ,Λ), with respect to Γ is equivalent to optimizing the following convex function, which majorizes equation 8:
\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma, \Lambda^k) = \mathrm{tr}\big((C_S^k)^{-1}\Gamma\big) + \mathrm{tr}\big(M_S^k \Gamma^{-1}\big)\,, \qquad (9)

where C_S^k and M_S^k are defined as:

C_S^k := \big(L^\top (\Sigma_y^k)^{-1} L\big)^{-1}, \qquad M_S^k := \frac{1}{T}\sum_{t=1}^{T} x^k(t)\, x^k(t)^\top \,. \qquad (10)

Similarly, optimizing LII(Γ,Λ) with respect to Λ is equivalent to optimizing the following convex majorizing function:

\mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k, \Lambda) = \mathrm{tr}\big((C_N^k)^{-1}\Lambda\big) + \mathrm{tr}\big(M_N^k \Lambda^{-1}\big)\,, \qquad (11)

where C_N^k and M_N^k are defined as:

C_N^k := \Sigma_y^k, \qquad M_N^k := \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - L x^k(t)\big)\big(y(t) - L x^k(t)\big)^\top \,. \qquad (12)
Proof. The proof is presented in Appendix A.
We continue by considering the optimization of the cost functions L^conv_source(Γ, Λ^k) and L^conv_noise(Γ^k, Λ) with respect to Γ and Λ, respectively. Note that in the case of source covariances with full structure, the solution of L^conv_source(Γ, Λ^k) with respect to Γ lies on the (N^2 − N)/2 Riemannian manifold of positive definite (P.D.) matrices. This consideration enables us to invoke efficient methods from Riemannian geometry (see Petersen et al., 2006; Berger, 2012; Jost & Jost, 2008), which ensure that the solution at each step of the optimization is contained within the lower-dimensional solution space. Specifically, in order to optimize for the source covariance, the algorithm calculates the geometric mean between the previously obtained statistical model source covariance, C_S^k, and the source-space sample covariance matrix, M_S^k, in each iteration. Analogously, to update the noise covariance estimate, the algorithm calculates the geometric mean between the model noise covariance, C_N^k, and the empirical sensor-space residual covariance, M_N^k. The update rules obtained from this algorithm are presented in the following theorem:
Theorem 2. The cost functions Lconvsource(Γ,Λk) and Lconvnoise(Γk,Λ) are both strictly geodesically convex with respect to the P.D. manifold, and their optimal solution with respect to Γ and Λ, respectively, can be attained according to the two following update rules:
\Gamma^{k+1} \leftarrow (C_S^k)^{\frac{1}{2}} \big((C_S^k)^{-\frac{1}{2}} M_S^k (C_S^k)^{-\frac{1}{2}}\big)^{\frac{1}{2}} (C_S^k)^{\frac{1}{2}} \,, \qquad (13)

\Lambda^{k+1} \leftarrow (C_N^k)^{\frac{1}{2}} \big((C_N^k)^{-\frac{1}{2}} M_N^k (C_N^k)^{-\frac{1}{2}}\big)^{\frac{1}{2}} (C_N^k)^{\frac{1}{2}} \,. \qquad (14)
Proof. A detailed proof can be found in Appendix B.
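A minimal NumPy/SciPy sketch of the geometric-mean updates in equations 13 and 14; matrix square roots are computed here with scipy.linalg.sqrtm, which is only one possible implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def geometric_mean_update(C, M):
    """Geometric mean C^{1/2} (C^{-1/2} M C^{-1/2})^{1/2} C^{1/2} (eqs. 13/14),
    i.e. the P.D. solution of the Riccati equation Gamma C^{-1} Gamma = M."""
    C_half = np.real(sqrtm(C))
    C_half_inv = np.linalg.inv(C_half)
    inner = np.real(sqrtm(C_half_inv @ M @ C_half_inv))
    return C_half @ inner @ C_half

# Sanity check on random P.D. matrices
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); C = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); M = B @ B.T + 5 * np.eye(5)
G = geometric_mean_update(C, M)
assert np.allclose(G @ np.linalg.inv(C) @ G, M, atol=1e-6)
```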
Convergence of the resulting algorithm is shown in the following theorem.
Theorem 3. Optimizing the non-convex Type-II ML cost function in equation 8, LII(Γ,Λ), with alternating update rules for Γ and Λ in equation 13 and equation 14 leads to an MM algorithm with guaranteed convergence.
Proof. A detailed proof can be found in Appendix C.
While Theorems 1–3 reflect a general joint learning algorithm, the assumption of sources with full covariance structure is often relaxed in practice. The next section will shed light on this important simplification by making a formal connection to SBL algorithms.
3.1 SPARSE BAYESIAN LEARNING WITH FULL NOISE MODELING
In brain source imaging, the assumption of full source covariance is often relaxed. Even if, technically, most parts of the brain are active at all times, and the concurrent activations of different brain regions can never be assumed to be fully uncorrelated, there are many experimental settings in which it is reasonable to assume only a small set of independent brain sources. Such sparse solutions are physiologically plausible in task-based analyses, where only a fraction of the brain’s macroscopic structures is expected to be consistently engaged. A common strategy in this case is to model independent sources through a diagonal covariance matrix. In the Type-II Bayesian learning framework, this simplification interestingly leads to sparsity of the resulting source distributions, as, at the optimum, many of the estimated source variances are zero. This mechanism is known as sparse Bayesian learning and is closely related to the more general concept of automatic relevance determination. Here, we adopt the SBL assumption for the sources, leading to Γ-updates previously described in the BSI literature under the name Champagne (Wipf & Nagarajan, 2009). As a novelty and main focus of this paper, we here equip the SBL framework with the capability to jointly learn full noise covariances through the geometric mean based update rule in equation 14. In the SBL framework, the N modeled brain sources are assumed to follow independent univariate Gaussian distributions with zero mean and distinct unknown variances γ_n: x_n(t) ∼ N(0, γ_n), n = 1, . . . , N. In the SBL solution, the majority of variances is zero, thus effectively inducing spatial sparsity of the corresponding source activities. For FUN learning, we also impose a diagonal structure on the source covariance matrix, Γ = diag(γ), where γ = [γ_1, . . . , γ_N]^⊤.
Algorithm 1: Full-structure noise (FUN) learning
Input: The lead field matrix L ∈ R^{M×N} and the measurement vectors y(t) ∈ R^{M×1}, t = 1, . . . , T.
Result: The estimated prior source variances [γ_1, . . . , γ_N]^⊤, the noise covariance Λ, and the posterior mean µ_x(t) and covariance Σ_x of the sources.
1: Set a random initial value for Λ as well as γ = [γ_1, . . . , γ_N]^⊤, and construct Γ = diag(γ).
2: Calculate the statistical covariance Σ_y = Λ + LΓL^⊤.
Repeat
3: Calculate the posterior mean as µ_x(t) = ΓL^⊤(Σ_y)^{-1} y(t).
4: Calculate C_S^k and M_S^k based on equation 10, and update γ_n for n = 1, . . . , N based on equation 15.
5: Calculate C_N^k and M_N^k based on equation 12, and update Λ based on equation 14.
Until stopping condition is satisfied
6: Calculate the posterior covariance as Σ_x = Γ − ΓL^⊤(Σ_y)^{-1}LΓ.
By constraining Γ in equation 9 to the set of diagonal matrices, W, we can show that the update rule in equation 13 for the source variances simplifies to the following form:

\gamma_n^{k+1} \leftarrow \sqrt{\frac{\big[M_S^k\big]_{n,n}}{\big[(C_S^k)^{-1}\big]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\big(x_n^k(t)\big)^2}{L_n^\top \big(\Sigma_y^k\big)^{-1} L_n}} \quad \text{for } n = 1, \ldots, N \,, \qquad (15)
where Ln denotes the n-th column of the lead field matrix. Interestingly, equation 15 is identical to the update rule of the Champagne algorithm. A detailed derivation of equation 15 can be found in Appendix D.
Summarizing, the FUN learning approach, just like Champagne and other SBL algorithms, assumes independent Gaussian sources with individual variances (thus, diagonal source covariances), which are updated through equation 15. Departing from the classical SBL setting, which assumes the noise distribution to be known, FUN models noise with full covariance structure, which is updated using equation 14. Algorithm 1 summarizes the resulting update rules.
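Below is a compact NumPy sketch of Algorithm 1, assuming a diagonal Γ and a full-structure Λ; the initialization, stopping threshold, and variable names are illustrative choices and do not correspond to the authors' released code.

```python
import numpy as np
from scipy.linalg import sqrtm

def fun_learning(Y, L, n_iter=200, tol=1e-8):
    """Sketch of Algorithm 1: diagonal source variances (eq. 15), full-structure noise (eq. 14)."""
    M, T = Y.shape
    N = L.shape[1]
    gamma = np.ones(N)                                   # prior source variances
    Lam = np.eye(M)                                      # full noise covariance
    X_old = np.zeros((N, T))
    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        Sigma_y = Lam + L @ Gamma @ L.T
        Sy_inv = np.linalg.inv(Sigma_y)
        X = Gamma @ L.T @ Sy_inv @ Y                     # posterior mean (eq. 5)
        # Source update (eq. 15): Champagne rule for the diagonal of Gamma.
        num = np.mean(X ** 2, axis=1)                    # [M_S^k]_{n,n}
        den = np.einsum('mn,mk,kn->n', L, Sy_inv, L)     # L_n^T Sigma_y^{-1} L_n
        gamma = np.sqrt(num / den)
        # Noise update (eq. 14): geometric mean of C_N = Sigma_y and the residual covariance M_N.
        R = Y - L @ X
        M_N = (R @ R.T) / T
        C_half = np.real(sqrtm(Sigma_y))
        C_half_inv = np.linalg.inv(C_half)
        Lam = C_half @ np.real(sqrtm(C_half_inv @ M_N @ C_half_inv)) @ C_half
        # Stopping rule: relative change of the Frobenius norm of the reconstructed sources.
        if np.linalg.norm(X - X_old) < tol * max(np.linalg.norm(X), 1e-12):
            break
        X_old = X
    Gamma = np.diag(gamma)
    Sy_inv = np.linalg.inv(Lam + L @ Gamma @ L.T)
    mu_x = Gamma @ L.T @ Sy_inv @ Y
    Sigma_x = Gamma - Gamma @ L.T @ Sy_inv @ L @ Gamma   # posterior covariance (eq. 6)
    return gamma, Lam, mu_x, Sigma_x
```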
Note that various recent Type-II noise learning schemes for diagonal noise covariance matrices (Hashemi et al., 2020; Cai et al., 2020a) that are rooted in the concept of SBL can be also derived as special cases of FUN learning assuming diagonal source and noise covariances, i.e., Γ,Λ ∈ W . Specifically imposing diagonal structure on the noise covariance matrix for the FUN algorithm, Λ, results in identical noise variance update rules as derived in Cai et al. (2020a) for heteroscedastic, and in Hashemi et al. (2020) for homoscedastic noise. We explicitly demonstrate this connection in Appendix E. Here, we note that heteroscedasticity refers to the common phenomenon that measurements are contaminated with non-uniform noise levels across channels, while homoscedasticity only accounts for uniform noise levels.
4 NUMERICAL SIMULATIONS AND REAL DATA ANALYSIS
Source, Noise and Forward Model: We simulated a sparse set of N0 = 5 active brain sources that were placed at random positions on the cortex. To simulate the electrical neural activity of these sources, T = 200 identically and independently distributed (i.i.d.) points were sampled from a Gaussian distribution, yielding sparse source activation vectors x(t). The resulting source distribution, represented as X = [x(1), . . . , x(T)], was projected to the EEG sensors through application of the lead field matrix as the forward operator: Y_signal = LX. The lead field matrix, L ∈ R^{58×2004}, was generated using the New York Head model (Huang et al., 2016), taking into account the realistic anatomy and electrical tissue conductivities of an average human head. Further details regarding forward modeling are provided in Appendix F. Gaussian additive noise was randomly sampled from a zero-mean normal distribution with full covariance matrix Λ: e(t) ∈ R^{M×1} ∼ N(0, Λ), t = 1, . . . , T. This setting is further referred to as full-structure noise. Note that we also generated noise with diagonal covariance matrix, referred to as heteroscedastic noise, in order to investigate the effect of model violation on reconstruction performance. The noise matrix E = [e(1), . . . , e(T)] ∈ R^{M×T} was normalized by its Frobenius norm and added to the signal matrix Y_signal as follows: Y = Y_signal + ((1 − α)‖Y_signal‖_F / (α‖E‖_F)) E, where α determines the signal-to-noise ratio (SNR) in sensor space. Precisely, SNR is obtained as follows: SNR = 20 log10(α/(1 − α)). In the subsequently described experiments the following values of α were used: α = {0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.65, 0.7, 0.8}, which correspond to the following SNRs: SNR = {-12, -7.4, -5.4, -3.5, -1.7, 0, 1.7, 3.5, 5.4, 7.4, 12} (dB). MATLAB codes for producing the results in the simulation study are uploaded here.
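In Python (the original simulation code is in MATLAB), the data-generation step above can be sketched as follows; the random lead field and noise covariance are placeholders for the New York Head model and the simulated noise covariances.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T, N0, alpha = 58, 2004, 200, 5, 0.5         # alpha = 0.5 corresponds to 0 dB SNR

L = rng.standard_normal((M, N))                     # stand-in for the New York Head lead field
X = np.zeros((N, T))
active = rng.choice(N, size=N0, replace=False)      # N0 randomly placed active sources
X[active] = rng.standard_normal((N0, T))            # i.i.d. Gaussian time courses
Y_signal = L @ X

A = rng.standard_normal((M, M))
Lam = A @ A.T / M                                   # random full-structure noise covariance
E = rng.multivariate_normal(np.zeros(M), Lam, size=T).T

# Y = Y_signal + ((1 - alpha) * ||Y_signal||_F / (alpha * ||E||_F)) * E
Y = Y_signal + ((1 - alpha) * np.linalg.norm(Y_signal)) / (alpha * np.linalg.norm(E)) * E
snr_db = 20 * np.log10(alpha / (1 - alpha))         # 0 dB for alpha = 0.5
```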
Evaluation Metrics and Simulation Set-up: We applied the full-structure noise learning approach on the synthetic datasets described above to recover the locations and time courses of the active brain sources. In addition to our proposed approach, two further Type-II Bayesian learning schemes, namely Champagne with homo- and heteroscedastic noise learning (Hashemi et al., 2020; Cai et al., 2020a), were also included as benchmarks with respect to source reconstruction performance and noise covariance estimation accuracy. Source reconstruction performance was evaluated according to the earth mover's distance (EMD) (Rubner et al., 2000), the error in the reconstruction of the source time courses, the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlation) matching reconstructed source, and finally the F1-measure score (Chinchor & Sundheim, 1993). A detailed definition of the evaluation metrics is provided in Appendix F. To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂ − Λ||²_F / ||Λ||²_F. Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices. Each simulation was carried out 100 times using different instances of X and E, and the mean and standard error of the mean (SEM) of each performance measure across repetitions were calculated. Convergence of the optimization programs was assumed if the relative change of the Frobenius norm of the reconstructed sources between subsequent iterations was less than 10⁻⁸. A maximum of 1000 iterations was carried out if no convergence was reached beforehand.
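The two noise-covariance metrics can be computed in a few lines; a sketch assuming the estimated and ground-truth covariances are given as NumPy arrays.

```python
import numpy as np

def noise_cov_metrics(Lam_hat, Lam):
    """Pearson correlation between covariance entries (Lambda_sim) and NMSE."""
    lam_sim = np.corrcoef(Lam_hat.ravel(), Lam.ravel())[0, 1]
    nmse = np.linalg.norm(Lam_hat - Lam, 'fro') ** 2 / np.linalg.norm(Lam, 'fro') ** 2
    return lam_sim, nmse
```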
Figure 1 shows two simulated datasets with five active sources in the presence of full-structure noise (upper panel) as well as heteroscedastic noise (lower panel) at 0 dB SNR. Topographic maps depict the locations of the ground-truth active brain sources (first column) along with the source reconstruction result of three noise learning schemes assuming noise with homoscedastic (second column), heteroscedastic (third column), and full (fourth column) structure. For each algorithm, the estimated noise covariance matrix is also plotted above the topographic map. Source reconstruction performance was measured in terms of EMD and time course correlation (Corr), and is summarized in the table next to each panel. Besides, the accuracy of the noise covariance matrix reconstruction was measured in terms of Λsim and NMSE. Results are included in the same table. Figure 1 (upper panel) allows for a direct comparison of the estimated noise covariance matrices obtained from the three different noise learning schemes. It can be seen that FUN learning can better capture the overall structure of ground truth full-structure noise, as evidenced by lower NMSE and similarity errors compared to the heteroscedastic and homoscedastic algorithm variants that are only able to recover a diagonal matrix while enforcing the off-diagonal elements to zero. This behaviour results in higher spatial and temporal accuracy (lower EMD and time course error) for FUN learning compared to competing algorithms assuming diagonal noise covariance. This advantage is also visible in the topographic maps. The lower panel of Figure 1 presents analogous results for the setting where the noise covariance is generated according to a heteroscedastic model. Note that the superior spatial and temporal reconstruction performance of the heteroscedastic noise learning algorithm compared to the full-structure scheme is expected here because the simulated ground truth noise is indeed heteroscedastic. The full-structure noise learning approach, however, provides fairly reasonable performance in terms of EMD, time course correlation (Corr), and Λsim, although it is designed to estimate a full-structure noise covariance matrix. The convergence behaviour of all three noise learning variants is also illustrated in Figure 1. Note that the full-structure noise learning approach eventually reaches lower negative log-likelihood values in both scenarios, namely full-structure and heteroscedastic noise.
Figure 2 shows the EMD, the time course reconstruction error, the EUCL and the F1 measure score incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic
(green) and full-structure (blue) noise covariances for a range of 10 SNR values. The upper panel represents the evaluation metrics for the setting where the noise covariance follows a full-structure model, while the lower panel depicts the same metrics for simulated noise with heteroscedastic diagonal covariance. Concerning the first setting, FUN learning consistently outperforms its homoscedastic and heteroscedastic counterparts according to all evaluation metrics, in particular in low-SNR settings. Consequently, as the SNR decreases, the gap between FUN learning and the two other variants increases. Conversely, heteroscedastic noise learning shows an improvement over FUN learning according to all evaluation metrics when the simulated noise is indeed heteroscedastic. However, note that the magnitude of this improvement is not as large as observed for the setting where the noise covariance is generated according to a full-structure model and is then estimated using the FUN approach.
Analysis of Auditory Evoked Fields (AEF): Figure 3 shows the reconstructed sources of the Auditory Evoked Fields (AEF) versus the number of trials from a single representative subject using the FUN learning algorithm. Further details on this dataset can be found in Appendix G. We tested the reconstruction performance of FUN learning with the number of trials limited to 1, 2, 12, 63 and 120. Each reconstruction was performed 30 times with the specific trials themselves chosen as a random subset of all available trials. As the subplots for different trials demonstrate, the FUN learning algorithm is able to correctly localize bilateral auditory activity to Heschl's gyrus, which is the characteristic location of the primary auditory cortex, from only a few trials or even a single trial.
5 DISCUSSION
This paper focused on sparse regression within the hierarchical Bayesian regression framework and its application in EEG/MEG brain source imaging. To this end, we developed an algorithm, which is, however, suitable for a much wider range of applications. What is more, the same concepts used here for full-structure noise learning could be employed in other contexts where hyperparameters like kernel widths in Gaussian process regression (Wu et al., 2019) or dictionary elements in the dictionary learning problem (Dikmen & Févotte, 2012) are to be inferred. Besides, the FUN learning algorithm may also prove useful for practical scenarios in which model residuals are expected to be correlated, e.g., probabilistic canonical correlation analysis (CCA) (Bach & Jordan, 2005), spectral independent component analysis (ICA) (Ablin et al., 2020), wireless communication (Prasad et al., 2015; Gerstoft et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), robust portfolio optimization in finance (Feng et al., 2016), graph learning (Kumar et al., 2020), thermal field reconstruction (Flinth & Hashemi, 2018), and brain functional imaging (Wei et al., 2020).
Noise learning has also attracted attention in functional magnetic resonance imaging (fMRI) (Cai et al., 2016; Shvartsman et al., 2018; Cai et al., 2019b; 2020b; Wei et al., 2020), where various models like matrix-normal (MN), factor analysis (FA), and Gaussian-process (GP) regression have been proposed. The majority of the noise learning algorithms in the fMRI literature rely on the EM framework, which is quite slow in practice and has convergence guarantees only under certain strong conditions. In contrast to these existing approaches, our proposed framework not only applies to the models considered in these papers, but also benefits from theoretically proven convergence guarantees. To be more specific, we showed in this paper that FUN learning is an instance of the wider class of majorization-minimization (MM) framework, for which provable fast convergence is guaranteed. It is worth emphasizing our contribution within the MM optimization context as well. In many MM implementations, surrogate functions are minimized using an iterative approach. Our proposed algorithm, however, obtains a closed-form solution for the surrogate function in each step, which further advances its efficiency.
In the context of BSI, Engemann & Gramfort (2015) proposed a method for selecting a single regularization parameter based on cross-validation and maximum-likelihood estimation, while Huizenga et al. (2002); De Munck et al. (2002); Bijma et al. (2003); De Munck et al. (2004); Ahn & Jun (2011); Jun et al. (2006) and Plis et al. (2006) assume more complex spatiotemporal noise covariance structures. A common limitation of these works is, however, that the noise level is not estimated as part of the source reconstruction problem on task-related data but from separate noise recordings. Our proposed algorithm substantially differs in this respect, as it learns the noise covariance jointly with the brain source distribution. Note that the idea of joint estimation of brain source activity and noise covariance has been previously proposed for Type-I learning methods in (Massias et al., 2018; Bertrand et al., 2019). In contrast to these Type-I methods, FUN is a Type-II method, which learns the prior source distribution as part of the model fitting. Type-II methods have been reported to yield consistently superior results compared to Type-I methods (Owen et al., 2012; Cai et al., 2019a; 2020a; Hashemi et al., 2020). Our numerical results show that the same holds also for FUN learning, which performs on par or better than existing variants from the Type-II family (including conventional Champagne) in this study. We plan to provide a formal comparison of the performance of noise learning within Type-I and Type-II estimation in our future work.
While being broadly applicable, our approach is also limited by a number of factors. Although Gaussian noise distributions are commonly justified, it would be desirable to also include more robust (e.g., heavy-tailed) non-Gaussian noise distributions in our framework. Another limitation is that the superior performance of the full-structure noise learning technique comes at the expense of higher computational complexity compared to the variants assuming homoscedastic or heteroscedastic structure. Besides, signals in real-world scenarios often lie in a lower-dimensional space compared to the original high-dimensional ambient space due to the particular correlations that inherently exist in the structure of the data. Therefore, imposing physiologically plausible constraints on the noise model, e.g., low-rank or Toeplitz structure, not only provides side information that can be leveraged for the reconstruction but also reduces the computational cost in two ways: a) by reducing the number of parameters and b) by taking advantage of efficient implementations using circular embeddings and the fast Fourier transform (Babu, 2016). Exploring efficient ways to incorporate these structural assumptions within a Riemannian framework is another direction of future work.
6 CONCLUSION
This paper proposes an efficient optimization algorithm for jointly estimating Gaussian regression parameter distributions as well as Gaussian noise distributions with full covariance structure within a hierarchical Bayesian framework. Using the Riemannian geometry of positive definite matrices, we derived an efficient algorithm for jointly estimating source and noise covariances. The benefits of our proposed framework were evaluated within an extensive set of experiments in the context of electromagnetic brain source imaging inverse problem and showed significant improvement upon state-of-the-art techniques in the realistic scenario where the noise has full covariance structure. The performance of our method is assessed through a real data analysis for the auditory evoked field (AEF) dataset.
A PROOF OF THEOREM 1
Proof. We start the proof by recalling equation 8:
\mathcal{L}_{II}(\Gamma,\Lambda) = -\log p(Y|\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^\top \Sigma_y^{-1} y(t) \,. \qquad (16)
The upper bound on the log|Σ_y| term can be directly inferred from the concavity of the log-determinant function and its first-order Taylor expansion around the value from the previous iteration, Σ_y^k, which provides the following inequality (Sun et al., 2017, Example 2):
\log|\Sigma_y| \le \log|\Sigma_y^k| + \mathrm{tr}\big[(\Sigma_y^k)^{-1}(\Sigma_y - \Sigma_y^k)\big]
= \log|\Sigma_y^k| + \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Sigma_y\big] - \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Sigma_y^k\big] \,. \qquad (17)
Note that the first and last terms in equation 17 do not depend on Γ; hence, they can be ignored in the optimization procedure. Now, we decompose Σy into two terms, each of which only contains either the noise or source covariances:
\mathrm{tr}\big[(\Sigma_y^k)^{-1}\Sigma_y\big] = \mathrm{tr}\big[(\Sigma_y^k)^{-1}(\Lambda + L\Gamma L^\top)\big] = \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Lambda\big] + \mathrm{tr}\big[(\Sigma_y^k)^{-1} L\Gamma L^\top\big] \,. \qquad (18)
In the next step, we decompose the second term in equation 8, \frac{1}{T}\sum_{t=1}^{T} y(t)^\top \Sigma_y^{-1} y(t), into two terms, each of which is a function of either only the noise or only the source covariances. To this end, we exploit the following relationship between sensor- and source-space covariances:

\frac{1}{T}\sum_{t=1}^{T} y(t)^\top \Sigma_y^{-1} y(t) = \frac{1}{T}\sum_{t=1}^{T} x^k(t)^\top \Gamma^{-1} x^k(t) + \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - L x^k(t)\big)^\top \Lambda^{-1} \big(y(t) - L x^k(t)\big) \,. \qquad (19)
By combining equation 18 and equation 19, rearranging the terms, and ignoring all terms that do not depend on Γ, we have:
\mathcal{L}_{II}(\Gamma) \le \mathrm{tr}\big[(\Sigma_y^k)^{-1} L\Gamma L^\top\big] + \frac{1}{T}\sum_{t=1}^{T} x^k(t)^\top \Gamma^{-1} x^k(t) + \mathrm{const}
= \mathrm{tr}\big((C_S^k)^{-1}\Gamma\big) + \mathrm{tr}\big(M_S^k\Gamma^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) + \mathrm{const} \,, \qquad (20)

where C_S^k = \big(L^\top(\Sigma_y^k)^{-1}L\big)^{-1} and M_S^k = \frac{1}{T}\sum_{t=1}^{T} x^k(t)\, x^k(t)^\top. Note that constant values in equation 20 do not depend on Γ; hence, they can be ignored in the optimization procedure. This
proves the equivalence of equation 8 and equation 9 when the optimization is performed with respect to Γ.
The equivalence of equation 8 and equation 11 can be shown analogously, with the difference that we only focus on noise-related terms in equation 18 and equation 19:
\mathcal{L}_{II}(\Lambda) \le \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Lambda\big] + \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - L x^k(t)\big)^\top \Lambda^{-1} \big(y(t) - L x^k(t)\big) + \mathrm{const}
= \mathrm{tr}\big((C_N^k)^{-1}\Lambda\big) + \mathrm{tr}\big(M_N^k\Lambda^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda) + \mathrm{const} \,, \qquad (21)

where C_N^k = \Sigma_y^k and M_N^k = \frac{1}{T}\sum_{t=1}^{T} (y(t) - L x^k(t))(y(t) - L x^k(t))^\top. Constant values in equation 21 do not depend on Λ; hence, they can again be ignored in the optimization procedure. Summarizing, we have shown that optimizing equation 8 is equivalent to optimizing L^conv_noise(Γ^k, Λ) and L^conv_source(Γ, Λ^k), which concludes the proof.
B PROOF OF THEOREM 2
Before presenting the proof, the subsequent definitions and propositions are required: Definition 4 (Geodesic path). Let M be a Riemannian manifold, i.e., a differentiable manifold whose tangent space is endowed with an inner product that defines local Euclidean structures. Then, a geodesic between two points on M, denoted by p_0, p_1 ∈ M, is defined as the shortest connecting path between those two points along the manifold, ζ_l(p_0, p_1) ∈ M for l ∈ [0, 1], where l = 0 and l = 1 define the starting and end points of the path, respectively.
In the current context, ζ_l(p_0, p_1) defines a geodesic curve on the positive definite (P.D.) manifold joining two P.D. matrices, P_0, P_1 > 0. The specific pairs of matrices we will deal with are {C_S^k, M_S^k} and {C_N^k, M_N^k}. Definition 5 (Geodesic on the P.D. manifold). Geodesics on the manifold of P.D. matrices can be shown to form a cone within the embedding space. We denote this manifold by S_{++}. Assume two P.D. matrices P_0, P_1 ∈ S_{++}. Then, for l ∈ [0, 1], the geodesic curve joining P_0 to P_1 is defined as (Bhatia, 2009, Chapter 6):
\xi_l(P_0, P_1) = P_0^{\frac{1}{2}} \big(P_0^{-\frac{1}{2}} P_1 P_0^{-\frac{1}{2}}\big)^{l} P_0^{\frac{1}{2}}, \quad l \in [0, 1] \,. \qquad (22)
Note that P_0 and P_1 are obtained as the starting and end points of the geodesic path by choosing l = 0 and l = 1, respectively. The midpoint of the geodesic, obtained by setting l = 1/2, is called the geometric mean. Note that, according to Definition 5, the following equality holds:

\xi_l(\Gamma_0, \Gamma_1)^{-1} = \Big(\Gamma_0^{1/2}\big(\Gamma_0^{-1/2}\Gamma_1\Gamma_0^{-1/2}\big)^{l}\Gamma_0^{1/2}\Big)^{-1} = \Gamma_0^{-1/2}\big(\Gamma_0^{1/2}\Gamma_1^{-1}\Gamma_0^{1/2}\big)^{l}\Gamma_0^{-1/2} = \xi_l(\Gamma_0^{-1}, \Gamma_1^{-1}) \,. \qquad (23)
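As a numerical illustration (not part of the original manuscript), the geodesic of equation 22 and the inversion identity of equation 23 can be checked with SciPy's matrix functions:

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def pd_geodesic(P0, P1, l):
    """xi_l(P0, P1) = P0^{1/2} (P0^{-1/2} P1 P0^{-1/2})^l P0^{1/2}  (eq. 22)."""
    P0_half = np.real(sqrtm(P0))
    P0_half_inv = np.linalg.inv(P0_half)
    inner = fractional_matrix_power(P0_half_inv @ P1 @ P0_half_inv, l)
    return P0_half @ np.real(inner) @ P0_half

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); P0 = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); P1 = B @ B.T + 4 * np.eye(4)

# Eq. 23: inverting the geodesic equals the geodesic between the inverses.
lhs = np.linalg.inv(pd_geodesic(P0, P1, 0.3))
rhs = pd_geodesic(np.linalg.inv(P0), np.linalg.inv(P1), 0.3)
assert np.allclose(lhs, rhs, atol=1e-6)
```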
Definition 6 (Geodesic convexity). Let p_0 and p_1 be two arbitrary points on a subset A of a Riemannian manifold M. Then a real-valued function f with domain A ⊂ M, f : A → R, is called geodesically convex (g-convex) if the following relation holds:

f(\zeta_l(p_0, p_1)) \le l f(p_0) + (1 - l) f(p_1) \,, \qquad (24)

where l ∈ [0, 1] and ζ_l(p_0, p_1) denotes the geodesic path connecting the two points p_0 and p_1 as defined in Definition 4. Thus, in analogy to classical convexity, the function f is g-convex if every geodesic ζ_l(p_0, p_1) of M between p_0, p_1 ∈ A lies in the g-convex set A. Note that the set A ⊂ M is called g-convex if any geodesic joining an arbitrary pair of its points lies completely in A.

Remark 7. Note that g-convexity is a generalization of classical (linear) convexity to non-Euclidean (non-linear) geometry and metric spaces. Therefore, it is straightforward to show that all convex functions in Euclidean geometry are also g-convex, where the geodesics between pairs of points are simply line segments:

\zeta_l(p_0, p_1) = l\, p_0 + (1 - l)\, p_1 \,. \qquad (25)
For the sake of brevity, we omit a detailed theoretical introduction of g-convexity, and only borrow a result from Zadeh et al. (2016); Sra & Hosseini (2015). Interested readers are referred to Wiesel et al. (2015, Chapter 1) for a gentle introduction to this topic, and Papadopoulos (2005, Chapter. 2) Rapcsak (1991); Ben-Tal (1977); Liberti (2004); Pallaschke & Rolewicz (2013); Bonnabel & Sepulchre (2009); Moakher (2005); Sra & Hosseini (2016); Vishnoi (2018) for more in-depth technical details.
Now we are ready to state the proof, which parallels the one provided in Zadeh et al. (2016, Theorem. 3).
Proof. We only show the proof for Lconvsource(Γ,Λk). The proof for Lconvnoise(Γk,Λ) can be presented analogously; and therefore, is omitted here for brevity. We proceed in two steps. First, we limit our attention to P.D. manifolds and express equation 24 in terms of geodesic paths and functions that lie on this particular space. We then show that Lconvsource(Γ,Λk) is strictly g-convex on this specific domain. In the second step, we then derive the updates rules proposed in equation 13 and equation 14.
B.1 PART I: PROVING G-CONVEXITY OF THE MAJORIZING COST FUNCTIONS
We consider geodesics along the P.D. manifold by setting ζ_l(p_0, p_1) to ξ_l(Γ_0, Γ_1) as presented in Definition 5, and define f(·) to be f(Γ) = tr((C_S^k)^{-1}Γ) + tr(M_S^kΓ^{-1}), representing the cost function L^conv_source(Γ, Λ^k) in equation 9. We now show that f(Γ) is strictly g-convex on this specific domain. For continuous functions as considered in this paper, fulfilling equation 24 for f(Γ) and ξ_l(Γ_0, Γ_1) with l = 1/2 is sufficient to prove strict g-convexity:

\mathrm{tr}\big((C_S^k)^{-1}\xi_{1/2}(\Gamma_0,\Gamma_1)\big) + \mathrm{tr}\big(M_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\big)
< \tfrac{1}{2}\mathrm{tr}\big((C_S^k)^{-1}\Gamma_0\big) + \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_0^{-1}\big) + \tfrac{1}{2}\mathrm{tr}\big((C_S^k)^{-1}\Gamma_1\big) + \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_1^{-1}\big) \,. \qquad (26)

Given C_S^k ∈ S_{++}, i.e., C_S^k > 0 (and hence also (C_S^k)^{-1} ∈ S_{++}), and the operator inequality (Bhatia, 2009, Chapter 4)

\xi_{1/2}(\Gamma_0,\Gamma_1) \prec \tfrac{1}{2}\Gamma_0 + \tfrac{1}{2}\Gamma_1 \,, \qquad (27)

we have:

\mathrm{tr}\big((C_S^k)^{-1}\xi_{1/2}(\Gamma_0,\Gamma_1)\big) < \tfrac{1}{2}\mathrm{tr}\big((C_S^k)^{-1}\Gamma_0\big) + \tfrac{1}{2}\mathrm{tr}\big((C_S^k)^{-1}\Gamma_1\big) \,, \qquad (28)

which is derived by multiplying both sides of equation 27 with (C_S^k)^{-1} followed by taking the trace on both sides.

Similarly, we can write the operator inequality for {Γ_0^{-1}, Γ_1^{-1}} using equation 23 as:

\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1} = \xi_{1/2}(\Gamma_0^{-1},\Gamma_1^{-1}) \prec \tfrac{1}{2}\Gamma_0^{-1} + \tfrac{1}{2}\Gamma_1^{-1} \,. \qquad (29)

Multiplying both sides of equation 29 by M_S^k ∈ S_{++} and applying the trace operator on both sides leads to:

\mathrm{tr}\big(M_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\big) < \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_0^{-1}\big) + \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_1^{-1}\big) \,. \qquad (30)

Summing up equation 28 and equation 30 proves equation 26 and concludes the first part of the proof.
B.2 PART II: DETAILED DERIVATION OF THE UPDATE RULES IN EQUATIONS 13 AND 14
We now present the second part of the proof by deriving the update rules in equations 13 and 14. Since the cost function Lconvsource(Γ,Λk) is strictly g-convex, its optimal solution in the k-th iteration
is unique. More concretely, the optimum can be analytically derived by taking the derivative of equation 9 and setting the result to zero as follows:
\nabla \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) = (C_S^k)^{-1} - \Gamma^{-1} M_S^k \Gamma^{-1} = 0 \,, \qquad (31)

which results in

\Gamma (C_S^k)^{-1} \Gamma = M_S^k \,. \qquad (32)

Equation 32 is known as the Riccati equation, and its solution is the geometric mean between C_S^k and M_S^k (Davis et al., 2007; Bonnabel & Sepulchre, 2009):

\Gamma^{k+1} = (C_S^k)^{\frac{1}{2}} \big((C_S^k)^{-\frac{1}{2}} M_S^k (C_S^k)^{-\frac{1}{2}}\big)^{\frac{1}{2}} (C_S^k)^{\frac{1}{2}} \,.

The update rule for the full noise covariance matrix can be derived analogously:

\Lambda^{k+1} = (C_N^k)^{\frac{1}{2}} \big((C_N^k)^{-\frac{1}{2}} M_N^k (C_N^k)^{-\frac{1}{2}}\big)^{\frac{1}{2}} (C_N^k)^{\frac{1}{2}} \,.
Remark 8. Note that the obtained update rules are closed-form solutions for the surrogate cost functions, equations 9 and 11, which stands in contrast to conventional majorization minimization algorithms (see section C in the appendix), which require iterative procedures in each step of the optimization.
Deriving the update rules in equation 13 and equation 14 concludes the second part of the proof of Theorem 2.
C PROOF OF THEOREM 3
In the following, we provide a proof of Theorem 3 by showing that the alternating update rules for Γ and Λ in equation 13 and equation 14 are guaranteed to converge to a local minimum of the Bayesian Type-II likelihood in equation 8. In particular, we will prove that FUN learning is an instance of the general class of majorization-minimization (MM) algorithms, for which this property follows by construction. To this end, we first briefly review theoretical concepts behind the majorization-minimization (MM) algorithmic framework (Hunter & Lange, 2004; Razaviyayn et al., 2013; Jacobson & Fessler, 2007; Wu et al., 2010).
C.1 REQUIRED CONDITIONS FOR MAJORIZATION-MINIMIZATION ALGORITHMS
MM encompasses a family of iterative algorithms for optimizing general non-linear cost functions. The main idea behind MM is to replace the original cost function in each iteration by an upper bound, also known as majorizing function, whose minimum is easy to find. The MM class covers a broad range of common optimization algorithms such as convex-concave procedures (CCCP) and proximal methods (Sun et al., 2017, Section IV), (Mjolsness & Garrett, 1990; Yuille & Rangarajan, 2003; Lipp & Boyd, 2016). Such algorithms have been applied in various domains such as brain source imaging (Hashemi & Haufe, 2018; Bekhti et al., 2018; Cai et al., 2020a; Hashemi et al., 2020), wireless communication systems with massive MIMO technology (Masood et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), and non-negative matrix factorization (Fagot et al., 2019). Interested readers are referred to Sun et al. (2017) for an extensive list of applications on MM.
The problem of minimizing a continuous function f(u) within a closed convex set U ⊂ Rn:
\min_{u} f(u) \quad \text{subject to} \quad u \in \mathcal{U} \,, \qquad (33)
within the MM framework can be summarized as follows. First, construct a continuous surrogate function g(u|u^k) that majorizes, or upper-bounds, the original function f(u) and coincides with f(u) at a given point u^k:
[A1] g(u^k|u^k) = f(u^k) \quad \forall\, u^k \in \mathcal{U}
[A2] g(u|u^k) \ge f(u) \quad \forall\, u, u^k \in \mathcal{U} \,.
Second, starting from an initial value u^0, generate a sequence of feasible points u^1, u^2, . . . , u^k, u^{k+1} as solutions of a series of successive simple optimization problems, where
[A3] u^{k+1} := \arg\min_{u \in \mathcal{U}} g(u|u^k) \,.
If a surrogate function fulfills conditions [A1]–[A3], then the value of the cost function f decreases in each iteration: f(u^{k+1}) ≤ f(u^k). For the smooth functions considered in this paper, we further require that the derivatives of the original and surrogate functions coincide at u^k:
[A4] \nabla g(u^k|u^k) = \nabla f(u^k) \quad \forall\, u^k \in \mathcal{U} \,.
We can then formulate the following theorem:
Theorem 9. Assume that an MM algorithm fulfills conditions [A1]–[A4]. Then, every limit point of the sequence of minimizers generated in [A3], is a stationary point of the original optimization problem in equation 33.
Proof. A detailed proof is provided in Razaviyayn et al. (2013, Theorem 1).
C.2 DETAILED DERIVATION OF THE PROOF OF THEOREM 3
We now show that FUN learning is an instance of majorization-minimization as defined above, which fulfills Theorem 9.
Proof. We need to prove that conditions [A1]–[A4] are fulfilled for FUN learning. To this end, we recall the upper bound on log|Σ_y| in equation 17, which fulfills condition [A2] since it majorizes log|Σ_y| as a result of the concavity of the log-determinant function and its first-order Taylor expansion around Σ_y^k. Besides, it automatically satisfies conditions [A1] and [A4] by construction, because the majorizing function in equation 17 is obtained through a Taylor expansion around Σ_y^k. Concretely, [A1] is satisfied because the equality in equation 17 holds for Σ_y = Σ_y^k. Similarly, [A4] is satisfied because the gradient of log|Σ_y| at the point Σ_y^k, namely (Σ_y^k)^{-1}, defines the linear Taylor approximation \log|\Sigma_y^k| + \mathrm{tr}[(\Sigma_y^k)^{-1}(\Sigma_y - \Sigma_y^k)]. Thus, both gradients coincide at Σ_y^k by construction.
Now, we prove that [A3] can be satisfied by showing that Lconvsource(Γ,Λk) reaches its global minimum in each MM iteration. This is guaranteed if Lconvsource(Γ,Λk) can be shown to be convex or g-convex with respect to Γ. To this end, we first require the subsequent proposition:
Proposition 10. Any local minimum of a g-convex function over a g-convex set is a global minimum.
Proof. A detailed proof is presented in Rapcsak (1991, Theorem 2.1).
Given the proof presented in Appendix B.1, we can conclude that equation 20 is g-convex; hence, any local minimum of L^conv_source(Γ, Λ^k) is a global minimum according to Proposition 10. This proves that condition [A3] is fulfilled and completes the proof that the optimization of equation 8 with respect to Γ using the convex surrogate cost function in equation 9 leads to an MM algorithm. For the sake of brevity, we omit the proof for the optimization with respect to Λ based on the convex surrogate function in equation 11, L^conv_noise(Γ^k, Λ), as it can be presented analogously.
D DERIVATION OF CHAMPAGNE AS A SPECIAL CASE OF FUN LEARNING
We start the derivation of update rule equation 15 by constraining Γ to the set of diagonal matrices W: Γ = diag(γ), where γ = [γ1, . . . , γN ]>. We continue by rewriting the constrained optimization with respect to the source covariance matrix,
\Gamma^{k+1} = \arg\min_{\Gamma \in \mathcal{W},\, \Lambda = \Lambda^k} \mathrm{tr}\big((C_S^k)^{-1}\Gamma\big) + \mathrm{tr}\big(M_S^k\Gamma^{-1}\big) \,, \qquad (34)

as follows:

\gamma^{k+1} = \arg\min_{\gamma,\, \Lambda = \Lambda^k} \underbrace{\mathrm{diag}\big[(C_S^k)^{-1}\big]^\top \gamma + \mathrm{diag}\big[M_S^k\big]^\top \gamma^{-1}}_{\mathcal{L}^{\mathrm{diag}}_{\mathrm{source}}(\gamma|\gamma^k)} \,, \qquad (35)

where γ^{-1} = [γ_1^{-1}, . . . , γ_N^{-1}]^⊤ is defined as the element-wise inversion of γ. The optimization with respect to the scalar source variances is then carried out by taking the derivative of equation 35 with respect to γ_n, for n = 1, . . . , N, and setting it to zero,

\frac{\partial}{\partial \gamma_n}\Big(\big[(C_S^k)^{-1}\big]_{n,n}\gamma_n + \big[M_S^k\big]_{n,n}\gamma_n^{-1}\Big) = \big[(C_S^k)^{-1}\big]_{n,n} - \frac{1}{\gamma_n^2}\big[M_S^k\big]_{n,n} = 0 \quad \text{for } n = 1, \ldots, N ,

where L_n denotes the n-th column of the lead field matrix. This yields the following update rule

\gamma_n^{k+1} \leftarrow \sqrt{\frac{\big[M_S^k\big]_{n,n}}{\big[(C_S^k)^{-1}\big]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\big(x_n^k(t)\big)^2}{L_n^\top\big(\Sigma_y^k\big)^{-1}L_n}} \quad \text{for } n = 1, \ldots, N ,
which is identical to the update rule of Champagne (Wipf & Nagarajan, 2009).
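For illustration, a minimal NumPy sketch of this diagonal source-variance update is given below. The variable names (lead field `L`, current model covariance `Sigma_y`, posterior source estimates `X_post`) are placeholders chosen here, not identifiers from the authors' code; the snippet only mirrors the scalar update rule derived above.

```python
import numpy as np

def champagne_gamma_update(L, Sigma_y, X_post):
    """One sweep of the diagonal source-variance update (Champagne-style rule).

    L       : (M, N) lead field matrix
    Sigma_y : (M, M) current model covariance Lambda + L diag(gamma) L^T
    X_post  : (N, T) current posterior mean source estimates x^k(t)
    """
    T = X_post.shape[1]
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    # numerator: mean power of the posterior source estimates, per source
    num = np.sum(X_post ** 2, axis=1) / T
    # denominator: L_n^T Sigma_y^{-1} L_n for every column L_n of the lead field
    den = np.einsum('mn,mk,kn->n', L, Sigma_y_inv, L)
    return np.sqrt(num / den)
```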
E DERIVATION OF CHAMPAGNE WITH HETEROSCEDASTIC NOISE LEARNING
AS A SPECIAL CASE OF FUN LEARNING
Similar to Appendix D, we start by constraining Λ to the set of diagonal matrices W: Λ = diag(λ), where λ = [λ1, . . . , λM]^⊤. We continue by reformulating the constrained optimization with respect to the noise covariance matrix,
$$\Lambda^{k+1} = \underset{\Lambda\in\mathcal{W},\;\Gamma=\Gamma^k}{\arg\min}\;\; \mathrm{tr}\!\left((\mathbf{C}_N^k)^{-1}\Lambda\right) + \mathrm{tr}\!\left(\mathbf{M}_N^k\,\Lambda^{-1}\right), \qquad (36)$$

as follows:

$$\boldsymbol{\lambda}^{k+1} = \underset{\boldsymbol{\lambda},\;\Gamma=\Gamma^k}{\arg\min}\;\; \underbrace{\operatorname{diag}\!\left[(\mathbf{C}_N^k)^{-1}\right]\boldsymbol{\lambda} + \operatorname{diag}\!\left[\mathbf{M}_N^k\right]\boldsymbol{\lambda}^{-1}}_{\mathcal{L}^{\mathrm{diag}}_{\mathrm{noise}}(\boldsymbol{\lambda}\,|\,\boldsymbol{\lambda}^k)}\,, \qquad (37)$$
where $\boldsymbol{\lambda}^{-1} = [\lambda_1^{-1}, \ldots, \lambda_M^{-1}]^\top$ is defined as the element-wise inversion of λ. The optimization with respect to the scalar noise variances then proceeds by taking the derivative of equation 37 with respect to λm, for m = 1, . . . , M, and setting it to zero,
$$\frac{\partial}{\partial \lambda_m}\left(\left[(\mathbf{C}_N^k)^{-1}\right]_{m,m}\lambda_m + \left[\mathbf{M}_N^k\right]_{m,m}\lambda_m^{-1}\right) = \left[(\mathbf{C}_N^k)^{-1}\right]_{m,m} - \frac{1}{(\lambda_m)^2}\left[\mathbf{M}_N^k\right]_{m,m} = 0 \quad \text{for } m = 1,\ldots,M.$$

This yields the following update rule:

$$\lambda_m^{k+1} \leftarrow \sqrt{\frac{\left[\mathbf{M}_N^k\right]_{m,m}}{\left[(\mathbf{C}_N^k)^{-1}\right]_{m,m}}} = \sqrt{\frac{\left[\frac{1}{T}\sum_{t=1}^{T}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)^\top\right]_{m,m}}{\left[\left(\boldsymbol{\Sigma}_y^k\right)^{-1}\right]_{m,m}}} \quad \text{for } m = 1,\ldots,M, \qquad (38)$$
which is identical to the update rule of the Champagne with heteroscedastic noise learning as presented in Cai et al. (2020a).
Figure 4: Accuracy of the noise covariance matrix reconstruction incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic (green) and full-structure (blue) noise covariances. The ground-truth noise covariance matrix is either full-structure (upper row) or heteroscedastic diagonal (lower row). Performance is assessed in terms of the Pearson correlation between the entries of the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim (left column). Shown is the similarity error 1 − Λsim. Further, the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂−Λ||2F /||Λ||2F is reported (right column).
F PSEUDO-EEG SIGNAL GENERATION
Our simulation setting is an adaptation of the EEG inverse problem, where brain activity is to be reconstructed from simulated pseudo-EEG data (Haufe & Ewald, 2016).
Forward Modeling: Populations of pyramidal neurons in the cortical gray matter are known to be the main drivers of the EEG signal (Hämäläinen et al., 1993; Baillet et al., 2001). Here, we use a realistic volume conductor model of the human head to model the linear relationship between primary electrical source currents generated within these populations and the resulting scalp surface potentials captured by EEG electrodes. The lead field matrix, L ∈ R58×2004, was generated using the New York Head model (Huang et al., 2016) taking into account the realistic anatomy and electrical tissue conductivities of an average human head. In this model, 2004 dipolar current sources were placed evenly on the cortical surface and 58 sensors were considered. The lead field matrix, L ∈ R58×2004 was computed using the finite element method. Note that the orientation of all source currents was fixed to be perpendicular to the cortical surface, so that only scalar source amplitudes needed to be estimated.
Evaluation Metrics: Source reconstruction performance was evaluated according to the following metrics. First, the earth mover’s distance (EMD) (Rubner et al., 2000; Haufe et al., 2008) was used to quantify the spatial localization accuracy. The EMD measures the cost needed to transform two probability distributions defined on the same metric domain (in this case, distributions of the true and estimated sources defined in 3D Euclidean brain space) into each other. EMD scores were normalized to [0, 1]. Second, the error in the reconstruction of the source time courses was measured. To this end, Pearson correlation between all pairs of simulated and reconstructed (i.e., those with non-zero activations) source time courses was assessed as the mean of the absolute correlations obtained for each source, after optimally matching simulated and reconstructed sources based on maximal absolute correlation. We also report another metric for evaluating the localization error as the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source. For assessing the recovery of the true support, we also compute F1-measure scores (Chinchor & Sundheim, 1993; van Rijsbergen, 1979): F1 = 2·TP/(P + TP + FP), where P denotes the number of true active sources, while TP and FP are the numbers of true and false positive predictions. Note that perfect support recovery, i.e., F1 = 1, is only achieved when there is a perfect correspondence between ground-truth and estimated support.
To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as: NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Similarity error was then defined as one minus the Pearson correlation: 1 − Λsim. Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices.
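A short illustrative sketch of these two noise-covariance metrics (matrix names are placeholders, not part of the paper's code):

```python
import numpy as np

def noise_cov_metrics(Lambda_true, Lambda_hat):
    """Similarity error 1 - corr(Lambda, Lambda_hat) and NMSE, as defined above."""
    nmse = np.linalg.norm(Lambda_hat - Lambda_true, 'fro') ** 2 \
           / np.linalg.norm(Lambda_true, 'fro') ** 2
    corr = np.corrcoef(Lambda_true.ravel(), Lambda_hat.ravel())[0, 1]
    return 1.0 - corr, nmse
```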
Evaluating the accuracy of the noise covariance matrix estimation: Figure 4 depicts the accuracy with which the noise covariance matrix is reconstructed by three different noise learning approaches assuming noise with homoscedastic (red), heteroscedastic (green) and full (blue) structure. The ground-truth noise covariance matrix either had full (upper row) or heteroscedastic (lower row) structure. Performance was measured in terms of similarity error and NMSE. Similar to the trend observed in Figure 2, full-structure noise learning leads to better noise covariance estimation accuracy (lower NMSE and similarity error) for the full-structure noise model, while superior reconstruction performance is achieved by heteroscedastic noise learning when the true noise covariance is heteroscedastic.
G FURTHER DETAILS ON AUDITORY EVOKED FIELDS (AEF) DATASET
The MEG data used in this article were acquired in the Biomagnetic Imaging Laboratory at the University of California San Francisco (UCSF) with a CTF Omega 2000 whole-head MEG system from VSM MedTech (Coquitlam, BC, Canada) with 1200 Hz sampling rate. The lead field for each subject was calculated with NUTMEG (Dalal et al., 2004) using a single-sphere head model (two spherical orientation lead fields) and an 8 mm voxel grid. Each column was normalized to have a norm of unity. The neural responses of one subject to an Auditory Evoked Fields (AEF) stimulus were localized. The AEF response was elicited with single 600 ms duration tones (1 kHz) presented binaurally. 120 trials were collected for AEF dataset. The data were first digitally filtered from 1 to 70 Hz to remove artifacts and DC offset, time-aligned to the stimulus, and then averaged across the following number of trials:{1,2,12, 63,120}. The pre-stimulus window was selected to be 100 ms to 5 ms and the post-stimulus time window was selected to be 60 ms to 180 ms, where 0 ms is the onset of the tone (Wipf et al., 2010; Dalal et al., 2011; Owen et al., 2012; Cai et al., 2019a). | 1. What is the main contribution of the paper, and how does it relate to previous works in the field?
2. What are the strengths and weaknesses of the proposed optimization method for estimating the full noise covariance?
3. How does the reviewer assess the novelty and contribution of the paper, particularly in comparison to other papers in the field?
4. What are the limitations of the paper's approach, especially regarding its applicability to real-world data?
5. Are there any concerns about the paper's experimental design or methodology? | Review | Review
The paper proposes an efficient optimization method for estimating the full noise covariance in a hierarchical Bayesian framework. The experiments show that the optimization method can recover the true noise covariance in a simulated example and that estimating the full covariance performs better than homo- and heteroscedastic covariance models.
I think the proposed method is an effective tool to estimate the full noise covariance especially for the problem setting in this paper. But the overall novelty and contribution are not strong enough for the ICLR community.
Papers in the fMRI literature [Michael Shvartsman et al. 2017, Anqi Wu et al. 2019] have proposed working with full noise covariances in more complicated models such as factor analysis and Gaussian process regression. The basic model in this paper is a bit too simple compared with these models, preventing it from making significant methodological contributions. It might fit a specialized signal processing or brain source imaging publication better.
Also, in many applications (especially with brain data), it has been shown that a full-rank noise covariance is not always preferable, since there are usually correlations among measurements that confine the noise to a lower-dimensional subspace. So I'm not quite sure whether a full covariance without any structural or subspace assumption would really outperform a low-rank full covariance when applied to real data.
Another issue in this paper is that there is no real-data application. I'm not very convinced that simulated data generated from a realistic lead field matrix should be considered real-world data.
ICLR | Title
Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models
Abstract
We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where both hyperparameters need to be estimated jointly from data. This problem has application in many domains in imaging, including biomagnetic inverse problems. Crucial factors influencing accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of noise covariance matrices with full structure. Here, we consider the reconstruction of brain activity from electroencephalography (EEG). This inverse problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from classical sparse Bayesian learning (SBL) models where across-sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using Riemannian geometry, we derive an efficient algorithm for updating both source and noise covariance along the manifold of positive definite matrices. Using the majorization-minimization framework, we demonstrate that our algorithm has guaranteed and fast convergence. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured.
1 INTRODUCTION
Having precise knowledge of the noise distribution is a fundamental requirement for obtaining accurate solutions in many regression problems (Bungert et al., 2020). In many applications however, it is impossible to separately estimate this noise distribution, as distinct ”noise-only” (baseline) measurements are not feasible. An alternative, therefore, is to design estimators that jointly optimize over the regression coefficients as well as over parameters of the noise distribution. This has been pursued both in a (penalized) maximum-likelihood settings (here referred to as Type-I approaches) (Petersen & Jung, 2020; Bertrand et al., 2019; Massias et al., 2018) as well as in hierarchical Bayesian settings (referred to as Type-II) (Wipf & Rao, 2007; Zhang & Rao, 2011; Hashemi et al., 2020; Cai et al., 2020a). Most contributions in the literature are, however, limited to the estimation of only a diagonal noise covariance (i.e., independent between different measurements) (Daye et al., 2012; Van de Geer et al., 2013; Dalalyan et al., 2013; Lederer & Muller, 2015). Considering a diagonal noise covariance is a limiting assumption in practice as the noise interference in many realistic scenarios are highly correlated across measurements; and thus, have non-trivial off-diagonal elements.
This paper develops an efficient optimization algorithm for jointly estimating the posterior of regression parameters as well as the noise distribution. More specifically, we consider linear regression with Gaussian scale mixture priors on the parameters and a full-structure multivariate Gaussian noise. We cast the problem as a hierarchical Bayesian (type-II maximum-likelihood) regression problem, in which the variance hyperparameters and the noise covariance matrix are optimized by maximizing the Bayesian evidence of the model. Using Riemannian geometry, we derive an efficient algorithm for jointly estimating the source and noise covariances along the manifold of positive definite (P.D.) matrices.
To highlight the benefits of our proposed method in practical scenarios, we consider the problem of electromagnetic brain source imaging (BSI). The goal of BSI is to reconstruct brain activity
from magneto- or electroencephalography (M/EEG), which can be formulated as a sparse Bayesian learning (SBL) problem. Specifically, it can be cast as a linear Bayesian regression model with independent Gaussian scale mixture priors on the parameters and noise. As a departure from the classical SBL approaches, here we specifically consider Gaussian noise with full covariance structure. Prominent sources of correlated noise in this context are, for example, eye blinks, heartbeats, muscular artifacts and line noise. Other realistic examples for the need for such full-structure noise can be found in the areas of array processing (Li & Nehorai, 2010) or direction of arrival (DOA) estimation (Chen et al., 2008). Algorithms that can accurately estimate noise with full covariance structure are expected to achieve more accurate regression models and predictions in this setting.
2 TYPE-II BAYESIAN REGRESSION
We consider the linear model Y = LX + E, in which a forward or design matrix, L ∈ RM×N , is mapped to the measurements, Y, by a set of coefficients or source components, X. Depending on the setting, the problem of estimating X given L and Y is called an inverse problem in physics, a multitask regression problem in machine learning, or a multiple measurement vector (MMV) recovery problem in signal processing (Cotter et al., 2005). Adopting a signal processing terminology, the measurement matrix Y ∈ RM×T captures the activity of M sensors at T time instants, y(t) ∈ RM×1, t = 1, . . . , T , while the source matrix, X ∈ RN×T , consists of the unknown activity of N sources at the same time instants, x(t) ∈ RN×1, t = 1, . . . , T . The matrix E = [e(1), . . . , e(T )] ∈ RM×T represents T time instances of zero-mean Gaussian noise with full covariance Λ, e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T , which is assumed to be independent of the source activations. In this paper, we focus on M/EEG based brain source imaging (BSI) but the proposed algorithm can be used in general regression settings, in particular for sparse signal recovery (Candès et al., 2006; Donoho, 2006) with a wide range of applications (Malioutov et al., 2005). The goal of BSI is to infer the underlying brain activity X from the EEG/MEG measurement Y given a known forward operator, called lead field matrix L. As the number of sensors is typically much smaller than the number of locations of potential brain sources, this inverse problem is highly ill-posed. This problem is addressed by imposing prior distributions on the model parameters and adopting a Bayesian treatment. This can be performed either through Maximum-a-Posteriori (MAP) estimation (Type-I Bayesian learning) (Pascual-Marqui et al., 1994; Gorodnitsky et al., 1995; Haufe et al., 2008; Gramfort et al., 2012; Castaño-Candamil et al., 2015) or, when the model has unknown hyperparameters, through Type-II Maximum-Likelihood estimation (Type-II Bayesian learning) (Mika et al., 2000; Tipping, 2001; Wipf & Nagarajan, 2009; Seeger & Wipf, 2010; Wu et al., 2016).
In this paper, we focus on Type-II Bayesian learning, which assumes a family of prior distributions p(X|Θ) parameterized by a set of hyperparameters Θ. These hyper-parameters can be learned from the data along with the model parameters using a hierarchical Bayesian approach (Tipping, 2001; Wipf & Rao, 2004) through the maximum-likelihood principle:
$$\Theta_{\mathrm{II}} := \arg\max_{\Theta}\, p(\mathbf{Y}|\Theta) = \arg\max_{\Theta} \int p(\mathbf{Y}|\mathbf{X},\Theta)\, p(\mathbf{X}|\Theta)\, d\mathbf{X}\,. \qquad (1)$$
Here we assume a zero-mean Gaussian prior with full covariance Γ for the underlying source distribution, x(t) ∈ RN×1 ∼ N (0,Γ), t = 1, . . . , T . Just as most other approaches, Type-II Bayesian learning makes the simplifying assumption of statistical independence between time samples. This leads to the following expression for the distribution of the sources and measurements:
$$p(\mathbf{X}|\Gamma) = \prod_{t=1}^{T} p(\mathbf{x}(t)|\Gamma) = \prod_{t=1}^{T} \mathcal{N}(\mathbf{0},\Gamma) \qquad (2)$$
$$p(\mathbf{Y}|\mathbf{X}) = \prod_{t=1}^{T} p(\mathbf{y}(t)|\mathbf{x}(t)) = \prod_{t=1}^{T} \mathcal{N}(\mathbf{L}\mathbf{x}(t),\Lambda)\,. \qquad (3)$$
The parameters of the Type-II model, Θ, are the unknown source and noise covariances, i.e., Θ = {Γ,Λ}. The unknown parameters Γ and Λ are optimized based on the current estimates of the source and noise covariances in an alternating iterative process. Given initial estimates of Γ and Λ,
the posterior distribution of the sources is a Gaussian of the form (Sekihara & Nagarajan, 2015)
$$p(\mathbf{X}|\mathbf{Y},\Gamma) = \prod_{t=1}^{T} \mathcal{N}(\boldsymbol{\mu}_{\mathbf{x}}(t),\boldsymbol{\Sigma}_{\mathbf{x}})\,, \quad \text{where} \qquad (4)$$
$$\boldsymbol{\mu}_{\mathbf{x}}(t) = \Gamma\mathbf{L}^\top(\boldsymbol{\Sigma}_{\mathbf{y}})^{-1}\mathbf{y}(t) \qquad (5)$$
$$\boldsymbol{\Sigma}_{\mathbf{x}} = \Gamma - \Gamma\mathbf{L}^\top(\boldsymbol{\Sigma}_{\mathbf{y}})^{-1}\mathbf{L}\Gamma \qquad (6)$$
$$\boldsymbol{\Sigma}_{\mathbf{y}} = \Lambda + \mathbf{L}\Gamma\mathbf{L}^\top\,. \qquad (7)$$

The estimated posterior parameters µx(t) and Σx are then in turn used to update Γ and Λ as the minimizers of the negative log of the marginal likelihood p(Y|Γ,Λ), which is given by (Wipf et al., 2010):

$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(\mathbf{Y}|\Gamma,\Lambda) = \log|\boldsymbol{\Sigma}_{\mathbf{y}}| + \frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)^\top\boldsymbol{\Sigma}_{\mathbf{y}}^{-1}\mathbf{y}(t)$$
$$= \log|\Lambda + \mathbf{L}\Gamma\mathbf{L}^\top| + \frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)^\top\left(\Lambda + \mathbf{L}\Gamma\mathbf{L}^\top\right)^{-1}\mathbf{y}(t)\,, \qquad (8)$$
where | · | denotes the determinant of a matrix. This process is repeated until convergence. Given the final solution of the hyperparameters ΘII = {ΓII,ΛII}, the posterior source distribution is obtained by plugging these estimates into equations 3 to 6.
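For concreteness, the quantities in equations 5–8 can be computed in a few lines; the following NumPy sketch only illustrates the formulas above (variable names are placeholders, not the authors' implementation).

```python
import numpy as np

def posterior_and_loss(Y, L, Gamma, Lambda):
    """Y: (M, T) data, L: (M, N) lead field, Gamma: (N, N), Lambda: (M, M)."""
    M, T = Y.shape
    Sigma_y = Lambda + L @ Gamma @ L.T                       # equation 7
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    Mu_x = Gamma @ L.T @ Sigma_y_inv @ Y                     # equation 5, all t at once
    Sigma_x = Gamma - Gamma @ L.T @ Sigma_y_inv @ L @ Gamma  # equation 6
    # negative log marginal likelihood, equation 8 (up to constants)
    _, logdet = np.linalg.slogdet(Sigma_y)
    data_term = np.trace(Y.T @ Sigma_y_inv @ Y) / T
    return Mu_x, Sigma_x, logdet + data_term
```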
3 PROPOSED METHOD: FULL-STRUCTURE NOISE (FUN) LEARNING
Here we propose a novel and efficient algorithm, full-structure noise (FUN) learning, which is able to learn the full covariance structure of the noise jointly within the Bayesian Type-II regression framework. We first formulate the algorithm in its most general form, in which both the noise distribution and the prior have full covariance structure. Later, we make the simplifying assumption of independent source priors, leading to the pruning of the majority of sources. This effect, which has also been referred to as automatic relevance determination (ARD) or sparse Bayesian learning (SBL) is beneficial in our application of interest, namely the reconstruction of parsimonious sets of brain sources underlying experimental EEG measurements.
Note that the Type-II cost function in equation 8 is non-convex and thus non-trivial to optimize. A number of iterative algorithms such as majorization-minimization (MM) (Sun et al., 2017) have been proposed to address this challenge. Following the MM scheme, we first construct convex surrogate functions that majorize LII(Γ,Λ) in each iteration of the optimization algorithm. Then, we show the minimization equivalence between the constructed majorizing functions and equation 8. This result is presented in the following theorem: Theorem 1. Let Λk and Σky be fixed values obtained in the k-th iteration of the optimization algorithm minimizing LII(Γ,Λ). Then, optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ), with respect to Γ is equivalent to optimizing the following convex function, which majorizes equation 8:
$$\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) = \mathrm{tr}\!\left((\mathbf{C}_S^k)^{-1}\Gamma\right) + \mathrm{tr}\!\left(\mathbf{M}_S^k\,\Gamma^{-1}\right), \qquad (9)$$
where C^k_S and M^k_S are defined as:
$$\mathbf{C}_S^k := \left(\mathbf{L}^\top(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\mathbf{L}\right)^{-1}, \qquad \mathbf{M}_S^k := \frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^k(t)\,\mathbf{x}^k(t)^\top\,. \qquad (10)$$
Similarly, optimizing LII(Γ,Λ) with respect to Λ is equivalent to optimizing the following convex majorizing function:
$$\mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda) = \mathrm{tr}\!\left((\mathbf{C}_N^k)^{-1}\Lambda\right) + \mathrm{tr}\!\left(\mathbf{M}_N^k\,\Lambda^{-1}\right), \qquad (11)$$
where C^k_N and M^k_N are defined as:
$$\mathbf{C}_N^k := \boldsymbol{\Sigma}_{\mathbf{y}}^k, \qquad \mathbf{M}_N^k := \frac{1}{T}\sum_{t=1}^{T}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)^\top\,. \qquad (12)$$
Proof. The proof is presented in Appendix A.
We continue by considering the optimization of the cost functionsLconvsource(Γ,Λk) andLconvnoise(Γk,Λ) with respect to Γ and Λ, respectively. Note that in case of source covariances with full structure, the solution of Lconvsource(Γ,Λk) with respect to Γ lies in the (N2 − N)/2 Riemannian manifold of positive definite (P.D.) matrices. This consideration enables us to invoke efficient methods from Riemannian geometry (see Petersen et al., 2006; Berger, 2012; Jost & Jost, 2008), which ensures that the solution at each step of the optimization is contained within the lower-dimensional solution space. Specifically, in order to optimize for the source covariance, the algorithm calculates the geometric mean between the previously obtained statistical model source covariance, CkS, and the source-space sample covariance matrix, MkS, in each iteration. Analogously, to update the noise covariance estimate, the algorithm calculates the geometric mean between the model noise covariance, CkN, and the empirical sensor-space residuals, M k N. The update rules obtained from this algorithm are presented in the following theorem:
Theorem 2. The cost functions Lconvsource(Γ,Λk) and Lconvnoise(Γk,Λ) are both strictly geodesically convex with respect to the P.D. manifold, and their optimal solution with respect to Γ and Λ, respectively, can be attained according to the two following update rules:
$$\Gamma^{k+1} \leftarrow (\mathbf{C}_S^k)^{\frac{1}{2}}\left((\mathbf{C}_S^k)^{-\frac{1}{2}}\mathbf{M}_S^k(\mathbf{C}_S^k)^{-\frac{1}{2}}\right)^{\frac{1}{2}}(\mathbf{C}_S^k)^{\frac{1}{2}}, \qquad (13)$$
$$\Lambda^{k+1} \leftarrow (\mathbf{C}_N^k)^{\frac{1}{2}}\left((\mathbf{C}_N^k)^{-\frac{1}{2}}\mathbf{M}_N^k(\mathbf{C}_N^k)^{-\frac{1}{2}}\right)^{\frac{1}{2}}(\mathbf{C}_N^k)^{\frac{1}{2}}\,. \qquad (14)$$
Proof. A detailed proof can be found in Appendix B.
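The geometric-mean updates in equations 13 and 14 can be evaluated directly via matrix square roots. A minimal sketch, assuming SciPy's `sqrtm` and placeholder matrix names:

```python
import numpy as np
from scipy.linalg import sqrtm

def spd_geometric_mean(C, M):
    """C^{1/2} (C^{-1/2} M C^{-1/2})^{1/2} C^{1/2} for SPD C, M (equations 13/14)."""
    C_half = sqrtm(C)
    C_half_inv = np.linalg.inv(C_half)
    G = C_half @ sqrtm(C_half_inv @ M @ C_half_inv) @ C_half
    # symmetrize and drop spurious imaginary parts introduced by sqrtm
    return np.real((G + G.T) / 2)
```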
Convergence of the resulting algorithm is shown in the following theorem.
Theorem 3. Optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ), with the alternating update rules for Γ and Λ in equation 13 and equation 14 leads to an MM algorithm with guaranteed convergence.
Proof. A detailed proof can be found in Appendix C.
While Theorems 1–3 reflect a general joint learning algorithm, the assumption of sources with full covariance structure is often relaxed in practice. The next section will shed light on this important simplification by making a formal connection to SBL algorithms.
3.1 SPARSE BAYESIAN LEARNING WITH FULL NOISE MODELING
In brain source imaging, the assumption of full source covariance is often relaxed. Even if, technically, most parts of the brain are active at all times, and the concurrent activations of different brain regions can never be assumed to be fully uncorrelated, there are many experimental settings in which it is reasonable to assume only a small set of independent brain sources. Such sparse solutions are physiologically plausible in task-based analyses, where only a fraction of the brain’s macroscopic structures is expected to be consistently engaged. A common strategy in this case is to model independent sources through a diagonal covariance matrix. In the Type-II Bayesian learning framework, this simplification interestingly leads to sparsity of the resulting source distributions, as, at the optimum, many of the estimated source variances are zero. This mechanism is known as sparse Bayesian learning and is closely related to the more general concept of automatic relevance determination. Here, we adopt the SBL assumption for the sources, leading to Γ-updates previously described in the BSI literature under the name Champagne (Wipf & Nagarajan, 2009). As a novelty and main focus of this paper, we here equip the SBL framework with the capability to jointly learn full noise covariances through the geometric mean based update rule in equation 14. In the SBL framework, the N modeled brain sources are assumed to follow independent univariate Gaussian distributions with zero mean and distinct unknown variances γn: xn(t) ∼ N (0, γn), n = 1, . . . , N . In the SBL solution, the majority of variances is zero, thus effectively inducing spatial sparsity of the corresponding source activities. For FUN learning, we also impose a diagonal structure on the source covariance matrix, Γ = diag(γ), where γ = [γ1, . . . , γN ]>. By constraining Γ in equation 9
Algorithm 1: Full-structure noise (FUN) learning
Input: the lead field matrix L ∈ R^{M×N} and the measurement vectors y(t) ∈ R^{M×1}, t = 1, . . . , T.
Result: the estimated prior source variances [γ1, . . . , γN]^⊤, the noise covariance Λ, and the posterior mean µx(t) and covariance Σx of the sources.
1. Set a random initial value for Λ as well as γ = [γ1, . . . , γN]^⊤, and construct Γ = diag(γ).
2. Calculate the statistical covariance Σy = Λ + LΓL^⊤.
Repeat:
  3. Calculate the posterior mean as µx(t) = ΓL^⊤(Σy)^{-1} y(t).
  4. Calculate C^k_S and M^k_S based on equation 10, and update γn for n = 1, . . . , N based on equation 15.
  5. Calculate C^k_N and M^k_N based on equation 12, and update Λ based on equation 14.
Until the stopping condition is satisfied.
6. Calculate the posterior covariance as Σx = Γ − ΓL^⊤(Σy)^{-1}LΓ.
to the set of diagonal matrices, W , we can show that the update rule equation 13 for the source variances simplifies to the following form:
$$\gamma_n^{k+1} \leftarrow \sqrt{\frac{\left[\mathbf{M}_S^k\right]_{n,n}}{\left[(\mathbf{C}_S^k)^{-1}\right]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\left(x_n^k(t)\right)^2}{\mathbf{L}_n^\top\left(\boldsymbol{\Sigma}_{\mathbf{y}}^k\right)^{-1}\mathbf{L}_n}} \quad \text{for } n = 1,\ldots,N, \qquad (15)$$
where Ln denotes the n-th column of the lead field matrix. Interestingly, equation 15 is identical to the update rule of the Champagne algorithm. A detailed derivation of equation 15 can be found in Appendix D.
Summarizing, the FUN learning approach, just like Champagne and other SBL algorithms, assumes independent Gaussian sources with individual variances (thus, diagonal source covariances), which are updated through equation equation 15. Departing from the classical SBL setting, which assumes the noise distribution to be known, FUN models noise with full covariance structure, which is updated using equation 14. Algorithm 1 summarizes the used update rules.
Note that various recent Type-II noise learning schemes for diagonal noise covariance matrices (Hashemi et al., 2020; Cai et al., 2020a) that are rooted in the concept of SBL can be also derived as special cases of FUN learning assuming diagonal source and noise covariances, i.e., Γ,Λ ∈ W . Specifically imposing diagonal structure on the noise covariance matrix for the FUN algorithm, Λ, results in identical noise variance update rules as derived in Cai et al. (2020a) for heteroscedastic, and in Hashemi et al. (2020) for homoscedastic noise. We explicitly demonstrate this connection in Appendix E. Here, we note that heteroscedasticity refers to the common phenomenon that measurements are contaminated with non-uniform noise levels across channels, while homoscedasticity only accounts for uniform noise levels.
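To make the overall procedure concrete, the sketch below strings together the posterior mean (equation 5), the diagonal source-variance update (equation 15) and the full-structure noise update (equation 14) in the spirit of Algorithm 1. It is an illustrative re-implementation under the stated model, not the authors' released MATLAB code; the initialization, convergence criterion and all variable names are simplifying assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def fun_learning(Y, L, n_iter=200, tol=1e-8):
    """Y: (M, T) measurements, L: (M, N) lead field. Returns gamma, Lambda, Mu_x."""
    M, T = Y.shape
    N = L.shape[1]
    gamma = np.ones(N)
    Lambda = np.eye(M)
    Mu_x = np.zeros((N, T))
    for _ in range(n_iter):
        Sigma_y = Lambda + (L * gamma) @ L.T                 # Lambda + L diag(gamma) L^T
        Sigma_y_inv = np.linalg.inv(Sigma_y)
        Mu_x = (gamma[:, None] * L.T) @ Sigma_y_inv @ Y      # posterior mean, eq. 5
        # source-variance update (equation 15)
        num = np.sum(Mu_x ** 2, axis=1) / T
        den = np.einsum('mn,mk,kn->n', L, Sigma_y_inv, L)
        gamma_new = np.sqrt(num / den)
        # full-structure noise update (equation 14): geometric mean of Sigma_y and M_N
        R = Y - L @ Mu_x
        M_N = (R @ R.T) / T
        S_half = sqrtm(Sigma_y)
        S_half_inv = np.linalg.inv(S_half)
        Lambda = np.real(S_half @ sqrtm(S_half_inv @ M_N @ S_half_inv) @ S_half)
        # simple relative-change stopping rule (an assumption, for illustration)
        if np.linalg.norm(gamma_new - gamma) < tol * np.linalg.norm(gamma):
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma, Lambda, Mu_x
```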
4 NUMERICAL SIMULATIONS AND REAL DATA ANALYSIS
Source, Noise and Forward Model: We simulated a sparse set of N0 = 5 active brain sources that were placed at random positions on the cortex. To simulate the electrical neural activity of these sources, T = 200 identically and independently distributed (i.i.d) points were sampled from a Gaussian distribution, yielding sparse source activation vectors x(t). The resulting source distribution, represented as X = [x(1), . . . ,x(T )], was projected to the EEG sensors through application of lead field matrix as the forward operator: Ysignal = LX. The lead field matrix, L ∈ R58×2004, was generated using the New York Head model (Huang et al., 2016) taking into account the realistic anatomy and electrical tissue conductivities of an average human head. Further details regarding forward modeling is provided in Appendix F. Gaussian additive noise was randomly sampled from a zero-mean normal distribution with full covariance matrix Λ: e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T . This setting is further referred to as full-structure noise. Note that we also generated noise with diagonal covariance matrix, referred to as heteroscedastic noise, in order to investigate the effect of model violation on reconstruction performance. The
noise matrix E = [e(1), . . . , e(T)] ∈ R^{M×T} was normalized by its Frobenius norm and added to the signal matrix Y_signal as follows: Y = Y_signal + ((1−α)‖Y_signal‖_F / (α‖E‖_F)) E, where α determines the signal-to-noise ratio (SNR) in sensor space. Precisely, the SNR is obtained as SNR = 20 log10(α/(1−α)). In the subsequently described experiments the following values of α were used: α = {0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.65, 0.7, 0.8}, which correspond to the following SNRs: SNR = {−12, −7.4, −5.4, −3.5, −1.7, 0, 1.7, 3.5, 5.4, 7.4, 12} (dB). MATLAB codes for producing the results in the simulation study are uploaded here.
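The signal/noise mixing rule above can be written compactly; a small illustrative sketch (array names and shapes are assumptions):

```python
import numpy as np

def mix_at_snr(Y_signal, E, alpha):
    """Scale the noise so that the sensor-space SNR equals 20*log10(alpha/(1-alpha)) dB."""
    E = E / np.linalg.norm(E, 'fro')                 # normalize the noise matrix
    scale = (1 - alpha) * np.linalg.norm(Y_signal, 'fro') / (alpha * np.linalg.norm(E, 'fro'))
    return Y_signal + scale * E
```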
Evaluation Metrics and Simulation Set-up: We applied the full-structure noise learning approach on the synthetic datasets described above to recover the locations and time courses of the active brain sources. In addition to our proposed approach, two further Type-II Bayesian learning schemes, namely Champagne with homo- and heteroscedastic noise learning (Hashemi et al., 2020; Cai et al., 2020a), were also included as benchmarks with respect to source reconstruction performance and noise covariance estimation accuracy. Source reconstruction performance was evaluated according to the earth mover’s distance (EMD) (Rubner et al., 2000)), the error in the reconstruction of the source time courses, the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source, and finally F1-measure score (Chinchor & Sundheim, 1993). A detailed definition of evaluation metrics is provided in Appendix F. To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices. Each simulation was carried out 100 times using different instances of X and E, and the mean and standard error of the mean (SEM) of each performance measure across repetitions was calculated. Convergence of the optimization programs for each run was defined if the relative change of the Frobenius-norm of the reconstructed sources between subsequent iterations was less than 10−8. A maximum of 1000 iterations was carried out if no convergence was reached beforehand.
Figure 1 shows two simulated datasets with five active sources in presence of full-structure noise (upper panel) as well as heteroscedastic noise (lower panel) at 0 (dB) SNR. Topographic maps depict the locations of the ground-truth active brain sources (first column) along with the source reconstruction result of three noise learning schemes assuming noise with homoscedastic (second column), heteroscedastic (third column), and full (fourth column) structure. For each algorithm, the estimated noise covariance matrix is also plotted above the topographic map. Source reconstruction performance was measured in terms of EMD and time course correlation (Corr), and is summarized in the table next to each panel. Besides, the accuracy of the noise covariance matrix reconstruction was measured on terms of Λsim and NMSE. Results are included in the same table. Figure 1 (upper panel) allows for a direct comparison of the estimated noise covariance matrices obtained from the three different noise learning schemes. It can be seen that FUN learning can better capture the overall structure of ground truth full-structure noise as evidenced by lower NMSE and similarity errors compared to the heteroscedastic and homoscedastic algorithm variants that are only able to recover a diagonal matrix while enforcing the off-diagonal elements to zero. This behaviour results in higher spatial and temporal accuracy (lower EMD and time course error) for FUN learning compared to competing algorithms assuming diagonal noise covariance. This advantage is also visible in the topographic maps. The lower-panel of Figure 1 presents analogous results for the setting where the noise covariance is generated according to a heteroscedastic model. Note that the superior spatial and temporal reconstruction performance of the heteroscedastic noise learning algorithm compared to the full-structure scheme is expected here because the simulated ground truth noise is indeed heteroscedastic. The full-structure noise learning approach, however, provides fairly reasonable performance in terms of EMD, time course correlation (corr), and Λsim, although it is designed to estimate a full-structure noise covariance matrix. The convergence behaviour of all three noise learning variants is also illustrated in Figure 1. Note that the full-structure noise learning approach eventually reaches lower negative log-likelihood values in both scenarios, namely full-structure and heteroscedastic noise.
Figure 2 shows the EMD, the time course reconstruction error, the EUCL and the F1 measure score incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic
(green) and full-structure (blue) noise covariances for a range of 10 SNR values. The upper panel represents the evaluation metrics for the setting where the noise covariance is full-structure model, while the lower-panel depicts the same metric for simulated noise with heteroscedastic diagonal covariance. Concerning the first setting, FUN learning consistently outperforms its homoscedastic and heteroscedastic counterparts according to all evaluation metrics in particular in low-SNR settings. Consequently, as the SNR decreases, the gap between FUN learning and the two other variants increases. Conversely, heteroscedastic noise learning shows an improvement over FUN learning according to all evaluation metrics when the simulated noise is indeed heteroscedastic. However, note that the magnitude of this improvement is not as large as observed for the setting where the noise covariance is generated according to a full-structure model and then is estimated using the FUN approach.
Analysis of Auditory Evoked Fields (AEF): Figure 3 shows the reconstructed sources of the Auditory Evoked Fields (AEF) versus the number of trials for a single representative subject using the FUN learning algorithm. Further details on this dataset can be found in Appendix G. We tested the reconstruction performance of FUN learning with the number of trials limited to 1, 2, 12, 63 and 120. Each reconstruction was performed 30 times with the specific trials themselves chosen as a random subset of all available trials. As the subplots for different trial counts demonstrate, the FUN learning algorithm is able to correctly localize bilateral auditory activity to Heschl's gyrus, which is the characteristic location of the primary auditory cortex, from only a few trials or even a single trial.
5 DISCUSSION
This paper focused on sparse regression within the hierarchical Bayesian regression framework and its application in EEG/MEG brain source imaging. To this end we developed an algorithm, which is, however, suitable for a much wider range of applications. What is more, the same concepts used here for full-structure noise learning could be employed in other contexts where hyperparameters like kernel widths in Gaussian process regression (Wu et al., 2019) or dictionary elements in the dictionary learning problem (Dikmen & Févotte, 2012) are to be inferred. Besides, using FUN learning algorithm may also prove useful for practical scenarios in which model residuals are expected to be correlated, e.g., probabilistic canonical correlation analysis (CCA) (Bach & Jordan, 2005), spectral independent component analysis (ICA) (Ablin et al., 2020), wireless communication (Prasad et al., 2015; Gerstoft et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), robust portfolio optimization in finance (Feng et al., 2016), graph learning (Kumar et al., 2020), thermal field reconstruction (Flinth & Hashemi, 2018), and brain functional imaging (Wei et al., 2020).
Noise learning has also attracted attention in functional magnetic resonance imaging (fMRI) (Cai et al., 2016; Shvartsman et al., 2018; Cai et al., 2019b; 2020b; Wei et al., 2020), where various models like matrix-normal (MN), factor analysis (FA), and Gaussian-process (GP) regression have been proposed. The majority of the noise learning algorithms in the fMRI literature rely on the EM framework, which is quite slow in practice and has convergence guarantees only under certain strong conditions. In contrast to these existing approaches, our proposed framework not only applies to the models considered in these papers, but also benefits from theoretically proven convergence guarantees. To be more specific, we showed in this paper that FUN learning is an instance of the wider class of majorization-minimization (MM) framework, for which provable fast convergence is guaranteed. It is worth emphasizing our contribution within the MM optimization context as well. In many MM implementations, surrogate functions are minimized using an iterative approach. Our proposed algorithm, however, obtains a closed-form solution for the surrogate function in each step, which further advances its efficiency.
In the context of BSI, Engemann & Gramfort (2015) proposed a method for selecting a single regularization parameter based on cross-validation and maximum-likelihood estimation, while Huizenga et al. (2002); De Munck et al. (2002); Bijma et al. (2003); De Munck et al. (2004); Ahn & Jun (2011); Jun et al. (2006) and Plis et al. (2006) assume more complex spatiotemporal noise covariance structures. A common limitation of these works is, however, that the noise level is not estimated as part of the source reconstruction problem on task-related data but from separate noise recordings. Our proposed algorithm substantially differs in this respect, as it learns the noise covariance jointly with the brain source distribution. Note that The idea of joint estimation of brain source activity and noise covariance has been previously proposed for Type-I learning methods in (Massias et al., 2018; Bertrand et al., 2019). In contrast to these Type-I methods, FUN is a Type-II method, which learns the prior source distribution as part of the model fitting. Type-II methods have been reported to yield consistently superior results than Type-I methods (Owen et al., 2012; Cai et al., 2019a; 2020a; Hashemi et al., 2020). Our numerical results show that the same hold also for FUN learning, which performs on par or better than existing variants from the Type-II family (including conventional Champagne) in this study. We plan to provide a formal comparison of the performance of noise learning within Type-I and Type-II estimation in our future work.
While being broadly applicable, our approach is also limited by a number of factors. Although Gaussian noise distributions are commonly justified, it would be desirable to also include more robust (e.g., heavy-tailed) non-Gaussian noise distributions in our framework. Another limitation is that the superior performance of the full-structure noise learning technique comes at the expense of higher computational complexity compared to the variants assuming homoscedastic or heteroscedastic strucutre. Besides, signals in real-world scenarios often lie in a lower-dimensional space compared to the original high-dimensional ambient space due to the particular correlations that inherently exist in the structure of the data. Therefore, imposing physiologically plausible constraints on the noise model, e.g., low-rank or Toeplitz structure, not only provides side information that can be leveraged for the reconstruction but also reduces the computational cost in two ways: a) by reducing the number of parameters and b) by taking advantage of efficient implementations using circular embeddings and the fast Fourier transform (Babu, 2016). Exploring efficient ways to incorporate these structural assumptions within a Riemannian framework is another direction of future work.
6 CONCLUSION
This paper proposes an efficient optimization algorithm for jointly estimating Gaussian regression parameter distributions as well as Gaussian noise distributions with full covariance structure within a hierarchical Bayesian framework. Using the Riemannian geometry of positive definite matrices, we derived an efficient algorithm for jointly estimating source and noise covariances. The benefits of our proposed framework were evaluated within an extensive set of experiments in the context of electromagnetic brain source imaging inverse problem and showed significant improvement upon state-of-the-art techniques in the realistic scenario where the noise has full covariance structure. The performance of our method is assessed through a real data analysis for the auditory evoked field (AEF) dataset.
A PROOF OF THEOREM 1
Proof. We start the proof by recalling equation 8:
$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(\mathbf{Y}|\Gamma,\Lambda) = \log|\boldsymbol{\Sigma}_{\mathbf{y}}| + \frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)^\top\boldsymbol{\Sigma}_{\mathbf{y}}^{-1}\mathbf{y}(t)\,. \qquad (16)$$
The upper bound on the log |Σy| term can be directly inferred from the concavity of the log-determinant function and its first-order Taylor expansion around the value from the previous iteration, Σky, which provides the following inequality (Sun et al., 2017, Example 2):
$$\log|\boldsymbol{\Sigma}_{\mathbf{y}}| \le \log\left|\boldsymbol{\Sigma}_{\mathbf{y}}^k\right| + \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\left(\boldsymbol{\Sigma}_{\mathbf{y}} - \boldsymbol{\Sigma}_{\mathbf{y}}^k\right)\right]$$
$$= \log\left|\boldsymbol{\Sigma}_{\mathbf{y}}^k\right| + \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\boldsymbol{\Sigma}_{\mathbf{y}}\right] - \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\boldsymbol{\Sigma}_{\mathbf{y}}^k\right]\,. \qquad (17)$$
Note that the first and last terms in equation 17 do not depend on Γ; hence, they can be ignored in the optimization procedure. Now, we decompose Σy into two terms, each of which only contains either the noise or source covariances:
$$\mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\boldsymbol{\Sigma}_{\mathbf{y}}\right] = \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\left(\Lambda + \mathbf{L}\Gamma\mathbf{L}^\top\right)\right] = \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\Lambda\right] + \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\mathbf{L}\Gamma\mathbf{L}^\top\right]\,. \qquad (18)$$
In the next step, we decompose the second term in equation 8, $\frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)^\top\boldsymbol{\Sigma}_{\mathbf{y}}^{-1}\mathbf{y}(t)$, into two terms, each of which is a function of either only the noise or only the source covariances. To this end, we exploit the following relationship between sensor- and source-space covariances:
$$\frac{1}{T}\sum_{t=1}^{T}\mathbf{y}(t)^\top\boldsymbol{\Sigma}_{\mathbf{y}}^{-1}\mathbf{y}(t) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^k(t)^\top\Gamma^{-1}\mathbf{x}^k(t) + \frac{1}{T}\sum_{t=1}^{T}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)^\top\Lambda^{-1}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)\,. \qquad (19)$$
By combining equation 18 and equation 19, rearranging the terms, and ignoring all terms that do not depend on Γ, we have:
$$\mathcal{L}_{\mathrm{II}}(\Gamma) \le \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\mathbf{L}\Gamma\mathbf{L}^\top\right] + \frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^k(t)^\top\Gamma^{-1}\mathbf{x}^k(t) + \mathrm{const}$$
$$= \mathrm{tr}\!\left((\mathbf{C}_S^k)^{-1}\Gamma\right) + \mathrm{tr}\!\left(\mathbf{M}_S^k\,\Gamma^{-1}\right) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) + \mathrm{const}\,, \qquad (20)$$
where $\mathbf{C}_S^k = \left(\mathbf{L}^\top(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\mathbf{L}\right)^{-1}$ and $\mathbf{M}_S^k = \frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^k(t)\,\mathbf{x}^k(t)^\top$. Note that constant values in equation 20 do not depend on Γ; hence, they can be ignored in the optimization procedure. This
proves the equivalence of equation 8 and equation 9 when the optimization is performed with respect to Γ.
The equivalence of equation 8 and equation 11 can be shown analogously, with the difference that we only focus on noise-related terms in equation 18 and equation 19:
$$\mathcal{L}_{\mathrm{II}}(\Lambda) \le \mathrm{tr}\!\left[(\boldsymbol{\Sigma}_{\mathbf{y}}^k)^{-1}\Lambda\right] + \frac{1}{T}\sum_{t=1}^{T}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right)^\top\Lambda^{-1}\left(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t)\right) + \mathrm{const}$$
$$= \mathrm{tr}\!\left((\mathbf{C}_N^k)^{-1}\Lambda\right) + \mathrm{tr}\!\left(\mathbf{M}_N^k\,\Lambda^{-1}\right) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda) + \mathrm{const}\,, \qquad (21)$$
where $\mathbf{C}_N^k = \boldsymbol{\Sigma}_{\mathbf{y}}^k$ and $\mathbf{M}_N^k = \frac{1}{T}\sum_{t=1}^{T}(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t))(\mathbf{y}(t)-\mathbf{L}\mathbf{x}^k(t))^\top$. Constant values in equation 21 do not depend on Λ; hence, they can again be ignored in the optimization procedure. Summarizing, we have shown that optimizing equation 8 is equivalent to optimizing Lconvnoise(Γk,Λ) and Lconvsource(Γ,Λk), which concludes the proof.
B PROOF OF THEOREM 2
Before presenting the proof, the subsequent definitions and propositions are required: Definition 4 (Geodesic path). Let M be a Riemannian manifold, i.e., a differentiable manifold whose tangent space is endowed with an inner product that defines local Euclidean structures. Then, a geodesic between two points onM, denoted by p0,p1 ∈M, is defined as the shortest connecting path between those two points along the manifold, ζl(p0,p1) ∈ M for l ∈ [0, 1], where l = 0 and l = 1 defines the starting and end points of the path, respectively.
In the current context, ζl(p0,p1) defines a geodesic curve on the positive definite (P.D.) manifold joining two P.D. matrices, P0,P1 > 0. The specific pairs of matrices we will deal with are {CkS,MkS} and {CkN,MkN}. Definition 5 (Geodesic on the P.D. manifold). Geodesics on the manifold of P.D. matrices can be shown to form a cone within the embedding space. We denote this manifold by S++. Assume two P.D. matrices P0,P1 ∈ S++. Then, for l ∈ [0, 1], the geodesic curve joining P0 to P1 is defined as (Bhatia, 2009, Chapter. 6):
$$\xi_l(\mathbf{P}_0,\mathbf{P}_1) = (\mathbf{P}_0)^{\frac{1}{2}}\left((\mathbf{P}_0)^{-\frac{1}{2}}\mathbf{P}_1(\mathbf{P}_0)^{-\frac{1}{2}}\right)^{l}(\mathbf{P}_0)^{\frac{1}{2}}, \quad l \in [0, 1]\,. \qquad (22)$$
Note that P0 and P1 are obtained as the starting and end points of the geodesic path by choosing l = 0 and l = 1, respectively. The midpoint of the geodesic, obtained by setting l = 1/2, is called the geometric mean. Note that, according to Definition 5, the following equality holds:
$$\xi_l(\Gamma_0,\Gamma_1)^{-1} = \left((\Gamma_0)^{1/2}\left((\Gamma_0)^{-1/2}\Gamma_1(\Gamma_0)^{-1/2}\right)^{l}(\Gamma_0)^{1/2}\right)^{-1} = (\Gamma_0)^{-1/2}\left((\Gamma_0)^{1/2}(\Gamma_1)^{-1}(\Gamma_0)^{1/2}\right)^{l}(\Gamma_0)^{-1/2} = \xi_l(\Gamma_0^{-1},\Gamma_1^{-1})\,. \qquad (23)$$
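For orientation, the geodesic in equation 22 and the inversion identity in equation 23 can be checked numerically with a fractional matrix power; the sketch below is illustrative only, and all matrix names are placeholders.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

def spd_geodesic(P0, P1, l):
    """xi_l(P0, P1) = P0^{1/2} (P0^{-1/2} P1 P0^{-1/2})^l P0^{1/2}, equation 22."""
    P0_half = fmp(P0, 0.5)
    P0_half_inv = np.linalg.inv(P0_half)
    G = P0_half @ fmp(P0_half_inv @ P1 @ P0_half_inv, l) @ P0_half
    return np.real((G + G.T) / 2)

# numerical check of equation 23 on random SPD matrices
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); P0 = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); P1 = B @ B.T + 4 * np.eye(4)
l = 0.3
lhs = np.linalg.inv(spd_geodesic(P0, P1, l))
rhs = spd_geodesic(np.linalg.inv(P0), np.linalg.inv(P1), l)
print(np.allclose(lhs, rhs, atol=1e-8))
```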
Definition 6 (Geodesic convexity). Let p0 and p1 be two arbitrary points on a subset A of a Riemannian manifold M. Then a real-valued function f : A → R with domain A ⊂ M is called geodesically convex (g-convex) if the following relation holds:
$$f\left(\zeta_l(\mathbf{p}_0,\mathbf{p}_1)\right) \le l f(\mathbf{p}_0) + (1-l) f(\mathbf{p}_1)\,, \qquad (24)$$
where l ∈ [0, 1] and ζ(p0, p1) denotes the geodesic path connecting two points p0 and p1 as defined in Definition 4. Thus, in analogy to classical convexity, the function f is g-convex if every geodesic ζ(p0, p1) of M between p0, p1 ∈ A lies in the g-convex set A. Note that the set A ⊂ M is called g-convex if any geodesic joining an arbitrary pair of points lies completely in A. Remark 7. Note that g-convexity is a generalization of classical (linear) convexity to non-Euclidean (non-linear) geometry and metric spaces. Therefore, it is straightforward to show that all convex functions in Euclidean geometry are also g-convex, where the geodesics between pairs of matrices are simply line segments:
$$\zeta_l(\mathbf{p}_0,\mathbf{p}_1) = l\,\mathbf{p}_0 + (1-l)\,\mathbf{p}_1\,. \qquad (25)$$
For the sake of brevity, we omit a detailed theoretical introduction of g-convexity, and only borrow a result from Zadeh et al. (2016); Sra & Hosseini (2015). Interested readers are referred to Wiesel et al. (2015, Chapter 1) for a gentle introduction to this topic, and Papadopoulos (2005, Chapter. 2) Rapcsak (1991); Ben-Tal (1977); Liberti (2004); Pallaschke & Rolewicz (2013); Bonnabel & Sepulchre (2009); Moakher (2005); Sra & Hosseini (2016); Vishnoi (2018) for more in-depth technical details.
Now we are ready to state the proof, which parallels the one provided in Zadeh et al. (2016, Theorem. 3).
Proof. We only show the proof for Lconvsource(Γ,Λk). The proof for Lconvnoise(Γk,Λ) proceeds analogously and is therefore omitted here for brevity. We proceed in two steps. First, we limit our attention to P.D. manifolds and express equation 24 in terms of geodesic paths and functions that lie on this particular space. We then show that Lconvsource(Γ,Λk) is strictly g-convex on this specific domain. In the second step, we derive the update rules proposed in equation 13 and equation 14.
B.1 PART I: PROVING G-CONVEXITY OF THE MAJORIZING COST FUNCTIONS
We consider geodesics along the P.D. manifold by setting ζl(p0,p1) to ξl(Γ0,Γ1) as presented in Definition 5, and define f(.) to be f(Γ) = tr(CkSΓ) + tr(M k SΓ −1), representing the cost function Lconvsource(Γ,Λk). We now show that f(Γ) is strictly g-convex on this specific domain. For continuous functions as considered in this paper, fulfilling equation 24 for f(Γ) and ξl(Γ0,Γ1) with l = 1/2 is sufficient to prove strict g-convexity:
$$\mathrm{tr}\!\left(\mathbf{C}_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)\right) + \mathrm{tr}\!\left(\mathbf{M}_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\right) < \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{C}_S^k\Gamma_0\right) + \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{M}_S^k\Gamma_0^{-1}\right) + \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{C}_S^k\Gamma_1\right) + \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{M}_S^k\Gamma_1^{-1}\right)\,. \qquad (26)$$
Given $\mathbf{C}_S^k \in \mathcal{S}_{++}$, i.e., $\mathbf{C}_S^k > 0$, and the operator inequality (Bhatia, 2009, Chapter 4)
$$\xi_{1/2}(\Gamma_0,\Gamma_1) \prec \tfrac{1}{2}\Gamma_0 + \tfrac{1}{2}\Gamma_1\,, \qquad (27)$$
we have:
$$\mathrm{tr}\!\left(\mathbf{C}_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)\right) < \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{C}_S^k\Gamma_0\right) + \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{C}_S^k\Gamma_1\right)\,, \qquad (28)$$
which is derived by multiplying both sides of equation 27 with CkS followed by taking the trace on both sides.
Similarly, we can write the operator inequality for $\{\Gamma_0^{-1},\Gamma_1^{-1}\}$ using equation 23 as:
$$\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1} = \xi_{1/2}(\Gamma_0^{-1},\Gamma_1^{-1}) \prec \tfrac{1}{2}\Gamma_0^{-1} + \tfrac{1}{2}\Gamma_1^{-1}\,, \qquad (29)$$
Multiplying both sides of equation 29 by MkS ∈ S++, and applying the trace operator on both sides leads to:
$$\mathrm{tr}\!\left(\mathbf{M}_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\right) < \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{M}_S^k\Gamma_0^{-1}\right) + \tfrac{1}{2}\mathrm{tr}\!\left(\mathbf{M}_S^k\Gamma_1^{-1}\right)\,. \qquad (30)$$
Summing up equation 28 and equation 30 proves equation 26 and concludes the first part of the proof.
B.2 PART II: DETAILED DERIVATION OF THE UPDATE RULES IN EQUATIONS 13 AND 14
We now present the second part of the proof by deriving the update rules in equations 13 and 14. Since the cost function Lconvsource(Γ,Λk) is strictly g-convex, its optimal solution in the k-th iteration
is unique. More concretely, the optimum can be analytically derived by taking the derivative of equation 9 and setting the result to zero as follows:
$$\nabla\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) = (\mathbf{C}_S^k)^{-1} - \Gamma^{-1}\mathbf{M}_S^k\,\Gamma^{-1} = 0\,, \qquad (31)$$
which results in
$$\Gamma\,(\mathbf{C}_S^k)^{-1}\,\Gamma = \mathbf{M}_S^k\,. \qquad (32)$$
This solution is known as the Riccati equation, and is the geometric mean between CkS and M k S (Davis et al., 2007; Bonnabel & Sepulchre, 2009):
$$\Gamma^{k+1} = (\mathbf{C}_S^k)^{\frac{1}{2}}\left((\mathbf{C}_S^k)^{-\frac{1}{2}}\mathbf{M}_S^k(\mathbf{C}_S^k)^{-\frac{1}{2}}\right)^{\frac{1}{2}}(\mathbf{C}_S^k)^{\frac{1}{2}}\,.$$
The update rule for the full noise covariance matrix can be derived analogously:
$$\Lambda^{k+1} = (\mathbf{C}_N^k)^{\frac{1}{2}}\left((\mathbf{C}_N^k)^{-\frac{1}{2}}\mathbf{M}_N^k(\mathbf{C}_N^k)^{-\frac{1}{2}}\right)^{\frac{1}{2}}(\mathbf{C}_N^k)^{\frac{1}{2}}\,.$$
Remark 8. Note that the obtained update rules are closed-form solutions for the surrogate cost functions in equations 9 and 11, which stands in contrast to conventional majorization-minimization algorithms (see Appendix C), which require iterative procedures in each step of the optimization.
Deriving the update rules in equation 13 and equation 14 concludes the second part of the proof of Theorem 2.
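As a quick sanity check, one can verify numerically that the geometric mean in equation 13 indeed solves the Riccati equation 32; the following short sketch uses random SPD matrices as stand-ins for C^k_S and M^k_S.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); C = A @ A.T + 5 * np.eye(5)   # plays the role of C_S^k
B = rng.standard_normal((5, 5)); M = B @ B.T + 5 * np.eye(5)   # plays the role of M_S^k

C_half = sqrtm(C)
C_half_inv = np.linalg.inv(C_half)
Gamma = np.real(C_half @ sqrtm(C_half_inv @ M @ C_half_inv) @ C_half)  # equation 13

# Gamma should satisfy the Riccati equation  Gamma C^{-1} Gamma = M  (equation 32)
print(np.allclose(Gamma @ np.linalg.inv(C) @ Gamma, M))
```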
C PROOF OF THEOREM 3
In the following, we provide proof for Theorem 3 by showing that alternating update rules for Γ and Λ in equation 13 and equation 14 are guaranteed to converge to a local minimum of the Bayesian Type-II likelihood equation 8. In particular, we will prove that FUN learning is an instance of the general class of majorization-minimization (MM) algorithms, for which this property follows by construction. To this end, we first briefly review theoretical concepts behind the majorizationminimization (MM) algorithmic framework (Hunter & Lange, 2004; Razaviyayn et al., 2013; Jacobson & Fessler, 2007; Wu et al., 2010).
C.1 REQUIRED CONDITIONS FOR MAJORIZATION-MINIMIZATION ALGORITHMS
MM encompasses a family of iterative algorithms for optimizing general non-linear cost functions. The main idea behind MM is to replace the original cost function in each iteration by an upper bound, also known as majorizing function, whose minimum is easy to find. The MM class covers a broad range of common optimization algorithms such as convex-concave procedures (CCCP) and proximal methods (Sun et al., 2017, Section IV), (Mjolsness & Garrett, 1990; Yuille & Rangarajan, 2003; Lipp & Boyd, 2016). Such algorithms have been applied in various domains such as brain source imaging (Hashemi & Haufe, 2018; Bekhti et al., 2018; Cai et al., 2020a; Hashemi et al., 2020), wireless communication systems with massive MIMO technology (Masood et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), and non-negative matrix factorization (Fagot et al., 2019). Interested readers are referred to Sun et al. (2017) for an extensive list of applications on MM.
The problem of minimizing a continuous function f(u) within a closed convex set U ⊂ Rn:
$$\min_{\mathbf{u}}\; f(\mathbf{u}) \quad \text{subject to} \quad \mathbf{u} \in \mathcal{U}\,, \qquad (33)$$
within the MM framework can be summarized as follows. First, construct a continuous surrogate function g(u|uk) that majorizes, or upper-bounds, the original function f(u) and coincides with f(u) at a given point uk:
[A1] g(uk|uk) = f(uk) ∀ uk ∈ U
[A2] g(u|uk) ≥ f(u) ∀ u, uk ∈ U .
Second, starting from an initial value u0, generate a sequence of feasible points u1,u2, . . . ,uk,uk+1 as solutions of a series of successive simple optimization problems, where
[A3] uk+1 := arg min u∈U g(u|uk) .
If a surrogate function fulfills conditions [A1]–[A3], then the value of the cost function f decreases in each iteration: f(uk+1) ≤ f(uk). For the smooth functions considered in this paper, we further require that the derivatives of the original and surrogate functions coincide at uk:
[A4] ∇g(uk|uk) = ∇f(uk) ∀ uk ∈ U .
We can then formulate the following theorem:
Theorem 9. Assume that an MM algorithm fulfills conditions [A1]–[A4]. Then, every limit point of the sequence of minimizers generated in [A3] is a stationary point of the original optimization problem in equation 33.
Proof. A detailed proof is provided in Razaviyayn et al. (2013, Theorem 1).
C.2 DETAILED DERIVATION OF THE PROOF OF THEOREM 3
We now show that FUN learning is an instance of majorization-minimization as defined above, which fulfills Theorem 9.
Proof. We need to prove that conditions [A1]–[A4] are fulfilled for FUN learning. To this end, we recall the upper bound on log |Σy| in equation 17, which fulfills condition [A2] since it majorizes log |Σy| as a result of the concavity of the log-determinant function and its first-order Taylor expansion around Σky. Besides, it automatically satisfies conditions [A1] and [A4] by construction, because the majorizing function in equation 17 is obtained through a Taylor expansion around Σky. Concretely, [A1] is satisfied because the equality in equation 17 holds for Σy = Σky. Similarly, [A4]
is satisfied because the gradient of log |Σy| at the point Σky, namely (Σky)−1, defines the linear Taylor approximation log |Σky| + tr[(Σky)−1(Σy − Σky)]. Thus, both gradients coincide at Σky by construction.
Now, we prove that [A3] can be satisfied by showing that Lconvsource(Γ,Λk) reaches its global minimum in each MM iteration. This is guaranteed if Lconvsource(Γ,Λk) can be shown to be convex or g-convex with respect to Γ. To this end, we first require the subsequent proposition:
Proposition 10. Any local minimum of a g-convex function over a g-convex set is a global minimum.
Proof. A detailed proof is presented in Rapcsak (1991, Theorem 2.1).
Given the proof presented in Appendix B.1, we can conclude that equation 20 is g-convex; hence, any local minimum of Lconvsource(Γ,Λk) is a global minimum according to Proposition 10. This proves that condition [A3] is fulfilled and completes the proof that optimizing equation 8 with respect to Γ using the convex surrogate cost function in equation 9 leads to an MM algorithm. For the sake of brevity, we omit the proof for the optimization with respect to Λ based on the convex surrogate function in equation 11, Lconvnoise(Γk,Λ), as it proceeds analogously.
D DERIVATION OF CHAMPAGNE AS A SPECIAL CASE OF FUN LEARNING
We start the derivation of update rule equation 15 by constraining Γ to the set of diagonal matrices W: Γ = diag(γ), where γ = [γ1, . . . , γN ]>. We continue by rewriting the constrained optimization with respect to the source covariance matrix,
$$\Gamma^{k+1} = \underset{\Gamma\in\mathcal{W},\;\Lambda=\Lambda^k}{\arg\min}\;\; \mathrm{tr}\!\left((\mathbf{C}_S^k)^{-1}\Gamma\right) + \mathrm{tr}\!\left(\mathbf{M}_S^k\,\Gamma^{-1}\right), \qquad (34)$$

as follows:

$$\boldsymbol{\gamma}^{k+1} = \underset{\boldsymbol{\gamma},\;\Lambda=\Lambda^k}{\arg\min}\;\; \underbrace{\operatorname{diag}\!\left[(\mathbf{C}_S^k)^{-1}\right]\boldsymbol{\gamma} + \operatorname{diag}\!\left[\mathbf{M}_S^k\right]\boldsymbol{\gamma}^{-1}}_{\mathcal{L}^{\mathrm{diag}}_{\mathrm{source}}(\boldsymbol{\gamma}\,|\,\boldsymbol{\gamma}^k)}\,, \qquad (35)$$
where $\boldsymbol{\gamma}^{-1} = [\gamma_1^{-1}, \ldots, \gamma_N^{-1}]^\top$ is defined as the element-wise inversion of γ. The optimization with respect to the scalar source variances is then carried out by taking the derivative of equation 35 with respect to γn, for n = 1, . . . , N, and setting it to zero,
$$\frac{\partial}{\partial \gamma_n}\left(\left[(\mathbf{C}_S^k)^{-1}\right]_{n,n}\gamma_n + \left[\mathbf{M}_S^k\right]_{n,n}\gamma_n^{-1}\right) = \left[(\mathbf{C}_S^k)^{-1}\right]_{n,n} - \frac{1}{(\gamma_n)^2}\left[\mathbf{M}_S^k\right]_{n,n} = 0 \quad \text{for } n = 1,\ldots,N.$$

This yields the following update rule

$$\gamma_n^{k+1} \leftarrow \sqrt{\frac{\left[\mathbf{M}_S^k\right]_{n,n}}{\left[(\mathbf{C}_S^k)^{-1}\right]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\left(x_n^k(t)\right)^2}{\mathbf{L}_n^\top\left(\boldsymbol{\Sigma}_y^k\right)^{-1}\mathbf{L}_n}} \quad \text{for } n = 1,\ldots,N,$$

where $\mathbf{L}_n$ denotes the $n$-th column of the lead field matrix,
which is identical to the update rule of Champagne (Wipf & Nagarajan, 2009).
E DERIVATION OF CHAMPAGNE WITH HETEROSCEDASTIC NOISE LEARNING
AS A SPECIAL CASE OF FUN LEARNING
Similar to Appendix D, we start by constraining Λ to the set of diagonal matrices W: Λ = diag(λ), where λ = [λ1, . . . , λM]^⊤. We continue by reformulating the constrained optimization with respect to the noise covariance matrix,
Λk+1 = arg min Λ∈W, Γ=Γk tr(CkNΛ) + tr(M k NΛ −1) , (36)
as follows:
λk+1 = arg min λ, Γ=Γk
diag [( CkN )−1] λ + diag [ MkN ] λ−1︸ ︷︷ ︸
Ldiagnoise(λ|λk)
, (37)
where $\lambda^{-1} = [\lambda_1^{-1}, \ldots, \lambda_M^{-1}]^{\top}$ is defined as the element-wise inversion of $\lambda$. The optimization with respect to the scalar noise variances then proceeds by taking the derivative of equation 37 with respect to $\lambda_m$, for $m = 1, \ldots, M$, and setting it to zero,
$$\frac{\partial}{\partial\lambda_m}\Big(\big[(C_N^k)^{-1}\big]_{m,m}\lambda_m + \big[M_N^k\big]_{m,m}\lambda_m^{-1}\Big) = \big[(C_N^k)^{-1}\big]_{m,m} - \frac{1}{\lambda_m^2}\big[M_N^k\big]_{m,m} = 0 \quad \text{for } m = 1, \ldots, M.$$
This yields the following update rule:
$$\lambda_m^{k+1} \leftarrow \sqrt{\frac{\big[M_N^k\big]_{m,m}}{\big[(C_N^k)^{-1}\big]_{m,m}}} = \sqrt{\frac{\Big[\frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)\big(y(t)-Lx^k(t)\big)^{\top}\Big]_{m,m}}{\big[\big(\Sigma_y^k\big)^{-1}\big]_{m,m}}} \quad \text{for } m = 1, \ldots, M, \qquad (38)$$
which is identical to the update rule of Champagne with heteroscedastic noise learning as presented in Cai et al. (2020a).
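Analogously, a minimal NumPy sketch of the heteroscedastic noise-variance update in equation 38 could look as follows; the function name and array shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the per-sensor noise-variance update (equation 38),
# the diagonal special case of the FUN noise update. Names are illustrative.
import numpy as np

def hetero_lambda_update(Y, L, X_post, Sigma_y):
    """One update of the noise variances lambda_m.

    Y        : (M, T) measurements
    L        : (M, N) lead field matrix
    X_post   : (N, T) current posterior mean sources
    Sigma_y  : (M, M) current model covariance  Lambda + L Gamma L^T
    """
    R = Y - L @ X_post                      # sensor-space residuals
    num = np.mean(R ** 2, axis=1)           # diagonal of the residual covariance M_N^k
    den = np.diag(np.linalg.inv(Sigma_y))   # [(Sigma_y^k)^{-1}]_{m,m}
    return np.sqrt(num / den)
```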
Figure 4: Accuracy of the noise covariance matrix reconstruction incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic (green) and full-structure (blue) noise covariances. The ground-truth noise covariance matrix is either full-structure (upper row) or heteroscedastic diagonal (lower row). Performance is assessed in terms of the Pearson correlation between the entries of the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim (left column). Shown is the similarity error 1 − Λsim. Further, the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂−Λ||2F /||Λ||2F is reported (right column).
F PSEUDO-EEG SIGNAL GENERATION
Our simulation setting is an adaptation of the EEG inverse problem, where brain activity is to be reconstructed from simulated pseudo-EEG data (Haufe & Ewald, 2016).
Forward Modeling: Populations of pyramidal neurons in the cortical gray matter are known to be the main drivers of the EEG signal (Hämäläinen et al., 1993; Baillet et al., 2001). Here, we use a realistic volume conductor model of the human head to model the linear relationship between primary electrical source currents generated within these populations and the resulting scalp surface potentials captured by EEG electrodes. The lead field matrix, L ∈ R58×2004, was generated using the New York Head model (Huang et al., 2016) taking into account the realistic anatomy and electrical tissue conductivities of an average human head. In this model, 2004 dipolar current sources were placed evenly on the cortical surface and 58 sensors were considered. The lead field matrix, L ∈ R58×2004 was computed using the finite element method. Note that the orientation of all source currents was fixed to be perpendicular to the cortical surface, so that only scalar source amplitudes needed to be estimated.
Evaluation Metrics: Source reconstruction performance was evaluated according to the following metrics. First, the earth mover’s distance (EMD) (Rubner et al., 2000; Haufe et al., 2008) was used to quantify the spatial localization accuracy. The EMD measures the cost needed to transform two probability distributions defined on the same metric domain (in this case, distributions of the true and estimated sources defined in 3D Euclidean brain space) into each other. EMD scores were normalized to [0, 1]. Second, the error in the reconstruction of the source time courses was measured. To this end, Pearson correlation between all pairs of simulated and reconstructed (i.e., those with non-zero activations) source time courses was assessed as the mean of the absolute correlations obtained for each source, after optimally matching simulated and reconstructed sources based on maximal absolute correlation. We also report another metric for evaluating the localization error as the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source. For assessing the recovery of the true support, we also compute F1-measure scores (Chinchor & Sundheim, 1993; van Rijsbergen, 1979): F1 = 2·TP/(P + TP + FP), where P denotes the number of true active sources, while TP and FP are the numbers of true and false positive predictions. Note that perfect support recovery, i.e., F1 = 1, is only achieved when there is a perfect correspondence between ground-truth and estimated support.
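As a small illustration, the F1 support-recovery score defined above can be computed from the ground-truth and estimated support sets as in the following sketch; the function name and inputs are our own, not from the paper's code.

```python
# Sketch of the F1 support-recovery score: F1 = 2*TP / (P + TP + FP).
def f1_support(true_support, est_support):
    true_support, est_support = set(true_support), set(est_support)
    TP = len(true_support & est_support)   # correctly recovered active sources
    FP = len(est_support - true_support)   # spurious active sources
    P = len(true_support)                  # number of truly active sources
    return 2 * TP / (P + TP + FP) if (P + TP + FP) else 0.0
```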
To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as: NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Similarity error was then defined as one minus the Pearson correlation: 1 − Λsim. Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices.
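The two noise-covariance metrics can be computed as in the short sketch below; variable and function names are illustrative, and the Pearson correlation over the vectorized matrix entries implements Λsim.

```python
# Sketch of the noise-covariance evaluation metrics: NMSE and similarity error.
import numpy as np

def noise_cov_metrics(Lambda_true, Lambda_est):
    # NMSE = ||Lambda_est - Lambda_true||_F^2 / ||Lambda_true||_F^2 (scale-sensitive)
    nmse = (np.linalg.norm(Lambda_est - Lambda_true, 'fro') ** 2
            / np.linalg.norm(Lambda_true, 'fro') ** 2)
    # Lambda_sim: Pearson correlation between the entries of the two matrices (scale-invariant)
    corr = np.corrcoef(Lambda_true.ravel(), Lambda_est.ravel())[0, 1]
    return nmse, 1.0 - corr   # second value is the similarity error
```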
Evaluating the accuracy of the noise covariance matrix estimation: Figure 4 depicts the accuracy with which the covariance matrix is reconstructed by three different noise learning approaches assuming noise with homoscedastic (red), heteroscedastic (green) and full (blue) structure. The ground truth noise covariance matrix either had full (upper row) or heteroscedastic (lower row) structure. Performance was measured in terms of similarity error and NMSE. Similar to the trend observed in Figure 2, full-structure noise learning leads to better noise covariance estimation accuracy (lower NMSE and similarity error) for the full-structure noise model, while superior reconstruction performance is achieved for heteroscedastic noise learning when the true noise covariance is heteroscedastic.
G FURTHER DETAILS ON AUDITORY EVOKED FIELDS (AEF) DATASET
The MEG data used in this article were acquired in the Biomagnetic Imaging Laboratory at the University of California San Francisco (UCSF) with a CTF Omega 2000 whole-head MEG system from VSM MedTech (Coquitlam, BC, Canada) with 1200 Hz sampling rate. The lead field for each subject was calculated with NUTMEG (Dalal et al., 2004) using a single-sphere head model (two spherical orientation lead fields) and an 8 mm voxel grid. Each column was normalized to have a norm of unity. The neural responses of one subject to an Auditory Evoked Fields (AEF) stimulus were localized. The AEF response was elicited with single 600 ms duration tones (1 kHz) presented binaurally. 120 trials were collected for AEF dataset. The data were first digitally filtered from 1 to 70 Hz to remove artifacts and DC offset, time-aligned to the stimulus, and then averaged across the following number of trials:{1,2,12, 63,120}. The pre-stimulus window was selected to be 100 ms to 5 ms and the post-stimulus time window was selected to be 60 ms to 180 ms, where 0 ms is the onset of the tone (Wipf et al., 2010; Dalal et al., 2011; Owen et al., 2012; Cai et al., 2019a). | 1. What is the focus of the paper regarding hierarchical Bayesian models for EEG signals?
2. What are the strengths and weaknesses of the proposed methodology, particularly in its ability to handle correlated and heteroskedastic noise?
3. How does the reviewer assess the novelty and sufficiency of the proposed optimization mechanism?
4. What are the limitations of the experimental treatment, specifically in terms of the use of synthetic data and the lack of real-data experiments?
5. Do you have any concerns about the accuracy and validation of the paper's claims, especially regarding its discussion section? | Review | Review
The authors propose a methodology for type-II maximum likelihood on a hierarchical Bayesian model for EEG signals. The particular feature of the model, which separates it from other EEG models, is the consideration of a full covariance matrix which makes the noise correlated and heteroskedastic.
The model, as claimed by the authors, is fully Gaussian and therefore tractable. As a consequence, the inference poses no challenges other than the computational complexity. To address this, the authors propose a mechanism for what they claim is efficient optimisation. This contribution alone is not sufficient (over the standard literature) for publication as a theoretical improvement.
Given the lack of a theoretical advancement, I was hoping that the contribution of the article came in the experimental treatment; however, it was not the case. A single set of experiments using synthetic data was considered, where the proposed method was compared against other benchmarks. This is far from surprising, since the authors deal with exact inference on a model where the observations were produced under the same statistical assumptions.
I also would like to emphasise that the discussion of the paper states that "This paper proposes an efficient optimization algorithm for jointly estimating...." and "The benefits of our proposed framework were evaluated within an extensive set of experiments". None of these claims are true, or at least they are not validated by any supporting evidence in the paper.
Perhaps with the stated future work and stronger experimental results (real data), this paper can be improved. |
ICLR | Title
Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models
Abstract
We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where both hyperparameters need to be estimated jointly from data. This problem has application in many domains in imaging including biomagnetic inverse problems. Crucial factors influencing accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of noise covariance matrices with full structure. Here, we consider the reconstruction of brain activity from electroencephalography (EEG). This inverse problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from classical sparse Bayesian learning (SBL) models where across-sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using Riemannian geometry, we derive an efficient algorithm for updating both source and noise covariance along the manifold of positive definite matrices. Using the majorization-minimization framework, we demonstrate that our algorithm has guaranteed and fast convergence. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured.
1 INTRODUCTION
Having precise knowledge of the noise distribution is a fundamental requirement for obtaining accurate solutions in many regression problems (Bungert et al., 2020). In many applications however, it is impossible to separately estimate this noise distribution, as distinct ”noise-only” (baseline) measurements are not feasible. An alternative, therefore, is to design estimators that jointly optimize over the regression coefficients as well as over parameters of the noise distribution. This has been pursued both in a (penalized) maximum-likelihood settings (here referred to as Type-I approaches) (Petersen & Jung, 2020; Bertrand et al., 2019; Massias et al., 2018) as well as in hierarchical Bayesian settings (referred to as Type-II) (Wipf & Rao, 2007; Zhang & Rao, 2011; Hashemi et al., 2020; Cai et al., 2020a). Most contributions in the literature are, however, limited to the estimation of only a diagonal noise covariance (i.e., independent between different measurements) (Daye et al., 2012; Van de Geer et al., 2013; Dalalyan et al., 2013; Lederer & Muller, 2015). Considering a diagonal noise covariance is a limiting assumption in practice as the noise interference in many realistic scenarios are highly correlated across measurements; and thus, have non-trivial off-diagonal elements.
This paper develops an efficient optimization algorithm for jointly estimating the posterior of regression parameters as well as the noise distribution. More specifically, we consider linear regression with Gaussian scale mixture priors on the parameters and a full-structure multivariate Gaussian noise. We cast the problem as a hierarchical Bayesian (type-II maximum-likelihood) regression problem, in which the variance hyperparameters and the noise covariance matrix are optimized by maximizing the Bayesian evidence of the model. Using Riemannian geometry, we derive an efficient algorithm for jointly estimating the source and noise covariances along the manifold of positive definite (P.D.) matrices.
To highlight the benefits of our proposed method in practical scenarios, we consider the problem of electromagnetic brain source imaging (BSI). The goal of BSI is to reconstruct brain activity
from magneto- or electroencephalography (M/EEG), which can be formulated as a sparse Bayesian learning (SBL) problem. Specifically, it can be cast as a linear Bayesian regression model with independent Gaussian scale mixture priors on the parameters and noise. As a departure from the classical SBL approaches, here we specifically consider Gaussian noise with full covariance structure. Prominent source of correlated noise in this context are, for example, eye blinks, heart beats, muscular artifacts and line noise. Other realistic examples for the need for such full-structure noise can be found in the areas of array processing (Li & Nehorai, 2010) or direction of arrival (DOA) estimation (Chen et al., 2008). Algorithms that can accurately estimate noise with full covariance structure are expected to achieve more accurate regression models and predictions in this setting.
2 TYPE-II BAYESIAN REGRESSION
We consider the linear model Y = LX + E, in which a forward or design matrix, L ∈ RM×N , is mapped to the measurements, Y, by a set of coefficients or source components, X. Depending on the setting, the problem of estimating X given L and Y is called an inverse problem in physics, a multitask regression problem in machine learning, or a multiple measurement vector (MMV) recovery problem in signal processing (Cotter et al., 2005). Adopting a signal processing terminology, the measurement matrix Y ∈ RM×T captures the activity of M sensors at T time instants, y(t) ∈ RM×1, t = 1, . . . , T , while the source matrix, X ∈ RN×T , consists of the unknown activity of N sources at the same time instants, x(t) ∈ RN×1, t = 1, . . . , T . The matrix E = [e(1), . . . , e(T )] ∈ RM×T represents T time instances of zero-mean Gaussian noise with full covariance Λ, e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T , which is assumed to be independent of the source activations. In this paper, we focus on M/EEG based brain source imaging (BSI) but the proposed algorithm can be used in general regression settings, in particular for sparse signal recovery (Candès et al., 2006; Donoho, 2006) with a wide range of applications (Malioutov et al., 2005). The goal of BSI is to infer the underlying brain activity X from the EEG/MEG measurement Y given a known forward operator, called lead field matrix L. As the number of sensors is typically much smaller than the number of locations of potential brain sources, this inverse problem is highly ill-posed. This problem is addressed by imposing prior distributions on the model parameters and adopting a Bayesian treatment. This can be performed either through Maximum-a-Posteriori (MAP) estimation (Type-I Bayesian learning) (Pascual-Marqui et al., 1994; Gorodnitsky et al., 1995; Haufe et al., 2008; Gramfort et al., 2012; Castaño-Candamil et al., 2015) or, when the model has unknown hyperparameters, through Type-II Maximum-Likelihood estimation (Type-II Bayesian learning) (Mika et al., 2000; Tipping, 2001; Wipf & Nagarajan, 2009; Seeger & Wipf, 2010; Wu et al., 2016).
In this paper, we focus on Type-II Bayesian learning, which assumes a family of prior distributions p(X|Θ) parameterized by a set of hyperparameters Θ. These hyper-parameters can be learned from the data along with the model parameters using a hierarchical Bayesian approach (Tipping, 2001; Wipf & Rao, 2004) through the maximum-likelihood principle:
$$\Theta_{\mathrm{II}} := \underset{\Theta}{\arg\max}\; p(Y\,|\,\Theta) = \underset{\Theta}{\arg\max} \int p(Y\,|\,X,\Theta)\, p(X\,|\,\Theta)\, dX. \qquad (1)$$
Here we assume a zero-mean Gaussian prior with full covariance Γ for the underlying source distribution, x(t) ∈ RN×1 ∼ N (0,Γ), t = 1, . . . , T . Just as most other approaches, Type-II Bayesian learning makes the simplifying assumption of statistical independence between time samples. This leads to the following expression for the distribution of the sources and measurements:
$$p(X\,|\,\Gamma) = \prod_{t=1}^{T} p(x(t)\,|\,\Gamma) = \prod_{t=1}^{T} \mathcal{N}(0, \Gamma) \qquad (2)$$
$$p(Y\,|\,X) = \prod_{t=1}^{T} p(y(t)\,|\,x(t)) = \prod_{t=1}^{T} \mathcal{N}(Lx(t), \Lambda). \qquad (3)$$
The parameters of the Type-II model, Θ, are the unknown source and noise covariances, i.e., Θ = {Γ,Λ}. The unknown parameters Γ and Λ are optimized based on the current estimates of the source and noise covariances in an alternating iterative process. Given initial estimates of Γ and Λ,
the posterior distribution of the sources is a Gaussian of the form (Sekihara & Nagarajan, 2015)
$$p(X\,|\,Y,\Gamma) = \prod_{t=1}^{T} \mathcal{N}(\mu_x(t), \Sigma_x), \;\text{where} \qquad (4)$$
$$\mu_x(t) = \Gamma L^{\top}(\Sigma_y)^{-1} y(t) \qquad (5)$$
$$\Sigma_x = \Gamma - \Gamma L^{\top}(\Sigma_y)^{-1} L \Gamma \qquad (6)$$
$$\Sigma_y = \Lambda + L \Gamma L^{\top}. \qquad (7)$$
The estimated posterior parameters $\mu_x(t)$ and $\Sigma_x$ are then in turn used to update $\Gamma$ and $\Lambda$ as the minimizers of the negative log of the marginal likelihood $p(Y\,|\,\Gamma,\Lambda)$, which is given by (Wipf et al., 2010):
$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(Y\,|\,\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1} y(t) = \log|\Lambda + L\Gamma L^{\top}| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\big(\Lambda + L\Gamma L^{\top}\big)^{-1} y(t), \qquad (8)$$
where | · | denotes the determinant of a matrix. This process is repeated until convergence. Given the final solution of the hyperparameters ΘII = {ΓII,ΛII}, the posterior source distribution is obtained by plugging these estimates into equations 3 to 6.
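To make the alternating scheme concrete, here is a minimal NumPy sketch of the posterior statistics in equations 4–7 and the type-II cost in equation 8 for fixed Γ and Λ; variable names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the posterior mean/covariance (eqs. 4-7) and the type-II negative
# log-likelihood (eq. 8) given fixed source and noise covariances.
import numpy as np

def posterior_and_nll(Y, L, Gamma, Lambda):
    """Y: (M, T) data, L: (M, N) forward matrix, Gamma: (N, N), Lambda: (M, M)."""
    T = Y.shape[1]
    Sigma_y = Lambda + L @ Gamma @ L.T                        # equation 7
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    Mu_x = Gamma @ L.T @ Sigma_y_inv @ Y                      # equation 5, all t at once
    Sigma_x = Gamma - Gamma @ L.T @ Sigma_y_inv @ L @ Gamma   # equation 6
    _, logdet = np.linalg.slogdet(Sigma_y)
    nll = logdet + np.trace(Y.T @ Sigma_y_inv @ Y) / T        # equation 8
    return Mu_x, Sigma_x, nll
```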
3 PROPOSED METHOD: FULL-STRUCTURE NOISE (FUN) LEARNING
Here we propose a novel and efficient algorithm, full-structure noise (FUN) learning, which is able to learn the full covariance structure of the noise jointly within the Bayesian Type-II regression framework. We first formulate the algorithm in its most general form, in which both the noise distribution and the prior have full covariance structure. Later, we make the simplifying assumption of independent source priors, leading to the pruning of the majority of sources. This effect, which has also been referred to as automatic relevance determination (ARD) or sparse Bayesian learning (SBL) is beneficial in our application of interest, namely the reconstruction of parsimonious sets of brain sources underlying experimental EEG measurements.
Note that the Type-II cost function in equation 8 is non-convex and thus non-trivial to optimize. A number of iterative algorithms such as majorization-minimization (MM) (Sun et al., 2017) have been proposed to address this challenge. Following the MM scheme, we first construct convex surrogate functions that majorize LII(Γ,Λ) in each iteration of the optimization algorithm. Then, we show the minimization equivalence between the constructed majorizing functions and equation 8. This result is presented in the following theorem: Theorem 1. Let Λk and Σky be fixed values obtained in the (k)-th iteration of the optimization algorithm minimizing LII(Γ,Λ). Then, optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ), with respect to Γ is equivalent to optimizing the following convex function, which majorizes equation 8:
Lconvsource(Γ,Λk) = tr( ( CkS )−1 Γ) + tr(MkSΓ −1) , (9)
where CkS and M k S are defined as:
$$C_S^k := \big(L^{\top}(\Sigma_y^k)^{-1}L\big)^{-1}, \qquad M_S^k := \frac{1}{T}\sum_{t=1}^{T} x^k(t)\,x^k(t)^{\top}. \qquad (10)$$
Similarly, optimizing LII(Γ,Λ) with respect to Λ is equivalent to optimizing the following convex majorizing function:
Lconvnoise(Γk,Λ) = tr( ( CkN )−1 Λ) + tr(MkNΛ −1) , (11)
where CkN and M k N are defined as:
$$C_N^k := \Sigma_y^k, \qquad M_N^k := \frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)\big(y(t)-Lx^k(t)\big)^{\top}. \qquad (12)$$
Proof. The proof is presented in Appendix A.
We continue by considering the optimization of the cost functionsLconvsource(Γ,Λk) andLconvnoise(Γk,Λ) with respect to Γ and Λ, respectively. Note that in case of source covariances with full structure, the solution of Lconvsource(Γ,Λk) with respect to Γ lies in the (N2 − N)/2 Riemannian manifold of positive definite (P.D.) matrices. This consideration enables us to invoke efficient methods from Riemannian geometry (see Petersen et al., 2006; Berger, 2012; Jost & Jost, 2008), which ensures that the solution at each step of the optimization is contained within the lower-dimensional solution space. Specifically, in order to optimize for the source covariance, the algorithm calculates the geometric mean between the previously obtained statistical model source covariance, CkS, and the source-space sample covariance matrix, MkS, in each iteration. Analogously, to update the noise covariance estimate, the algorithm calculates the geometric mean between the model noise covariance, CkN, and the empirical sensor-space residuals, M k N. The update rules obtained from this algorithm are presented in the following theorem:
Theorem 2. The cost functions Lconvsource(Γ,Λk) and Lconvnoise(Γk,Λ) are both strictly geodesically convex with respect to the P.D. manifold, and their optimal solution with respect to Γ and Λ, respectively, can be attained according to the two following update rules:
$$\Gamma^{k+1} \leftarrow (C_S^k)^{\frac{1}{2}}\Big((C_S^k)^{-1/2} M_S^k (C_S^k)^{-1/2}\Big)^{\frac{1}{2}} (C_S^k)^{\frac{1}{2}}, \qquad (13)$$
$$\Lambda^{k+1} \leftarrow (C_N^k)^{\frac{1}{2}}\Big((C_N^k)^{-1/2} M_N^k (C_N^k)^{-1/2}\Big)^{\frac{1}{2}} (C_N^k)^{\frac{1}{2}}. \qquad (14)$$
Proof. A detailed proof can be found in Appendix B.
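A minimal sketch of the geometric-mean updates in equations 13 and 14 is given below; using `scipy.linalg.sqrtm` for the principal matrix square root is our own implementation choice and is not prescribed by the paper.

```python
# Sketch of the Riemannian geometric-mean update used in equations 13 and 14.
import numpy as np
from scipy.linalg import sqrtm

def geometric_mean(C, M):
    """Midpoint of the geodesic joining the P.D. matrices C and M,
    i.e. C^{1/2} (C^{-1/2} M C^{-1/2})^{1/2} C^{1/2}."""
    C_half = sqrtm(C)
    C_half_inv = np.linalg.inv(C_half)
    inner = sqrtm(C_half_inv @ M @ C_half_inv)
    return np.real(C_half @ inner @ C_half)  # discard tiny imaginary parts from sqrtm

# Gamma_next  = geometric_mean(C_S, M_S)   # equation 13
# Lambda_next = geometric_mean(C_N, M_N)   # equation 14
```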
Convergence of the resulting algorithm is shown in the following theorem.
Theorem 3. Optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ), with alternating update rules for Γ and Λ in equation 13 and equation 14 leads to an MM algorithm with guaranteed convergence.
Proof. A detailed proof can be found in Appendix C.
While Theorems 1–3 reflect a general joint learning algorithm, the assumption of sources with full covariance structure is often relaxed in practice. The next section will shed light on this important simplification by making a formal connection to SBL algorithms.
3.1 SPARSE BAYESIAN LEARNING WITH FULL NOISE MODELING
In brain source imaging, the assumption of full source covariance is often relaxed. Even if, technically, most parts of the brain are active at all times, and the concurrent activations of different brain regions can never be assumed to be fully uncorrelated, there are many experimental settings in which it is reasonable to assume only a small set of independent brain sources. Such sparse solutions are physiologically plausible in task-based analyses, where only a fraction of the brain’s macroscopic structures is expected to be consistently engaged. A common strategy in this case is to model independent sources through a diagonal covariance matrix. In the Type-II Bayesian learning framework, this simplification interestingly leads to sparsity of the resulting source distributions, as, at the optimum, many of the estimated source variances are zero. This mechanism is known as sparse Bayesian learning and is closely related to the more general concept of automatic relevance determination. Here, we adopt the SBL assumption for the sources, leading to Γ-updates previously described in the BSI literature under the name Champagne (Wipf & Nagarajan, 2009). As a novelty and main focus of this paper, we here equip the SBL framework with the capability to jointly learn full noise covariances through the geometric mean based update rule in equation 14. In the SBL framework, the N modeled brain sources are assumed to follow independent univariate Gaussian distributions with zero mean and distinct unknown variances γn: xn(t) ∼ N (0, γn), n = 1, . . . , N . In the SBL solution, the majority of variances is zero, thus effectively inducing spatial sparsity of the corresponding source activities. For FUN learning, we also impose a diagonal structure on the source covariance matrix, Γ = diag(γ), where γ = [γ1, . . . , γN ]>. By constraining Γ in equation 9
Algorithm 1: Full-structure noise (FUN) learning
Input: The lead field matrix $L \in \mathbb{R}^{M\times N}$ and the measurement vectors $y(t) \in \mathbb{R}^{M\times 1}$, $t = 1, \ldots, T$.
Result: The estimated prior source variances $[\gamma_1, \ldots, \gamma_N]^{\top}$, noise covariance $\Lambda$, the posterior mean $\mu_x(t)$ and covariance $\Sigma_x$ of the sources.
1: Set a random initial value for $\Lambda$ as well as $\gamma = [\gamma_1, \ldots, \gamma_N]^{\top}$, and construct $\Gamma = \mathrm{diag}(\gamma)$.
2: Calculate the statistical covariance $\Sigma_y = \Lambda + L\Gamma L^{\top}$.
Repeat
3: Calculate the posterior mean as $\mu_x(t) = \Gamma L^{\top}(\Sigma_y)^{-1}y(t)$.
4: Calculate $C_S^k$ and $M_S^k$ based on equation 10, and update $\gamma_n$ for $n = 1, \ldots, N$ based on equation 15.
5: Calculate $C_N^k$ and $M_N^k$ based on equation 12, and update $\Lambda$ based on equation 14.
Until the stopping condition is satisfied.
6: Calculate the posterior covariance as $\Sigma_x = \Gamma - \Gamma L^{\top}(\Sigma_y)^{-1}L\Gamma$.
to the set of diagonal matrices, W , we can show that the update rule equation 13 for the source variances simplifies to the following form:
$$\gamma_n^{k+1} \leftarrow \sqrt{\frac{\big[M_S^k\big]_{n,n}}{\big[(C_S^k)^{-1}\big]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\big(x_n^k(t)\big)^2}{L_n^{\top}\big(\Sigma_y^k\big)^{-1} L_n}} \quad \text{for } n = 1, \ldots, N, \qquad (15)$$
where Ln denotes the n-th column of the lead field matrix. Interestingly, equation 15 is identical to the update rule of the Champagne algorithm. A detailed derivation of equation 15 can be found in Appendix D.
Summarizing, the FUN learning approach, just like Champagne and other SBL algorithms, assumes independent Gaussian sources with individual variances (thus, diagonal source covariances), which are updated through equation 15. Departing from the classical SBL setting, which assumes the noise distribution to be known, FUN models noise with full covariance structure, which is updated using equation 14. Algorithm 1 summarizes the used update rules.
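The following compact NumPy sketch mirrors Algorithm 1 at a high level, combining the diagonal source update of equation 15 with the full-structure noise update of equation 14; the initialization, stopping rule, and all names are illustrative assumptions rather than the authors' reference implementation.

```python
# High-level sketch of FUN learning: diagonal source variances (eq. 15) plus a
# full-structure noise covariance updated via the geometric mean (eq. 14).
import numpy as np
from scipy.linalg import sqrtm

def fun_learning(Y, L, n_iter=200, tol=1e-8):
    M, N = L.shape
    T = Y.shape[1]
    gamma = np.ones(N)            # diagonal source variances
    Lambda = np.eye(M)            # full noise covariance
    X = np.zeros((N, T))
    for _ in range(n_iter):
        Sigma_y = Lambda + (L * gamma) @ L.T                 # L diag(gamma) L^T + Lambda
        Sigma_y_inv = np.linalg.inv(Sigma_y)
        X_new = (gamma[:, None] * L.T) @ Sigma_y_inv @ Y     # posterior means (eq. 5)
        # Source update (equation 15)
        gamma = np.sqrt(np.mean(X_new ** 2, axis=1)
                        / np.einsum('mn,mk,kn->n', L, Sigma_y_inv, L))
        # Noise update (equation 14): geometric mean of Sigma_y and the residual covariance
        R = Y - L @ X_new
        M_N = (R @ R.T) / T
        S_half = sqrtm(Sigma_y)
        S_half_inv = np.linalg.inv(S_half)
        Lambda = np.real(S_half @ sqrtm(S_half_inv @ M_N @ S_half_inv) @ S_half)
        # Stop when the reconstructed sources barely change between iterations
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1e-12):
            X = X_new
            break
        X = X_new
    return gamma, Lambda, X
```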
Note that various recent Type-II noise learning schemes for diagonal noise covariance matrices (Hashemi et al., 2020; Cai et al., 2020a) that are rooted in the concept of SBL can be also derived as special cases of FUN learning assuming diagonal source and noise covariances, i.e., Γ,Λ ∈ W . Specifically imposing diagonal structure on the noise covariance matrix for the FUN algorithm, Λ, results in identical noise variance update rules as derived in Cai et al. (2020a) for heteroscedastic, and in Hashemi et al. (2020) for homoscedastic noise. We explicitly demonstrate this connection in Appendix E. Here, we note that heteroscedasticity refers to the common phenomenon that measurements are contaminated with non-uniform noise levels across channels, while homoscedasticity only accounts for uniform noise levels.
4 NUMERICAL SIMULATIONS AND REAL DATA ANALYSIS
Source, Noise and Forward Model: We simulated a sparse set of N0 = 5 active brain sources that were placed at random positions on the cortex. To simulate the electrical neural activity of these sources, T = 200 identically and independently distributed (i.i.d) points were sampled from a Gaussian distribution, yielding sparse source activation vectors x(t). The resulting source distribution, represented as X = [x(1), . . . ,x(T )], was projected to the EEG sensors through application of lead field matrix as the forward operator: Ysignal = LX. The lead field matrix, L ∈ R58×2004, was generated using the New York Head model (Huang et al., 2016) taking into account the realistic anatomy and electrical tissue conductivities of an average human head. Further details regarding forward modeling is provided in Appendix F. Gaussian additive noise was randomly sampled from a zero-mean normal distribution with full covariance matrix Λ: e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T . This setting is further referred to as full-structure noise. Note that we also generated noise with diagonal covariance matrix, referred to as heteroscedastic noise, in order to investigate the effect of model violation on reconstruction performance. The
noise matrix $E = [e(1), \ldots, e(T)] \in \mathbb{R}^{M\times T}$ was normalized by its Frobenius norm and added to the signal matrix $Y_{\mathrm{signal}}$ as follows: $Y = Y_{\mathrm{signal}} + \big((1-\alpha)\|Y_{\mathrm{signal}}\|_F / (\alpha\|E\|_F)\big)E$, where $\alpha$ determines the signal-to-noise ratio (SNR) in sensor space. Precisely, the SNR is obtained as $\mathrm{SNR} = 20\log_{10}(\alpha/(1-\alpha))$. In the subsequently described experiments the following values of $\alpha$ were used: $\alpha = \{0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.8\}$, which correspond to the following SNRs: $\mathrm{SNR} = \{-12, -7.4, -5.4, -3.5, -1.7, 0, 1.7, 3.5, 5.4, 7.4, 12\}$ (dB). MATLAB codes for producing the results in the simulation study are uploaded here.
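As an illustration of this protocol, the sketch below generates toy data with five active sources and full-structure noise at a target SNR; the random lead field is only a placeholder standing in for the New York Head model, and all names and sizes are our own.

```python
# Sketch of the simulation protocol: sparse Gaussian sources, projection through
# a (placeholder) lead field, and additive full-structure noise scaled via alpha.
import numpy as np

rng = np.random.default_rng(0)
M, N, T, N0, alpha = 58, 2004, 200, 5, 0.5           # alpha = 0.5 -> 0 dB SNR

L = rng.standard_normal((M, N))                       # placeholder lead field
X = np.zeros((N, T))
active = rng.choice(N, size=N0, replace=False)        # 5 randomly placed active sources
X[active] = rng.standard_normal((N0, T))
Y_signal = L @ X

A = rng.standard_normal((M, M))
Lambda_true = A @ A.T / M                             # full-structure noise covariance
E = rng.multivariate_normal(np.zeros(M), Lambda_true, size=T).T
E /= np.linalg.norm(E, 'fro')                         # normalize by Frobenius norm
Y = Y_signal + ((1 - alpha) * np.linalg.norm(Y_signal, 'fro') / alpha) * E
snr_db = 20 * np.log10(alpha / (1 - alpha))           # 0 dB for alpha = 0.5
```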
Evaluation Metrics and Simulation Set-up: We applied the full-structure noise learning approach on the synthetic datasets described above to recover the locations and time courses of the active brain sources. In addition to our proposed approach, two further Type-II Bayesian learning schemes, namely Champagne with homo- and heteroscedastic noise learning (Hashemi et al., 2020; Cai et al., 2020a), were also included as benchmarks with respect to source reconstruction performance and noise covariance estimation accuracy. Source reconstruction performance was evaluated according to the earth mover’s distance (EMD) (Rubner et al., 2000)), the error in the reconstruction of the source time courses, the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source, and finally F1-measure score (Chinchor & Sundheim, 1993). A detailed definition of evaluation metrics is provided in Appendix F. To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices. Each simulation was carried out 100 times using different instances of X and E, and the mean and standard error of the mean (SEM) of each performance measure across repetitions was calculated. Convergence of the optimization programs for each run was defined if the relative change of the Frobenius-norm of the reconstructed sources between subsequent iterations was less than 10−8. A maximum of 1000 iterations was carried out if no convergence was reached beforehand.
Figure 1 shows two simulated datasets with five active sources in presence of full-structure noise (upper panel) as well as heteroscedastic noise (lower panel) at 0 (dB) SNR. Topographic maps depict the locations of the ground-truth active brain sources (first column) along with the source reconstruction result of three noise learning schemes assuming noise with homoscedastic (second column), heteroscedastic (third column), and full (fourth column) structure. For each algorithm, the estimated noise covariance matrix is also plotted above the topographic map. Source reconstruction performance was measured in terms of EMD and time course correlation (Corr), and is summarized in the table next to each panel. Besides, the accuracy of the noise covariance matrix reconstruction was measured on terms of Λsim and NMSE. Results are included in the same table. Figure 1 (upper panel) allows for a direct comparison of the estimated noise covariance matrices obtained from the three different noise learning schemes. It can be seen that FUN learning can better capture the overall structure of ground truth full-structure noise as evidenced by lower NMSE and similarity errors compared to the heteroscedastic and homoscedastic algorithm variants that are only able to recover a diagonal matrix while enforcing the off-diagonal elements to zero. This behaviour results in higher spatial and temporal accuracy (lower EMD and time course error) for FUN learning compared to competing algorithms assuming diagonal noise covariance. This advantage is also visible in the topographic maps. The lower-panel of Figure 1 presents analogous results for the setting where the noise covariance is generated according to a heteroscedastic model. Note that the superior spatial and temporal reconstruction performance of the heteroscedastic noise learning algorithm compared to the full-structure scheme is expected here because the simulated ground truth noise is indeed heteroscedastic. The full-structure noise learning approach, however, provides fairly reasonable performance in terms of EMD, time course correlation (corr), and Λsim, although it is designed to estimate a full-structure noise covariance matrix. The convergence behaviour of all three noise learning variants is also illustrated in Figure 1. Note that the full-structure noise learning approach eventually reaches lower negative log-likelihood values in both scenarios, namely full-structure and heteroscedastic noise.
Figure 2 shows the EMD, the time course reconstruction error, the EUCL and the F1 measure score incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic
(green) and full-structure (blue) noise covariances for a range of 10 SNR values. The upper panel represents the evaluation metrics for the setting where the noise covariance is full-structure model, while the lower-panel depicts the same metric for simulated noise with heteroscedastic diagonal covariance. Concerning the first setting, FUN learning consistently outperforms its homoscedastic and heteroscedastic counterparts according to all evaluation metrics in particular in low-SNR settings. Consequently, as the SNR decreases, the gap between FUN learning and the two other variants increases. Conversely, heteroscedastic noise learning shows an improvement over FUN learning according to all evaluation metrics when the simulated noise is indeed heteroscedastic. However, note that the magnitude of this improvement is not as large as observed for the setting where the noise covariance is generated according to a full-structure model and then is estimated using the FUN approach.
Analysis of Auditory Evoked Fields (AEF): Figure 3 shows the reconstructed sources of the Auditory Evoked Fields (AEF) versus the number of trials from a single representative subject using the FUN learning algorithm. Further details on this dataset can be found in Appendix G. We tested the reconstruction performance of FUN learning with the number of trials limited to 1, 2, 12, 63 and 120. Each reconstruction was performed 30 times with the specific trials themselves chosen as a random subset of all available trials. As the subplots for different trials demonstrate, the FUN learning algorithm is able to correctly localize bilateral auditory activity to Heschl’s gyrus, which is the characteristic location of the primary auditory cortex, from only a few trials or even a single trial.
5 DISCUSSION
This paper focused on sparse regression within the hierarchical Bayesian regression framework and its application in EEG/MEG brain source imaging. To this end we developed an algorithm, which is, however, suitable for a much wider range of applications. What is more, the same concepts used here for full-structure noise learning could be employed in other contexts where hyperparameters like kernel widths in Gaussian process regression (Wu et al., 2019) or dictionary elements in the dictionary learning problem (Dikmen & Févotte, 2012) are to be inferred. Besides, using FUN learning algorithm may also prove useful for practical scenarios in which model residuals are expected to be correlated, e.g., probabilistic canonical correlation analysis (CCA) (Bach & Jordan, 2005), spectral independent component analysis (ICA) (Ablin et al., 2020), wireless communication (Prasad et al., 2015; Gerstoft et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), robust portfolio optimization in finance (Feng et al., 2016), graph learning (Kumar et al., 2020), thermal field reconstruction (Flinth & Hashemi, 2018), and brain functional imaging (Wei et al., 2020).
Noise learning has also attracted attention in functional magnetic resonance imaging (fMRI) (Cai et al., 2016; Shvartsman et al., 2018; Cai et al., 2019b; 2020b; Wei et al., 2020), where various models like matrix-normal (MN), factor analysis (FA), and Gaussian-process (GP) regression have been proposed. The majority of the noise learning algorithms in the fMRI literature rely on the EM framework, which is quite slow in practice and has convergence guarantees only under certain strong conditions. In contrast to these existing approaches, our proposed framework not only applies to the models considered in these papers, but also benefits from theoretically proven convergence guarantees. To be more specific, we showed in this paper that FUN learning is an instance of the wider class of majorization-minimization (MM) framework, for which provable fast convergence is guaranteed. It is worth emphasizing our contribution within the MM optimization context as well. In many MM implementations, surrogate functions are minimized using an iterative approach. Our proposed algorithm, however, obtains a closed-form solution for the surrogate function in each step, which further advances its efficiency.
In the context of BSI, Engemann & Gramfort (2015) proposed a method for selecting a single regularization parameter based on cross-validation and maximum-likelihood estimation, while Huizenga et al. (2002); De Munck et al. (2002); Bijma et al. (2003); De Munck et al. (2004); Ahn & Jun (2011); Jun et al. (2006) and Plis et al. (2006) assume more complex spatiotemporal noise covariance structures. A common limitation of these works is, however, that the noise level is not estimated as part of the source reconstruction problem on task-related data but from separate noise recordings. Our proposed algorithm substantially differs in this respect, as it learns the noise covariance jointly with the brain source distribution. Note that The idea of joint estimation of brain source activity and noise covariance has been previously proposed for Type-I learning methods in (Massias et al., 2018; Bertrand et al., 2019). In contrast to these Type-I methods, FUN is a Type-II method, which learns the prior source distribution as part of the model fitting. Type-II methods have been reported to yield consistently superior results than Type-I methods (Owen et al., 2012; Cai et al., 2019a; 2020a; Hashemi et al., 2020). Our numerical results show that the same hold also for FUN learning, which performs on par or better than existing variants from the Type-II family (including conventional Champagne) in this study. We plan to provide a formal comparison of the performance of noise learning within Type-I and Type-II estimation in our future work.
While being broadly applicable, our approach is also limited by a number of factors. Although Gaussian noise distributions are commonly justified, it would be desirable to also include more robust (e.g., heavy-tailed) non-Gaussian noise distributions in our framework. Another limitation is that the superior performance of the full-structure noise learning technique comes at the expense of higher computational complexity compared to the variants assuming homoscedastic or heteroscedastic structure. Besides, signals in real-world scenarios often lie in a lower-dimensional space compared to the original high-dimensional ambient space due to the particular correlations that inherently exist in the structure of the data. Therefore, imposing physiologically plausible constraints on the noise model, e.g., low-rank or Toeplitz structure, not only provides side information that can be leveraged for the reconstruction but also reduces the computational cost in two ways: a) by reducing the number of parameters and b) by taking advantage of efficient implementations using circular embeddings and the fast Fourier transform (Babu, 2016). Exploring efficient ways to incorporate these structural assumptions within a Riemannian framework is another direction of future work.
6 CONCLUSION
This paper proposes an efficient optimization algorithm for jointly estimating Gaussian regression parameter distributions as well as Gaussian noise distributions with full covariance structure within a hierarchical Bayesian framework. Using the Riemannian geometry of positive definite matrices, we derived an efficient algorithm for jointly estimating source and noise covariances. The benefits of our proposed framework were evaluated within an extensive set of experiments in the context of electromagnetic brain source imaging inverse problem and showed significant improvement upon state-of-the-art techniques in the realistic scenario where the noise has full covariance structure. The performance of our method is assessed through a real data analysis for the auditory evoked field (AEF) dataset.
A PROOF OF THEOREM 1
Proof. We start the proof by recalling equation 8:
$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(Y\,|\,\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1} y(t). \qquad (16)$$
The upper bound on the log |Σy| term can be directly inferred from the concavity of the logdeterminant function and its first-order Taylor expansion around the value from the previous iteration, Σky, which provides the following inequality (Sun et al., 2017, Example 2):
$$\log|\Sigma_y| \le \log|\Sigma_y^k| + \mathrm{tr}\big[(\Sigma_y^k)^{-1}(\Sigma_y - \Sigma_y^k)\big] = \log|\Sigma_y^k| + \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Sigma_y\big] - \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Sigma_y^k\big]. \qquad (17)$$
Note that the first and last terms in equation 17 do not depend on Γ; hence, they can be ignored in the optimization procedure. Now, we decompose Σy into two terms, each of which only contains either the noise or source covariances:
tr [( Σky )−1 Σy ] = tr [( Σky )−1 ( Λ + LΓL> )] = tr [( Σky )−1 Λ ] + tr [( Σky )−1 LΓL> ] . (18)
In the next step, we decompose the second term in equation 8, $\frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t)$, into two terms, each of which is a function of either only the noise or only the source covariances. To this end, we exploit the following relationship between sensor and source space covariances:
$$\frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t) = \frac{1}{T}\sum_{t=1}^{T} x^k(t)^{\top}\Gamma^{-1}x^k(t) + \frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)^{\top}\Lambda^{-1}\big(y(t)-Lx^k(t)\big). \qquad (19)$$
By combining equation 18 and equation 19, rearranging the terms, and ignoring all terms that do not depend on Γ, we have:
$$\mathcal{L}_{\mathrm{II}}(\Gamma) \le \mathrm{tr}\big[(\Sigma_y^k)^{-1}L\Gamma L^{\top}\big] + \frac{1}{T}\sum_{t=1}^{T} x^k(t)^{\top}\Gamma^{-1}x^k(t) + \mathrm{const} = \mathrm{tr}\big((C_S^k)^{-1}\Gamma\big) + \mathrm{tr}\big(M_S^k\Gamma^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) + \mathrm{const}, \qquad (20)$$
where $C_S^k = \big(L^{\top}(\Sigma_y^k)^{-1}L\big)^{-1}$ and $M_S^k = \frac{1}{T}\sum_{t=1}^{T} x^k(t)x^k(t)^{\top}$. Note that constant values in equation 20 do not depend on $\Gamma$; hence, they can be ignored in the optimization procedure. This
proves the equivalence of equation 8 and equation 9 when the optimization is performed with respect to Γ.
The equivalence of equation 8 and equation 11 can be shown analogously, with the difference that we only focus on noise-related terms in equation 18 and equation 19:
$$\mathcal{L}_{\mathrm{II}}(\Lambda) \le \mathrm{tr}\big[(\Sigma_y^k)^{-1}\Lambda\big] + \frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)^{\top}\Lambda^{-1}\big(y(t)-Lx^k(t)\big) + \mathrm{const} = \mathrm{tr}\big((C_N^k)^{-1}\Lambda\big) + \mathrm{tr}\big(M_N^k\Lambda^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda) + \mathrm{const}, \qquad (21)$$
where $C_N^k = \Sigma_y^k$ and $M_N^k = \frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)\big(y(t)-Lx^k(t)\big)^{\top}$. Constant values in equation 21 do not depend on $\Lambda$; hence, they can again be ignored in the optimization procedure. Summarizing, we have shown that optimizing equation 8 is equivalent to optimizing $\mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda)$ and $\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k)$, which concludes the proof.
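As a quick numerical sanity check of the majorization in equation 17 (our own illustration, not part of the original proof), one can verify on random positive definite matrices that the linearization upper-bounds $\log|\Sigma_y|$ and is tight at the expansion point.

```python
# Numerical check of the log-det majorization (equation 17) on random P.D. matrices.
import numpy as np

rng = np.random.default_rng(1)
M = 6

def random_spd(rng, M):
    A = rng.standard_normal((M, M))
    return A @ A.T + M * np.eye(M)   # well-conditioned positive definite matrix

Sigma_k = random_spd(rng, M)          # expansion point Sigma_y^k
Sigma = random_spd(rng, M)            # arbitrary other P.D. matrix
logdet = lambda S: np.linalg.slogdet(S)[1]
bound = lambda S: logdet(Sigma_k) + np.trace(np.linalg.inv(Sigma_k) @ (S - Sigma_k))

assert np.isclose(bound(Sigma_k), logdet(Sigma_k))    # equality at the expansion point
assert bound(Sigma) >= logdet(Sigma) - 1e-10          # upper bound elsewhere (concavity)
```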
B PROOF OF THEOREM 2
Before presenting the proof, the subsequent definitions and propositions are required: Definition 4 (Geodesic path). Let M be a Riemannian manifold, i.e., a differentiable manifold whose tangent space is endowed with an inner product that defines local Euclidean structures. Then, a geodesic between two points onM, denoted by p0,p1 ∈M, is defined as the shortest connecting path between those two points along the manifold, ζl(p0,p1) ∈ M for l ∈ [0, 1], where l = 0 and l = 1 defines the starting and end points of the path, respectively.
In the current context, ζl(p0,p1) defines a geodesic curve on the positive definite (P.D.) manifold joining two P.D. matrices, P0,P1 > 0. The specific pairs of matrices we will deal with are {CkS,MkS} and {CkN,MkN}. Definition 5 (Geodesic on the P.D. manifold). Geodesics on the manifold of P.D. matrices can be shown to form a cone within the embedding space. We denote this manifold by S++. Assume two P.D. matrices P0,P1 ∈ S++. Then, for l ∈ [0, 1], the geodesic curve joining P0 to P1 is defined as (Bhatia, 2009, Chapter. 6):
$$\xi_l(P_0,P_1) = (P_0)^{\frac{1}{2}}\Big((P_0)^{-1/2}P_1(P_0)^{-1/2}\Big)^{l}(P_0)^{\frac{1}{2}}, \quad l \in [0,1]. \qquad (22)$$
Note that P0 and P1 are obtained as the starting and end points of the geodesic path by choosing l = 0 and l = 1, respectively. The midpoint of the geodesic, obtained by setting l = 12 , is called the geometric mean. Note that, according to Definition 5, the following equality holds :
$$\xi_l(\Gamma_0,\Gamma_1)^{-1} = \Big((\Gamma_0)^{1/2}\big((\Gamma_0)^{-1/2}\Gamma_1(\Gamma_0)^{-1/2}\big)^{l}(\Gamma_0)^{1/2}\Big)^{-1} = (\Gamma_0)^{-1/2}\big((\Gamma_0)^{1/2}(\Gamma_1)^{-1}(\Gamma_0)^{1/2}\big)^{l}(\Gamma_0)^{-1/2} = \xi_l(\Gamma_0^{-1},\Gamma_1^{-1}). \qquad (23)$$
Definition 6 (Geodesic convexity). Let p0 and p1 be two arbitrary points on a subset A of a Riemannian manifoldM. Then a real-valued function f with domain A ⊂M with f : A → R is called geodesic convex (g-convex) if the following relation holds:
f (ζl(p0,p1)) ≤ lf(p0) + (1− l)f(p1) , (24) where l ∈ [0, 1] and ζ(p0,p1) denotes the geodesic path connecting two points p0 and p1 as defined in 4. Thus, in analogy to classical convexity, the function f is g-convex if every geodesic ζ(p0,p1) ofM between p0,p1 ∈ A, lies in the g-convex set A. Note that the set A ⊂M is called g-convex, if any geodesics joining an arbitrary pair of points lies completely in A. Remark 7. Note that g-convexity is a generalization of classical (linear) convexity to non-Euclidean (non-linear) geometry and metric spaces. Therefore, it is straightforward to show that all convex functions in Euclidean geometry are also g-convex, where the geodesics between pairs of matrices are simply line segments:
ζl(p0,p1) = lp0 + (1− l)p1 . (25)
For the sake of brevity, we omit a detailed theoretical introduction of g-convexity, and only borrow a result from Zadeh et al. (2016); Sra & Hosseini (2015). Interested readers are referred to Wiesel et al. (2015, Chapter 1) for a gentle introduction to this topic, and Papadopoulos (2005, Chapter. 2) Rapcsak (1991); Ben-Tal (1977); Liberti (2004); Pallaschke & Rolewicz (2013); Bonnabel & Sepulchre (2009); Moakher (2005); Sra & Hosseini (2016); Vishnoi (2018) for more in-depth technical details.
Now we are ready to state the proof, which parallels the one provided in Zadeh et al. (2016, Theorem. 3).
Proof. We only show the proof for Lconvsource(Γ,Λk). The proof for Lconvnoise(Γk,Λ) can be presented analogously; and therefore, is omitted here for brevity. We proceed in two steps. First, we limit our attention to P.D. manifolds and express equation 24 in terms of geodesic paths and functions that lie on this particular space. We then show that Lconvsource(Γ,Λk) is strictly g-convex on this specific domain. In the second step, we then derive the updates rules proposed in equation 13 and equation 14.
B.1 PART I: PROVING G-CONVEXITY OF THE MAJORIZING COST FUNCTIONS
We consider geodesics along the P.D. manifold by setting ζl(p0,p1) to ξl(Γ0,Γ1) as presented in Definition 5, and define f(.) to be f(Γ) = tr(CkSΓ) + tr(M k SΓ −1), representing the cost function Lconvsource(Γ,Λk). We now show that f(Γ) is strictly g-convex on this specific domain. For continuous functions as considered in this paper, fulfilling equation 24 for f(Γ) and ξl(Γ0,Γ1) with l = 1/2 is sufficient to prove strict g-convexity:
$$\mathrm{tr}\big(C_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)\big) + \mathrm{tr}\big(M_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\big) < \tfrac{1}{2}\mathrm{tr}\big(C_S^k\Gamma_0\big) + \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_0^{-1}\big) + \tfrac{1}{2}\mathrm{tr}\big(C_S^k\Gamma_1\big) + \tfrac{1}{2}\mathrm{tr}\big(M_S^k\Gamma_1^{-1}\big). \qquad (26)$$
Given $C_S^k \in S_{++}$, i.e., $C_S^k > 0$, and the operator inequality (Bhatia, 2009, Chapter 4)
$$\xi_{1/2}(\Gamma_0,\Gamma_1) \prec \tfrac{1}{2}\Gamma_0 + \tfrac{1}{2}\Gamma_1, \qquad (27)$$
we have:
tr ( CkSξ1/2(Γ0,Γ1) ) < 1 2 tr ( CkSΓ0 ) + 1 2 tr ( CkSΓ1 ) , (28)
which is derived by multiplying both sides of equation 27 with CkS followed by taking the trace on both sides.
Similarly, we can write the operator inequality for {Γ−10 ,Γ −1 1 } using equation 23 as:
$$\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1} = \xi_{1/2}(\Gamma_0^{-1},\Gamma_1^{-1}) \prec \tfrac{1}{2}\Gamma_0^{-1} + \tfrac{1}{2}\Gamma_1^{-1}, \qquad (29)$$
Multiplying both sides of equation 29 by MkS ∈ S++, and applying the trace operator on both sides leads to:
tr ( MkSξ1/2(Γ0,Γ1) −1 ) < 1 2 tr ( MkSΓ0 −1)+ 1 2 tr ( MkSΓ1 −1) . (30) Summing up equation 28 and equation 30 proves equation 26 and concludes the first part of the proof.
B.2 PART II: DETAILED DERIVATION OF THE UPDATE RULES IN EQUATIONS 13 AND 14
We now present the second part of the proof by deriving the update rules in equations 13 and 14. Since the cost function Lconvsource(Γ,Λk) is strictly g-convex, its optimal solution in the k-th iteration
is unique. More concretely, the optimum can be analytically derived by taking the derivative of equation 9 and setting the result to zero as follows:
∇Lconvsource(Γ,Λk) = ( CkS )−1 − Γ−1MkSΓ−1 = 0 , (31)
which results in
Γ ( CkS )−1 Γ = MkS . (32)
This solution is known as the Riccati equation, and is the geometric mean between CkS and M k S (Davis et al., 2007; Bonnabel & Sepulchre, 2009):
$$\Gamma^{k+1} = (C_S^k)^{\frac{1}{2}}\Big((C_S^k)^{-1/2} M_S^k (C_S^k)^{-1/2}\Big)^{\frac{1}{2}} (C_S^k)^{\frac{1}{2}}.$$
The update rule for the full noise covariance matrix can be derived analogously:
$$\Lambda^{k+1} = (C_N^k)^{\frac{1}{2}}\Big((C_N^k)^{-1/2} M_N^k (C_N^k)^{-1/2}\Big)^{\frac{1}{2}} (C_N^k)^{\frac{1}{2}}.$$
Remark 8. Note that the obtained update rules are closed-form solutions for the surrogate cost functions, equations 9 and 11, which stands in contrast to conventional majorization minimization algorithms (see section C in the appendix), which require iterative procedures in each step of the optimization.
Deriving the update rules in equation 13 and equation 14 concludes the second part of the proof of Theorem 2.
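As a small numerical illustration (ours, not the paper's), one can verify that the geometric mean indeed solves the Riccati equation $\Gamma (C_S^k)^{-1} \Gamma = M_S^k$ from equation 32 for random positive definite inputs.

```python
# Numerical check that the geometric mean of C and M solves Gamma C^{-1} Gamma = M.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
n = 5

def random_spd(rng, n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

C, M = random_spd(rng, n), random_spd(rng, n)
C_half = sqrtm(C)
C_half_inv = np.linalg.inv(C_half)
Gamma = np.real(C_half @ sqrtm(C_half_inv @ M @ C_half_inv) @ C_half)

assert np.allclose(Gamma @ np.linalg.inv(C) @ Gamma, M, atol=1e-6)
```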
C PROOF OF THEOREM 3
In the following, we provide proof for Theorem 3 by showing that alternating update rules for Γ and Λ in equation 13 and equation 14 are guaranteed to converge to a local minimum of the Bayesian Type-II likelihood equation 8. In particular, we will prove that FUN learning is an instance of the general class of majorization-minimization (MM) algorithms, for which this property follows by construction. To this end, we first briefly review theoretical concepts behind the majorizationminimization (MM) algorithmic framework (Hunter & Lange, 2004; Razaviyayn et al., 2013; Jacobson & Fessler, 2007; Wu et al., 2010).
C.1 REQUIRED CONDITIONS FOR MAJORIZATION-MINIMIZATION ALGORITHMS
MM encompasses a family of iterative algorithms for optimizing general non-linear cost functions. The main idea behind MM is to replace the original cost function in each iteration by an upper bound, also known as majorizing function, whose minimum is easy to find. The MM class covers a broad range of common optimization algorithms such as convex-concave procedures (CCCP) and proximal methods (Sun et al., 2017, Section IV), (Mjolsness & Garrett, 1990; Yuille & Rangarajan, 2003; Lipp & Boyd, 2016). Such algorithms have been applied in various domains such as brain source imaging (Hashemi & Haufe, 2018; Bekhti et al., 2018; Cai et al., 2020a; Hashemi et al., 2020), wireless communication systems with massive MIMO technology (Masood et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), and non-negative matrix factorization (Fagot et al., 2019). Interested readers are referred to Sun et al. (2017) for an extensive list of applications on MM.
The problem of minimizing a continuous function f(u) within a closed convex set U ⊂ Rn:
$$\min_{u}\; f(u) \quad \text{subject to} \quad u \in \mathcal{U}, \qquad (33)$$
within the MM framework can be summarized as follows. First, construct a continuous surrogate function $g(u\,|\,u^k)$ that majorizes, or upper-bounds, the original function $f(u)$ and coincides with $f(u)$ at a given point $u^k$:
[A1] $g(u^k\,|\,u^k) = f(u^k) \quad \forall\, u^k \in \mathcal{U}$
[A2] $g(u\,|\,u^k) \ge f(u) \quad \forall\, u, u^k \in \mathcal{U}$.
Second, starting from an initial value u0, generate a sequence of feasible points u1,u2, . . . ,uk,uk+1 as solutions of a series of successive simple optimization problems, where
[A3] uk+1 := arg min u∈U g(u|uk) .
If a surrogate function fulfills conditions [A1]–[A3], then the value of the cost function f decreases in each iteration: f(uk+1) ≤ f(uk). For the smooth functions considered in this paper, we further require that the derivatives of the original and surrogate functions coincide at uk:
[A4] ∇g(uk|uk) = ∇f(uk) ∀ uk ∈ U .
We can then formulate the following theorem:
Theorem 9. Assume that an MM algorithm fulfills conditions [A1]–[A4]. Then, every limit point of the sequence of minimizers generated in [A3], is a stationary point of the original optimization problem in equation 33.
Proof. A detailed proof is provided in Razaviyayn et al. (2013, Theorem 1).
C.2 DETAILED DERIVATION OF THE PROOF OF THEOREM 3
We now show that FUN learning is an instance of majorization-minimization as defined above, which fulfills Theorem 9.
Proof. We need to prove that conditions [A1]–[A4] are fulfilled for FUN learning. To this end, we recall the upper bound on log |Σy| in equation 17, which fulfills condition [A2] since it majorizes log |Σy| as a result of the concavity of the log-determinant function and its first-order Taylor expansion around Σky. Besides, it automatically satisfies conditions [A1] and [A4] by construction, because the majorizing function in equation 17 is obtained through a Taylor expansion around Σky. Concretely, [A1] is satisfied because the equality in equation 17 holds for Σy = Σky. Similarly, [A4]
is satisfied because the gradient of $\log|\Sigma_y|$ at the point $\Sigma_y^k$, namely $(\Sigma_y^k)^{-1}$, defines the linear Taylor approximation $\log|\Sigma_y^k| + \mathrm{tr}\big[(\Sigma_y^k)^{-1}(\Sigma_y - \Sigma_y^k)\big]$. Thus, both gradients coincide at $\Sigma_y^k$ by construction.
Now, we prove that [A3] can be satisfied by showing that Lconvsource(Γ,Λk) reaches its global minimum in each MM iteration. This is guaranteed if Lconvsource(Γ,Λk) can be shown to be convex or g-convex with respect to Γ. To this end, we first require the subsequent proposition:
Proposition 10. Any local minimum of a g-convex function over a g-convex set is a global minimum.
Proof. A detailed proof is presented in Rapcsak (1991, Theorem 2.1).
Given the proof presented in appendix B.1, we can conclude that equation 20 is g-convex; hence, any local minimum ofLconvsource(Γ,Λk) is a global minimum according to Proposition 10. This proves that condition [A3] is fulfilled and completes the proof that the optimization of equation 8 with respect to Γ using the convex surrogate cost function equation 9 leads to an MM algorithm. For the sake of brevity, we omit the proof for the optimization with respect to Λ based on the convex surrogate function in equation 11, Lconvnoise(Γk,Λ), as it can be presented, analogously.
D DERIVATION OF CHAMPAGNE AS A SPECIAL CASE OF FUN LEARNING
We start the derivation of update rule equation 15 by constraining Γ to the set of diagonal matrices W: Γ = diag(γ), where γ = [γ1, . . . , γN ]>. We continue by rewriting the constrained optimization with respect to the source covariance matrix,
$$\Gamma^{k+1} = \underset{\Gamma \in \mathcal{W},\; \Lambda=\Lambda^k}{\arg\min} \; \mathrm{tr}\big((C_S^k)^{-1}\Gamma\big) + \mathrm{tr}\big(M_S^k\,\Gamma^{-1}\big), \qquad (34)$$
as follows:
$$\gamma^{k+1} = \underset{\gamma,\; \Lambda=\Lambda^k}{\arg\min} \; \underbrace{\mathrm{diag}\big[(C_S^k)^{-1}\big]^{\top}\gamma + \mathrm{diag}\big[M_S^k\big]^{\top}\gamma^{-1}}_{\mathcal{L}^{\mathrm{diag}}_{\mathrm{source}}(\gamma\,|\,\gamma^k)}, \qquad (35)$$
where $\gamma^{-1} = [\gamma_1^{-1}, \ldots, \gamma_N^{-1}]^{\top}$ is defined as the element-wise inversion of $\gamma$, and $L_n$ denotes the $n$-th column of the lead field matrix. The optimization with respect to the scalar source variances is then carried out by taking the derivative of equation 35 with respect to $\gamma_n$, for $n = 1, \ldots, N$, and setting it to zero,
$$\frac{\partial}{\partial\gamma_n}\Big(\big[(C_S^k)^{-1}\big]_{n,n}\gamma_n + \big[M_S^k\big]_{n,n}\gamma_n^{-1}\Big) = \big[(C_S^k)^{-1}\big]_{n,n} - \frac{1}{\gamma_n^2}\big[M_S^k\big]_{n,n} = 0 \quad \text{for } n = 1, \ldots, N.$$
This yields the following update rule
$$\gamma_n^{k+1} \leftarrow \sqrt{\frac{\big[M_S^k\big]_{n,n}}{\big[(C_S^k)^{-1}\big]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\big(x_n^k(t)\big)^2}{L_n^{\top}\big(\Sigma_y^k\big)^{-1}L_n}} \quad \text{for } n = 1, \ldots, N,$$
which is identical to the update rule of Champagne (Wipf & Nagarajan, 2009).
E DERIVATION OF CHAMPAGNE WITH HETEROSCEDASTIC NOISE LEARNING AS A SPECIAL CASE OF FUN LEARNING
Similar to Appendix D, we start by constraining $\Lambda$ to the set of diagonal matrices $\mathcal{W}$: $\Lambda = \mathrm{diag}(\lambda)$, where $\lambda = [\lambda_1, \ldots, \lambda_M]^{\top}$. We continue by reformulating the constrained optimization with respect to the noise covariance matrix,
$$\Lambda^{k+1} = \underset{\Lambda \in \mathcal{W},\; \Gamma=\Gamma^k}{\arg\min} \; \mathrm{tr}\big((C_N^k)^{-1}\Lambda\big) + \mathrm{tr}\big(M_N^k\,\Lambda^{-1}\big), \qquad (36)$$
as follows:
$$\lambda^{k+1} = \underset{\lambda,\; \Gamma=\Gamma^k}{\arg\min} \; \underbrace{\mathrm{diag}\big[(C_N^k)^{-1}\big]^{\top}\lambda + \mathrm{diag}\big[M_N^k\big]^{\top}\lambda^{-1}}_{\mathcal{L}^{\mathrm{diag}}_{\mathrm{noise}}(\lambda\,|\,\lambda^k)}, \qquad (37)$$
where $\lambda^{-1} = [\lambda_1^{-1}, \ldots, \lambda_M^{-1}]^{\top}$ is defined as the element-wise inversion of $\lambda$. The optimization with respect to the scalar noise variances then proceeds by taking the derivative of equation 37 with respect to $\lambda_m$, for $m = 1, \ldots, M$, and setting it to zero:
$$\frac{\partial}{\partial \lambda_m}\Big(\big[(C_N^k)^{-1}\big]_{m,m}\lambda_m + \big[M_N^k\big]_{m,m}\lambda_m^{-1}\Big) = \big[(C_N^k)^{-1}\big]_{m,m} - \frac{1}{(\lambda_m)^2}\big[M_N^k\big]_{m,m} = 0 \quad \text{for } m = 1, \ldots, M.
$$
This yields the following update rule:
$$\lambda_m^{k+1} \leftarrow \sqrt{\frac{\big[M_N^k\big]_{m,m}}{\big[(C_N^k)^{-1}\big]_{m,m}}} = \sqrt{\frac{\Big[\frac{1}{T}\sum_{t=1}^{T}\big(y(t)-Lx^k(t)\big)\big(y(t)-Lx^k(t)\big)^{\top}\Big]_{m,m}}{\big[\big(\Sigma_y^k\big)^{-1}\big]_{m,m}}} \quad \text{for } m = 1, \ldots, M, \tag{38}$$
which is identical to the update rule of the Champagne with heteroscedastic noise learning as presented in Cai et al. (2020a).
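Analogously, a minimal NumPy sketch of the heteroscedastic noise-variance update in equation 38 could read as follows; this is an illustration under assumed array shapes, not the reference implementation.

```python
import numpy as np

def heteroscedastic_lambda_update(Y, L, X_k, Sigma_y_k):
    """One per-sensor noise-variance update as in equation 38."""
    R = Y - L @ X_k                              # sensor-space residuals
    num = np.mean(R ** 2, axis=1)                # diagonal of M_N^k
    den = np.diag(np.linalg.inv(Sigma_y_k))      # diagonal of (Sigma_y^k)^{-1}
    return np.sqrt(num / den)
```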
Figure 4: Accuracy of the noise covariance matrix reconstruction incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic (green) and full-structure (blue) noise covariances. The ground-truth noise covariance matrix is either full-structure (upper row) or heteroscedastic diagonal (lower row). Performance is assessed in terms of the Pearson correlation between the entries of the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim (left column). Shown is the similarity error 1 − Λsim. Further, the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂−Λ||2F /||Λ||2F is reported (right column).
F PSEUDO-EEG SIGNAL GENERATION
Our simulation setting is an adaptation of the EEG inverse problem, where brain activity is to be reconstructed from simulated pseudo-EEG data (Haufe & Ewald, 2016).
Forward Modeling: Populations of pyramidal neurons in the cortical gray matter are known to be the main drivers of the EEG signal (Hämäläinen et al., 1993; Baillet et al., 2001). Here, we use a realistic volume conductor model of the human head to model the linear relationship between primary electrical source currents generated within these populations and the resulting scalp surface potentials captured by EEG electrodes. The lead field matrix, L ∈ R^{58×2004}, was generated using the New York Head model (Huang et al., 2016), taking into account the realistic anatomy and electrical tissue conductivities of an average human head. In this model, 2004 dipolar current sources were placed evenly on the cortical surface and 58 sensors were considered; the lead field matrix was computed using the finite element method. Note that the orientation of all source currents was fixed to be perpendicular to the cortical surface, so that only scalar source amplitudes needed to be estimated.
Evaluation Metrics: Source reconstruction performance was evaluated according to the following metrics. First, the earth mover's distance (EMD) (Rubner et al., 2000; Haufe et al., 2008) was used to quantify the spatial localization accuracy. The EMD measures the cost needed to transform two probability distributions defined on the same metric domain (in this case, distributions of the true and estimated sources defined in 3D Euclidean brain space) into each other. EMD scores were normalized to [0, 1]. Second, the error in the reconstruction of the source time courses was measured. To this end, the Pearson correlation between all pairs of simulated and reconstructed (i.e., those with non-zero activations) source time courses was assessed as the mean of the absolute correlations obtained for each source, after optimally matching simulated and reconstructed sources based on maximal absolute correlation. We also report another metric for evaluating the localization error: the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source. For assessing the recovery of the true support, we also compute F1-measure scores (Chinchor & Sundheim, 1993; van Rijsbergen, 1979): F1 = 2·TP/(P + TP + FP), where P denotes the number of true active sources, while TP and FP are the numbers of true and false positive predictions. Note that perfect support recovery, i.e., F1 = 1, is only achieved when there is a perfect correspondence between ground-truth and estimated support.
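As a small illustration of the support-recovery metric, a hypothetical NumPy helper computing the F1-measure from ground-truth and estimated active-source index sets might look as follows:

```python
import numpy as np

def f1_support_recovery(true_idx, est_idx):
    """F1 = 2*TP / (P + TP + FP), with P the number of true active sources."""
    true_set, est_set = set(np.atleast_1d(true_idx)), set(np.atleast_1d(est_idx))
    tp = len(true_set & est_set)
    fp = len(est_set - true_set)
    return 2 * tp / (len(true_set) + tp + fp)
```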
To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as: NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Similarity error was then defined as one minus the Pearson correlation: 1 − Λsim. Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices.
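For illustration, these two noise-covariance metrics can be computed with a few lines of NumPy; Lambda_true and Lambda_hat below stand for the simulated and estimated covariance matrices.

```python
import numpy as np

def noise_cov_metrics(Lambda_true, Lambda_hat):
    """Similarity error (1 - Pearson correlation of entries) and NMSE."""
    corr = np.corrcoef(Lambda_true.ravel(), Lambda_hat.ravel())[0, 1]
    nmse = (np.linalg.norm(Lambda_hat - Lambda_true, 'fro') ** 2
            / np.linalg.norm(Lambda_true, 'fro') ** 2)
    return 1.0 - corr, nmse
```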
Evaluating the accuracy of the noise covariance matrix estimation: Figure 4 depicts the accuracy with which the covariance matrix is reconstructed by three different noise learning approaches assuming noise with homoscedastic (red), heteroscedastic (green) and full (blue) structure. The ground truth noise covariance matrix either had full (upper row) or heteroscedastic (lower row) structure. Performance was measured in terms of similarity error and NMSE. Similar to the trend observed in Figure 2, full-structure noise learning leads to better noise covariance estimation accuracy (lower NMSE and similariy error) for the full-structure noise model, while superior reconstruction performance is achieved for heteroscedastic noise learning when true noise covariance is heteroscedastic.
G FURTHER DETAILS ON AUDITORY EVOKED FIELDS (AEF) DATASET
The MEG data used in this article were acquired in the Biomagnetic Imaging Laboratory at the University of California San Francisco (UCSF) with a CTF Omega 2000 whole-head MEG system from VSM MedTech (Coquitlam, BC, Canada) with a 1200 Hz sampling rate. The lead field for each subject was calculated with NUTMEG (Dalal et al., 2004) using a single-sphere head model (two spherical orientation lead fields) and an 8 mm voxel grid. Each column was normalized to have a norm of unity. The neural responses of one subject to an Auditory Evoked Fields (AEF) stimulus were localized. The AEF response was elicited with single 600 ms duration tones (1 kHz) presented binaurally. 120 trials were collected for the AEF dataset. The data were first digitally filtered from 1 to 70 Hz to remove artifacts and DC offset, time-aligned to the stimulus, and then averaged across the following numbers of trials: {1, 2, 12, 63, 120}. The pre-stimulus window was selected to be −100 ms to −5 ms and the post-stimulus time window was selected to be 60 ms to 180 ms, where 0 ms is the onset of the tone (Wipf et al., 2010; Dalal et al., 2011; Owen et al., 2012; Cai et al., 2019a).

1. What is the focus of the paper regarding Type-II ML regression models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical contribution and novelty?
3. How does the reviewer assess the relevance and suitability of the paper for the ICLR community?
4. Are there any questions or concerns regarding the paper's experimental results and their demonstration of the method's effectiveness?
5. How does the reviewer evaluate the paper's overall quality and impact, and what suggestions do they have for improving its relevance and applicability to the broader field?

Review
Summary: The paper generalised Type-II ML regression models for scenarios where different noise dimensions cannot be assumed independent, but instead one needs to model the full covariance structure. This is clearly an important problem and it is well motivated in the work.
Reasons for score: I recommend rejecting the paper even though it represents high-quality work in statistics, because I think it is somewhat tangentially related to ICLR and the contribution would be better appreciated in a different venue.
Strong points: (1) Addresses an important problem. (2) Seems to work well in practice
Weaknesses: (1) Limited conceptual novelty. (2) Technical contribution hidden in the Appendix.
Detailed review: The work addresses a relevant statistical question of accounting for correlated noise in hierarchical linear regression, but feels somewhat of a poor fit for ICLR. It formally fits within the scope, but still feels out of place in the sense that neither readers interested in the theoretical contributions nor people looking to apply these methods would consider ICLR as a natural venue to look for the information. The development is restricted to a specific, relatively simple, model family that is frequently used in several fields but that is not at the core of the ICLR community. This is highlighted also by the fact that the technical contribution is largely in statistical properties of the covariance estimator, and for this audience gets hidden in the Appendix. Consequently, I believe that paper would much more naturally fit into a publication forum in statistics.
The proposed approach itself is sound and well developed. Accounting for correlated noise is a very obvious thing to do, but the technical details are non-trivial. The authors rely on Riemannian optimisation for covariance matrices and are able to use the recent Champagne algorithm for SBL. The detailed derivation of Theorem 2 shows non-trivial technical contribution, but remains somewhat isolated as it is hidden in the Appendix. For example, there is no discussion on whether the result derived here would have uses also in other model families. I can see several potential uses for better tools for learning full covariance noise e.g. in matrix factorisation models (e.g. probabilistic CCA relies on covariance estimates) or non-linear regression models, but the authors do not discuss this at all. A proper discussion on this would be important to link the work more closely to the broader activities in the field, to extend the contribution beyond the current viewpoint of a very specific model.
The empirical experiments are well carried out and demonstrate the value of learning the full covariance matrix compared to methods that only operate with diagonal noise. This is sufficient, since no clear comparison methods accounting for full covariance are available. |
ICLR | Title
Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models
Abstract
We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where both hyperparameters need to be estimated jointly from data. This problem has application in many domains in imaging including biomagnetic inverse problems. Crucial factors influencing accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of noise covariance matrices with full structure. Here, we consider the reconstruction of brain activity from electroencephalography (EEG). This inverse problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from classical sparse Bayesan learning (SBL) models where across-sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using Riemannian geometry, we derive an efficient algorithm for updating both source and noise covariance along the manifold of positive definite matrices. Using the majorization-maximization framework, we demonstrate that our algorithm has guaranteed and fast convergence. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured.
1 INTRODUCTION
Having precise knowledge of the noise distribution is a fundamental requirement for obtaining accurate solutions in many regression problems (Bungert et al., 2020). In many applications however, it is impossible to separately estimate this noise distribution, as distinct ”noise-only” (baseline) measurements are not feasible. An alternative, therefore, is to design estimators that jointly optimize over the regression coefficients as well as over parameters of the noise distribution. This has been pursued both in a (penalized) maximum-likelihood settings (here referred to as Type-I approaches) (Petersen & Jung, 2020; Bertrand et al., 2019; Massias et al., 2018) as well as in hierarchical Bayesian settings (referred to as Type-II) (Wipf & Rao, 2007; Zhang & Rao, 2011; Hashemi et al., 2020; Cai et al., 2020a). Most contributions in the literature are, however, limited to the estimation of only a diagonal noise covariance (i.e., independent between different measurements) (Daye et al., 2012; Van de Geer et al., 2013; Dalalyan et al., 2013; Lederer & Muller, 2015). Considering a diagonal noise covariance is a limiting assumption in practice as the noise interference in many realistic scenarios are highly correlated across measurements; and thus, have non-trivial off-diagonal elements.
This paper develops an efficient optimization algorithm for jointly estimating the posterior of regression parameters as well as the noise distribution. More specifically, we consider linear regression with Gaussian scale mixture priors on the parameters and a full-structure multivariate Gaussian noise. We cast the problem as a hierarchical Bayesian (type-II maximum-likelihood) regression problem, in which the variance hyperparameters and the noise covariance matrix are optimized by maximizing the Bayesian evidence of the model. Using Riemannian geometry, we derive an efficient algorithm for jointly estimating the source and noise covariances along the manifold of positive definite (P.D.) matrices.
To highlight the benefits of our proposed method in practical scenarios, we consider the problem of electromagnetic brain source imaging (BSI). The goal of BSI is to reconstruct brain activity
from magneto- or electroencephalography (M/EEG), which can be formulated as a sparse Bayesian learning (SBL) problem. Specifically, it can be cast as a linear Bayesian regression model with independent Gaussian scale mixture priors on the parameters and noise. As a departure from the classical SBL approaches, here we specifically consider Gaussian noise with full covariance structure. Prominent source of correlated noise in this context are, for example, eye blinks, heart beats, muscular artifacts and line noise. Other realistic examples for the need for such full-structure noise can be found in the areas of array processing (Li & Nehorai, 2010) or direction of arrival (DOA) estimation (Chen et al., 2008). Algorithms that can accurately estimate noise with full covariance structure are expected to achieve more accurate regression models and predictions in this setting.
2 TYPE-II BAYESIAN REGRESSION
We consider the linear model Y = LX + E, in which a forward or design matrix, L ∈ RM×N , is mapped to the measurements, Y, by a set of coefficients or source components, X. Depending on the setting, the problem of estimating X given L and Y is called an inverse problem in physics, a multitask regression problem in machine learning, or a multiple measurement vector (MMV) recovery problem in signal processing (Cotter et al., 2005). Adopting a signal processing terminology, the measurement matrix Y ∈ RM×T captures the activity of M sensors at T time instants, y(t) ∈ RM×1, t = 1, . . . , T , while the source matrix, X ∈ RN×T , consists of the unknown activity of N sources at the same time instants, x(t) ∈ RN×1, t = 1, . . . , T . The matrix E = [e(1), . . . , e(T )] ∈ RM×T represents T time instances of zero-mean Gaussian noise with full covariance Λ, e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T , which is assumed to be independent of the source activations. In this paper, we focus on M/EEG based brain source imaging (BSI) but the proposed algorithm can be used in general regression settings, in particular for sparse signal recovery (Candès et al., 2006; Donoho, 2006) with a wide range of applications (Malioutov et al., 2005). The goal of BSI is to infer the underlying brain activity X from the EEG/MEG measurement Y given a known forward operator, called lead field matrix L. As the number of sensors is typically much smaller than the number of locations of potential brain sources, this inverse problem is highly ill-posed. This problem is addressed by imposing prior distributions on the model parameters and adopting a Bayesian treatment. This can be performed either through Maximum-a-Posteriori (MAP) estimation (Type-I Bayesian learning) (Pascual-Marqui et al., 1994; Gorodnitsky et al., 1995; Haufe et al., 2008; Gramfort et al., 2012; Castaño-Candamil et al., 2015) or, when the model has unknown hyperparameters, through Type-II Maximum-Likelihood estimation (Type-II Bayesian learning) (Mika et al., 2000; Tipping, 2001; Wipf & Nagarajan, 2009; Seeger & Wipf, 2010; Wu et al., 2016).
In this paper, we focus on Type-II Bayesian learning, which assumes a family of prior distributions p(X|Θ) parameterized by a set of hyperparameters Θ. These hyper-parameters can be learned from the data along with the model parameters using a hierarchical Bayesian approach (Tipping, 2001; Wipf & Rao, 2004) through the maximum-likelihood principle:
$$\Theta_{\mathrm{II}} := \arg\max_{\Theta}\ p(Y\,|\,\Theta) = \arg\max_{\Theta} \int p(Y\,|\,X,\Theta)\,p(X\,|\,\Theta)\,\mathrm{d}X. \tag{1}$$
Here we assume a zero-mean Gaussian prior with full covariance Γ for the underlying source distribution, x(t) ∈ RN×1 ∼ N (0,Γ), t = 1, . . . , T . Just as most other approaches, Type-II Bayesian learning makes the simplifying assumption of statistical independence between time samples. This leads to the following expression for the distribution of the sources and measurements:
$$p(X\,|\,\Gamma) = \prod_{t=1}^{T} p(x(t)\,|\,\Gamma) = \prod_{t=1}^{T} \mathcal{N}(0, \Gamma) \tag{2}$$
$$p(Y\,|\,X) = \prod_{t=1}^{T} p(y(t)\,|\,x(t)) = \prod_{t=1}^{T} \mathcal{N}(Lx(t), \Lambda). \tag{3}$$
The parameters of the Type-II model, Θ, are the unknown source and noise covariances, i.e., Θ = {Γ,Λ}. The unknown parameters Γ and Λ are optimized based on the current estimates of the source and noise covariances in an alternating iterative process. Given initial estimates of Γ and Λ,
the posterior distribution of the sources is a Gaussian of the form (Sekihara & Nagarajan, 2015)
$$p(X\,|\,Y,\Gamma) = \prod_{t=1}^{T} \mathcal{N}(\mu_x(t), \Sigma_x), \quad \text{where} \tag{4}$$
$$\mu_x(t) = \Gamma L^{\top}(\Sigma_y)^{-1} y(t) \tag{5}$$
$$\Sigma_x = \Gamma - \Gamma L^{\top}(\Sigma_y)^{-1} L \Gamma \tag{6}$$
$$\Sigma_y = \Lambda + L \Gamma L^{\top}. \tag{7}$$
The estimated posterior parameters $\mu_x(t)$ and $\Sigma_x$ are then in turn used to update $\Gamma$ and $\Lambda$ as the minimizers of the negative log of the marginal likelihood $p(Y\,|\,\Gamma,\Lambda)$, which is given by (Wipf et al., 2010):
$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(Y\,|\,\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t) = \log\big|\Lambda + L\Gamma L^{\top}\big| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\big(\Lambda + L\Gamma L^{\top}\big)^{-1} y(t), \tag{8}$$
where $|\cdot|$ denotes the determinant of a matrix. This process is repeated until convergence. Given the final solution of the hyperparameters $\Theta_{\mathrm{II}} = \{\Gamma_{\mathrm{II}}, \Lambda_{\mathrm{II}}\}$, the posterior source distribution is obtained by plugging these estimates into equations 4 to 7.
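For concreteness, a minimal NumPy sketch of the posterior computation in equations 5–7 is given below; the variable names are ours, and the naive matrix inversion is used purely for readability.

```python
import numpy as np

def posterior_sources(L, Y, Gamma, Lambda):
    """Posterior mean and covariance of the sources, equations 5-7."""
    Sigma_y = Lambda + L @ Gamma @ L.T          # equation 7
    K = Gamma @ L.T @ np.linalg.inv(Sigma_y)    # gain applied to the data
    Mu_x = K @ Y                                # equation 5, all time points at once
    Sigma_x = Gamma - K @ L @ Gamma             # equation 6
    return Mu_x, Sigma_x, Sigma_y
```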
3 PROPOSED METHOD: FULL-STRUCTURE NOISE (FUN) LEARNING
Here we propose a novel and efficient algorithm, full-structure noise (FUN) learning, which is able to learn the full covariance structure of the noise jointly within the Bayesian Type-II regression framework. We first formulate the algorithm in its most general form, in which both the noise distribution and the prior have full covariance structure. Later, we make the simplifying assumption of independent source priors, leading to the pruning of the majority of sources. This effect, which has also been referred to as automatic relevance determination (ARD) or sparse Bayesian learning (SBL) is beneficial in our application of interest, namely the reconstruction of parsimonious sets of brain sources underlying experimental EEG measurements.
Note that the Type-II cost function in equation 8 is non-convex and thus non-trivial to optimize. A number of iterative algorithms such as majorization-minimization (MM) (Sun et al., 2017) have been proposed to address this challenge. Following the MM scheme, we first construct convex surrogate functions that majorizes LII(Γ,Λ) in each iteration of the optimization algorithm. Then, we show the minimization equivalence between the constructed majoring functions and equation 8. This result is presented in the following theorem: Theorem 1. Let Λk and Σky be fixed values obtained in the (k)-th iteration of the optimization algorithm minimizing LII(Γ,Λ). Then, optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ), with respect to Γ is equivalent to optimizing the following convex function, which majorizes equation 8:
$$\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma, \Lambda^k) = \operatorname{tr}\!\big((C_S^k)^{-1}\Gamma\big) + \operatorname{tr}\!\big(M_S^k \Gamma^{-1}\big), \tag{9}$$
where $C_S^k$ and $M_S^k$ are defined as:
$$C_S^k := \Big(L^{\top}\big(\Sigma_y^k\big)^{-1}L\Big)^{-1}, \qquad M_S^k := \frac{1}{T}\sum_{t=1}^{T} x^k(t)\,x^k(t)^{\top}. \tag{10}$$
Similarly, optimizing $\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda)$ with respect to $\Lambda$ is equivalent to optimizing the following convex majorizing function:
$$\mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k, \Lambda) = \operatorname{tr}\!\big((C_N^k)^{-1}\Lambda\big) + \operatorname{tr}\!\big(M_N^k \Lambda^{-1}\big), \tag{11}$$
where $C_N^k$ and $M_N^k$ are defined as:
$$C_N^k := \Sigma_y^k, \qquad M_N^k := \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - Lx^k(t)\big)\big(y(t) - Lx^k(t)\big)^{\top}. \tag{12}$$
Proof. The proof is presented in Appendix A.
We continue by considering the optimization of the cost functionsLconvsource(Γ,Λk) andLconvnoise(Γk,Λ) with respect to Γ and Λ, respectively. Note that in case of source covariances with full structure, the solution of Lconvsource(Γ,Λk) with respect to Γ lies in the (N2 − N)/2 Riemannian manifold of positive definite (P.D.) matrices. This consideration enables us to invoke efficient methods from Riemannian geometry (see Petersen et al., 2006; Berger, 2012; Jost & Jost, 2008), which ensures that the solution at each step of the optimization is contained within the lower-dimensional solution space. Specifically, in order to optimize for the source covariance, the algorithm calculates the geometric mean between the previously obtained statistical model source covariance, CkS, and the source-space sample covariance matrix, MkS, in each iteration. Analogously, to update the noise covariance estimate, the algorithm calculates the geometric mean between the model noise covariance, CkN, and the empirical sensor-space residuals, M k N. The update rules obtained from this algorithm are presented in the following theorem:
Theorem 2. The cost functions Lconvsource(Γ,Λk) and Lconvnoise(Γk,Λ) are both strictly geodesically convex with respect to the P.D. manifold, and their optimal solution with respect to Γ and Λ, respectively, can be attained according to the two following update rules:
$$\Gamma^{k+1} \leftarrow (C_S^k)^{\frac{1}{2}}\Big((C_S^k)^{-1/2} M_S^k (C_S^k)^{-1/2}\Big)^{\frac{1}{2}} (C_S^k)^{\frac{1}{2}}, \tag{13}$$
$$\Lambda^{k+1} \leftarrow (C_N^k)^{\frac{1}{2}}\Big((C_N^k)^{-1/2} M_N^k (C_N^k)^{-1/2}\Big)^{\frac{1}{2}} (C_N^k)^{\frac{1}{2}}. \tag{14}$$
Proof. A detailed proof can be found in Appendix B.
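As an illustration of how equations 13 and 14 can be evaluated in practice, the following NumPy/SciPy sketch computes the geometric mean of a model covariance C and a sample covariance M; the helper name and the final symmetrization step are our own choices, not part of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def spd_geometric_mean(C, M):
    """Geometric mean of two P.D. matrices, as used in equations 13 and 14."""
    C_half = sqrtm(C)
    C_half_inv = np.linalg.inv(C_half)
    G = C_half @ sqrtm(C_half_inv @ M @ C_half_inv) @ C_half
    return (G + G.T) / 2  # symmetrize against numerical round-off
```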
Convergence of the resulting algorithm is shown in the following theorem.
Theorem 3. Optimizing the non-convex type-II ML cost function in equation 8, LII(Γ,Λ) with alternating update rules for Γ and Λ in equation 13 and equation 14 leads to an MM algorithm with guaranteed convergence guarantees.
Proof. A detailed proof can be found in Appendix C.
While Theorems 1–3 reflect a general joint learning algorithm, the assumption of sources with full covariance structure is often relaxed in practice. The next section will shed light on this important simplification by making a formal connection to SBL algorithms.
3.1 SPARSE BAYESIAN LEARNING WITH FULL NOISE MODELING
In brain source imaging, the assumption of full source covariance is often relaxed. Even if, technically, most parts of the brain are active at all times, and the concurrent activations of different brain regions can never be assumed to be fully uncorrelated, there are many experimental settings in which it is reasonable to assume only a small set of independent brain sources. Such sparse solutions are physiologically plausible in task-based analyses, where only a fraction of the brain’s macroscopic structures is expected to be consistently engaged. A common strategy in this case is to model independent sources through a diagonal covariance matrix. In the Type-II Bayesian learning framework, this simplification interestingly leads to sparsity of the resulting source distributions, as, at the optimum, many of the estimated source variances are zero. This mechanism is known as sparse Bayesian learning and is closely related to the more general concept of automatic relevance determination. Here, we adopt the SBL assumption for the sources, leading to Γ-updates previously described in the BSI literature under the name Champagne (Wipf & Nagarajan, 2009). As a novelty and main focus of this paper, we here equip the SBL framework with the capability to jointly learn full noise covariances through the geometric mean based update rule in equation 14. In the SBL framework, the N modeled brain sources are assumed to follow independent univariate Gaussian distributions with zero mean and distinct unknown variances γn: xn(t) ∼ N (0, γn), n = 1, . . . , N . In the SBL solution, the majority of variances is zero, thus effectively inducing spatial sparsity of the corresponding source activities. For FUN learning, we also impose a diagonal structure on the source covariance matrix, Γ = diag(γ), where γ = [γ1, . . . , γN ]>. By constraining Γ in equation 9
to the set of diagonal matrices, $\mathcal{W}$, we can show that the update rule in equation 13 for the source variances simplifies to the following form:
$$\gamma_n^{k+1} \leftarrow \sqrt{\frac{\big[M_S^k\big]_{n,n}}{\big[(C_S^k)^{-1}\big]_{n,n}}} = \sqrt{\frac{\frac{1}{T}\sum_{t=1}^{T}\big(x_n^k(t)\big)^2}{L_n^{\top}\big(\Sigma_y^k\big)^{-1}L_n}} \quad \text{for } n = 1, \ldots, N, \tag{15}$$
where Ln denotes the n-th column of the lead field matrix. Interestingly, equation 15 is identical to the update rule of the Champagne algorithm. A detailed derivation of equation 15 can be found in Appendix D.
Summarizing, the FUN learning approach, just like Champagne and other SBL algorithms, assumes independent Gaussian sources with individual variances (thus, diagonal source covariances), which are updated through equation 15. Departing from the classical SBL setting, which assumes the noise distribution to be known, FUN models noise with full covariance structure, which is updated using equation 14. Algorithm 1 summarizes the used update rules.

Algorithm 1: Full-structure noise (FUN) learning
Input: The lead field matrix $L \in \mathbb{R}^{M \times N}$ and the measurement vectors $y(t) \in \mathbb{R}^{M \times 1}$, $t = 1, \ldots, T$.
Result: The estimated prior source variances $[\gamma_1, \ldots, \gamma_N]^{\top}$, the noise covariance $\Lambda$, and the posterior mean $\mu_x(t)$ and covariance $\Sigma_x$ of the sources.
1. Set a random initial value for $\Lambda$ as well as $\gamma = [\gamma_1, \ldots, \gamma_N]^{\top}$, and construct $\Gamma = \operatorname{diag}(\gamma)$.
2. Calculate the statistical covariance $\Sigma_y = \Lambda + L\Gamma L^{\top}$.
Repeat:
3. Calculate the posterior mean as $\mu_x(t) = \Gamma L^{\top}(\Sigma_y)^{-1} y(t)$.
4. Calculate $C_S^k$ and $M_S^k$ based on equation 10, and update $\gamma_n$ for $n = 1, \ldots, N$ based on equation 15.
5. Calculate $C_N^k$ and $M_N^k$ based on equation 12, and update $\Lambda$ based on equation 14.
Until the stopping condition is satisfied.
6. Calculate the posterior covariance as $\Sigma_x = \Gamma - \Gamma L^{\top}(\Sigma_y)^{-1} L \Gamma$.
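To illustrate how the pieces of Algorithm 1 fit together, the following is a highly simplified NumPy/SciPy sketch of one possible implementation; it repeats the geometric-mean computation sketched after Theorem 2, uses naive matrix inversions, and makes up its own initialization and stopping rule, so it should not be read as the reference implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def fun_learning(L, Y, n_iter=200, tol=1e-8):
    """Simplified FUN learning loop: diagonal source prior, full-structure noise."""
    M, N = L.shape
    gamma = np.ones(N)                 # prior source variances (diagonal of Gamma)
    Lambda = np.eye(M)                 # full-structure noise covariance
    X = np.zeros((N, Y.shape[1]))
    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        Sigma_y = Lambda + L @ Gamma @ L.T
        Sigma_y_inv = np.linalg.inv(Sigma_y)
        X_new = Gamma @ L.T @ Sigma_y_inv @ Y                # posterior mean (eq. 5)
        num = np.mean(X_new ** 2, axis=1)                    # source update (eq. 15)
        den = np.einsum('mn,mk,kn->n', L, Sigma_y_inv, L)
        gamma = np.sqrt(num / den)
        R = Y - L @ X_new                                    # residuals -> M_N (eq. 12)
        M_N = (R @ R.T) / Y.shape[1]
        C_half = sqrtm(Sigma_y)                              # noise update (eq. 14)
        C_half_inv = np.linalg.inv(C_half)
        Lambda = C_half @ sqrtm(C_half_inv @ M_N @ C_half_inv) @ C_half
        Lambda = (Lambda + Lambda.T) / 2
        if np.linalg.norm(X_new - X) < tol * max(np.linalg.norm(X), 1.0):
            X = X_new
            break
        X = X_new
    return gamma, Lambda, X
```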
Note that various recent Type-II noise learning schemes for diagonal noise covariance matrices (Hashemi et al., 2020; Cai et al., 2020a) that are rooted in the concept of SBL can be also derived as special cases of FUN learning assuming diagonal source and noise covariances, i.e., Γ,Λ ∈ W . Specifically imposing diagonal structure on the noise covariance matrix for the FUN algorithm, Λ, results in identical noise variance update rules as derived in Cai et al. (2020a) for heteroscedastic, and in Hashemi et al. (2020) for homoscedastic noise. We explicitly demonstrate this connection in Appendix E. Here, we note that heteroscedasticity refers to the common phenomenon that measurements are contaminated with non-uniform noise levels across channels, while homoscedasticity only accounts for uniform noise levels.
4 NUMERICAL SIMULATIONS AND REAL DATA ANALYSIS
Source, Noise and Forward Model: We simulated a sparse set of N0 = 5 active brain sources that were placed at random positions on the cortex. To simulate the electrical neural activity of these sources, T = 200 identically and independently distributed (i.i.d) points were sampled from a Gaussian distribution, yielding sparse source activation vectors x(t). The resulting source distribution, represented as X = [x(1), . . . ,x(T )], was projected to the EEG sensors through application of lead field matrix as the forward operator: Ysignal = LX. The lead field matrix, L ∈ R58×2004, was generated using the New York Head model (Huang et al., 2016) taking into account the realistic anatomy and electrical tissue conductivities of an average human head. Further details regarding forward modeling is provided in Appendix F. Gaussian additive noise was randomly sampled from a zero-mean normal distribution with full covariance matrix Λ: e(t) ∈ RM×1 ∼ N (0,Λ), t = 1, . . . , T . This setting is further referred to as full-structure noise. Note that we also generated noise with diagonal covariance matrix, referred to as heteroscedastic noise, in order to investigate the effect of model violation on reconstruction performance. The
noise matrix $E = [e(1), \ldots, e(T)] \in \mathbb{R}^{M \times T}$ was normalized by its Frobenius norm and added to the signal matrix $Y_{\mathrm{signal}}$ as follows: $Y = Y_{\mathrm{signal}} + \big(\tfrac{(1-\alpha)\|Y_{\mathrm{signal}}\|_F}{\alpha\|E\|_F}\big)E$, where $\alpha$ determines the signal-to-noise ratio (SNR) in sensor space. Precisely, the SNR is obtained as $\mathrm{SNR} = 20\log_{10}\big(\alpha/(1-\alpha)\big)$. In the subsequently described experiments the following values of $\alpha$ were used: $\alpha \in \{0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.8\}$, which correspond to the following SNRs: $\mathrm{SNR} \in \{-12, -7.4, -5.4, -3.5, -1.7, 0, 1.7, 3.5, 5.4, 7.4, 12\}$ (dB). MATLAB codes for producing the results in the simulation study are uploaded here.
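A minimal NumPy sketch of this data-generation recipe is given below for illustration; the lead field L is assumed to be given, and the random full-structure noise covariance drawn here is a placeholder rather than the exact covariance used in the experiments.

```python
import numpy as np

def simulate_pseudo_eeg(L, n_active=5, T=200, alpha=0.5, seed=0):
    """Sparse sources, full-structure noise, and SNR scaling as described above."""
    rng = np.random.default_rng(seed)
    M, N = L.shape
    X = np.zeros((N, T))
    active = rng.choice(N, size=n_active, replace=False)
    X[active] = rng.standard_normal((n_active, T))        # i.i.d. Gaussian source activity
    Y_signal = L @ X
    A = rng.standard_normal((M, M))
    Lambda = A @ A.T / M                                   # random full-structure covariance
    E = rng.multivariate_normal(np.zeros(M), Lambda, size=T).T
    E /= np.linalg.norm(E, 'fro')                          # unit Frobenius norm
    Y = Y_signal + ((1 - alpha) / alpha) * np.linalg.norm(Y_signal, 'fro') * E
    return Y, X, Lambda, active                            # SNR = 20*log10(alpha/(1-alpha))
```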
Evaluation Metrics and Simulation Set-up: We applied the full-structure noise learning approach on the synthetic datasets described above to recover the locations and time courses of the active brain sources. In addition to our proposed approach, two further Type-II Bayesian learning schemes, namely Champagne with homo- and heteroscedastic noise learning (Hashemi et al., 2020; Cai et al., 2020a), were also included as benchmarks with respect to source reconstruction performance and noise covariance estimation accuracy. Source reconstruction performance was evaluated according to the earth mover’s distance (EMD) (Rubner et al., 2000)), the error in the reconstruction of the source time courses, the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source, and finally F1-measure score (Chinchor & Sundheim, 1993). A detailed definition of evaluation metrics is provided in Appendix F. To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices. Each simulation was carried out 100 times using different instances of X and E, and the mean and standard error of the mean (SEM) of each performance measure across repetitions was calculated. Convergence of the optimization programs for each run was defined if the relative change of the Frobenius-norm of the reconstructed sources between subsequent iterations was less than 10−8. A maximum of 1000 iterations was carried out if no convergence was reached beforehand.
Figure 1 shows two simulated datasets with five active sources in presence of full-structure noise (upper panel) as well as heteroscedastic noise (lower panel) at 0 (dB) SNR. Topographic maps depict the locations of the ground-truth active brain sources (first column) along with the source reconstruction result of three noise learning schemes assuming noise with homoscedastic (second column), heteroscedastic (third column), and full (fourth column) structure. For each algorithm, the estimated noise covariance matrix is also plotted above the topographic map. Source reconstruction performance was measured in terms of EMD and time course correlation (Corr), and is summarized in the table next to each panel. Besides, the accuracy of the noise covariance matrix reconstruction was measured on terms of Λsim and NMSE. Results are included in the same table. Figure 1 (upper panel) allows for a direct comparison of the estimated noise covariance matrices obtained from the three different noise learning schemes. It can be seen that FUN learning can better capture the overall structure of ground truth full-structure noise as evidenced by lower NMSE and similarity errors compared to the heteroscedastic and homoscedastic algorithm variants that are only able to recover a diagonal matrix while enforcing the off-diagonal elements to zero. This behaviour results in higher spatial and temporal accuracy (lower EMD and time course error) for FUN learning compared to competing algorithms assuming diagonal noise covariance. This advantage is also visible in the topographic maps. The lower-panel of Figure 1 presents analogous results for the setting where the noise covariance is generated according to a heteroscedastic model. Note that the superior spatial and temporal reconstruction performance of the heteroscedastic noise learning algorithm compared to the full-structure scheme is expected here because the simulated ground truth noise is indeed heteroscedastic. The full-structure noise learning approach, however, provides fairly reasonable performance in terms of EMD, time course correlation (corr), and Λsim, although it is designed to estimate a full-structure noise covariance matrix. The convergence behaviour of all three noise learning variants is also illustrated in Figure 1. Note that the full-structure noise learning approach eventually reaches lower negative log-likelihood values in both scenarios, namely full-structure and heteroscedastic noise.
Figure 2 shows the EMD, the time course reconstruction error, the EUCL and the F1 measure score incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic
(green) and full-structure (blue) noise covariances for a range of 10 SNR values. The upper panel represents the evaluation metrics for the setting where the noise covariance is full-structure model, while the lower-panel depicts the same metric for simulated noise with heteroscedastic diagonal covariance. Concerning the first setting, FUN learning consistently outperforms its homoscedastic and heteroscedastic counterparts according to all evaluation metrics in particular in low-SNR settings. Consequently, as the SNR decreases, the gap between FUN learning and the two other variants increases. Conversely, heteroscedastic noise learning shows an improvement over FUN learning according to all evaluation metrics when the simulated noise is indeed heteroscedastic. However, note that the magnitude of this improvement is not as large as observed for the setting where the noise covariance is generated according to a full-structure model and then is estimated using the FUN approach.
Analysis of Auditory Evoked Fields (AEF): Figure 3 shows the reconstructed sources of the Auditory Evoked Fields (AEF) versus number of trials from a single representative subject using FUN learning algorithm. Further details on this dataset can be found in Appendix G. We tested the reconstruction performance of FUN learning with the number of trials limited to 1, 2, 12, 63 and 120. Each reconstruction was performed 30 times with the specific trials themselves chosen as a random subset of all available trials. As the subplots for different trials demonstrate, FUN learning algorithm is able to correctly localize bilateral auditory activity to Heschel’s gyrus, which is the characteristic location of the primary auditory cortex, under a few trials or even a single trial.
5 DISCUSSION
This paper focused on sparse regression within the hierarchical Bayesian regression framework and its application in EEG/MEG brain source imaging. To this end we developed an algorithm, which is, however, suitable for a much wider range of applications. What is more, the same concepts used here for full-structure noise learning could be employed in other contexts where hyperparameters like kernel widths in Gaussian process regression (Wu et al., 2019) or dictionary elements in the dictionary learning problem (Dikmen & Févotte, 2012) are to be inferred. Besides, using FUN learning algorithm may also prove useful for practical scenarios in which model residuals are expected to be correlated, e.g., probabilistic canonical correlation analysis (CCA) (Bach & Jordan, 2005), spectral independent component analysis (ICA) (Ablin et al., 2020), wireless communication (Prasad et al., 2015; Gerstoft et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), robust portfolio optimization in finance (Feng et al., 2016), graph learning (Kumar et al., 2020), thermal field reconstruction (Flinth & Hashemi, 2018), and brain functional imaging (Wei et al., 2020).
Noise learning has also attracted attention in functional magnetic resonance imaging (fMRI) (Cai et al., 2016; Shvartsman et al., 2018; Cai et al., 2019b; 2020b; Wei et al., 2020), where various models like matrix-normal (MN), factor analysis (FA), and Gaussian-process (GP) regression have been proposed. The majority of the noise learning algorithms in the fMRI literature rely on the EM framework, which is quite slow in practice and has convergence guarantees only under certain strong conditions. In contrast to these existing approaches, our proposed framework not only applies to the models considered in these papers, but also benefits from theoretically proven convergence guarantees. To be more specific, we showed in this paper that FUN learning is an instance of the wider class of majorization-minimization (MM) framework, for which provable fast convergence is guaranteed. It is worth emphasizing our contribution within the MM optimization context as well. In many MM implementations, surrogate functions are minimized using an iterative approach. Our proposed algorithm, however, obtains a closed-form solution for the surrogate function in each step, which further advances its efficiency.
In the context of BSI, Engemann & Gramfort (2015) proposed a method for selecting a single regularization parameter based on cross-validation and maximum-likelihood estimation, while Huizenga et al. (2002); De Munck et al. (2002); Bijma et al. (2003); De Munck et al. (2004); Ahn & Jun (2011); Jun et al. (2006) and Plis et al. (2006) assume more complex spatiotemporal noise covariance structures. A common limitation of these works is, however, that the noise level is not estimated as part of the source reconstruction problem on task-related data but from separate noise recordings. Our proposed algorithm substantially differs in this respect, as it learns the noise covariance jointly with the brain source distribution. Note that The idea of joint estimation of brain source activity and noise covariance has been previously proposed for Type-I learning methods in (Massias et al., 2018; Bertrand et al., 2019). In contrast to these Type-I methods, FUN is a Type-II method, which learns the prior source distribution as part of the model fitting. Type-II methods have been reported to yield consistently superior results than Type-I methods (Owen et al., 2012; Cai et al., 2019a; 2020a; Hashemi et al., 2020). Our numerical results show that the same hold also for FUN learning, which performs on par or better than existing variants from the Type-II family (including conventional Champagne) in this study. We plan to provide a formal comparison of the performance of noise learning within Type-I and Type-II estimation in our future work.
While being broadly applicable, our approach is also limited by a number of factors. Although Gaussian noise distributions are commonly justified, it would be desirable to also include more robust (e.g., heavy-tailed) non-Gaussian noise distributions in our framework. Another limitation is that the superior performance of the full-structure noise learning technique comes at the expense of higher computational complexity compared to the variants assuming homoscedastic or heteroscedastic strucutre. Besides, signals in real-world scenarios often lie in a lower-dimensional space compared to the original high-dimensional ambient space due to the particular correlations that inherently exist in the structure of the data. Therefore, imposing physiologically plausible constraints on the noise model, e.g., low-rank or Toeplitz structure, not only provides side information that can be leveraged for the reconstruction but also reduces the computational cost in two ways: a) by reducing the number of parameters and b) by taking advantage of efficient implementations using circular embeddings and the fast Fourier transform (Babu, 2016). Exploring efficient ways to incorporate these structural assumptions within a Riemannian framework is another direction of future work.
6 CONCLUSION
This paper proposes an efficient optimization algorithm for jointly estimating Gaussian regression parameter distributions as well as Gaussian noise distributions with full covariance structure within a hierarchical Bayesian framework. Using the Riemannian geometry of positive definite matrices, we derived an efficient algorithm for jointly estimating source and noise covariances. The benefits of our proposed framework were evaluated within an extensive set of experiments in the context of electromagnetic brain source imaging inverse problem and showed significant improvement upon state-of-the-art techniques in the realistic scenario where the noise has full covariance structure. The performance of our method is assessed through a real data analysis for the auditory evoked field (AEF) dataset.
A PROOF OF THEOREM 1
Proof. We start the proof by recalling equation 8:
$$\mathcal{L}_{\mathrm{II}}(\Gamma,\Lambda) = -\log p(Y\,|\,\Gamma,\Lambda) = \log|\Sigma_y| + \frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t). \tag{16}$$
The upper bound on the $\log|\Sigma_y|$ term can be directly inferred from the concavity of the log-determinant function and its first-order Taylor expansion around the value from the previous iteration, $\Sigma_y^k$, which provides the following inequality (Sun et al., 2017, Example 2):
$$\log|\Sigma_y| \le \log\big|\Sigma_y^k\big| + \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\big(\Sigma_y - \Sigma_y^k\big)\Big] = \log\big|\Sigma_y^k\big| + \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\Sigma_y\Big] - \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\Sigma_y^k\Big]. \tag{17}$$
Note that the first and last terms in equation 17 do not depend on Γ; hence, they can be ignored in the optimization procedure. Now, we decompose Σy into two terms, each of which only contains either the noise or source covariances:
$$\operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\Sigma_y\Big] = \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\big(\Lambda + L\Gamma L^{\top}\big)\Big] = \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\Lambda\Big] + \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}L\Gamma L^{\top}\Big]. \tag{18}$$
In the next step, we decompose the second term in equation 8, $\frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t)$, into two terms, each of which is a function of either only the noise or only the source covariances. To this end, we exploit the following relationship between sensor- and source-space covariances:
$$\frac{1}{T}\sum_{t=1}^{T} y(t)^{\top}\Sigma_y^{-1}y(t) = \frac{1}{T}\sum_{t=1}^{T} x^k(t)^{\top}\Gamma^{-1}x^k(t) + \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - Lx^k(t)\big)^{\top}\Lambda^{-1}\big(y(t) - Lx^k(t)\big). \tag{19}$$
By combining equation 18 and equation 19, rearranging the terms, and ignoring all terms that do not depend on $\Gamma$, we have:
$$\mathcal{L}_{\mathrm{II}}(\Gamma) \le \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}L\Gamma L^{\top}\Big] + \frac{1}{T}\sum_{t=1}^{T} x^k(t)^{\top}\Gamma^{-1}x^k(t) + \mathrm{const} = \operatorname{tr}\!\big((C_S^k)^{-1}\Gamma\big) + \operatorname{tr}\!\big(M_S^k\Gamma^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) + \mathrm{const}, \tag{20}$$
where $C_S^k = \big(L^{\top}(\Sigma_y^k)^{-1}L\big)^{-1}$ and $M_S^k = \frac{1}{T}\sum_{t=1}^{T} x^k(t)\,x^k(t)^{\top}$. Note that constant values in equation 20 do not depend on $\Gamma$; hence, they can be ignored in the optimization procedure. This proves the equivalence of equation 8 and equation 9 when the optimization is performed with respect to $\Gamma$.
The equivalence of equation 8 and equation 11 can be shown analogously, with the difference that we only focus on noise-related terms in equation 18 and equation 19:
$$\mathcal{L}_{\mathrm{II}}(\Lambda) \le \operatorname{tr}\Big[\big(\Sigma_y^k\big)^{-1}\Lambda\Big] + \frac{1}{T}\sum_{t=1}^{T} \big(y(t) - Lx^k(t)\big)^{\top}\Lambda^{-1}\big(y(t) - Lx^k(t)\big) + \mathrm{const} = \operatorname{tr}\!\big((C_N^k)^{-1}\Lambda\big) + \operatorname{tr}\!\big(M_N^k\Lambda^{-1}\big) + \mathrm{const} = \mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda) + \mathrm{const}, \tag{21}$$
where $C_N^k = \Sigma_y^k$ and $M_N^k = \frac{1}{T}\sum_{t=1}^{T}\big(y(t) - Lx^k(t)\big)\big(y(t) - Lx^k(t)\big)^{\top}$. Constant values in equation 21 do not depend on $\Lambda$; hence, they can again be ignored in the optimization procedure. Summarizing, we have shown that optimizing equation 8 is equivalent to optimizing $\mathcal{L}^{\mathrm{conv}}_{\mathrm{noise}}(\Gamma^k,\Lambda)$ and $\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k)$, which concludes the proof.
B PROOF OF THEOREM 2
Before presenting the proof, the subsequent definitions and propositions are required: Definition 4 (Geodesic path). Let M be a Riemannian manifold, i.e., a differentiable manifold whose tangent space is endowed with an inner product that defines local Euclidean structures. Then, a geodesic between two points onM, denoted by p0,p1 ∈M, is defined as the shortest connecting path between those two points along the manifold, ζl(p0,p1) ∈ M for l ∈ [0, 1], where l = 0 and l = 1 defines the starting and end points of the path, respectively.
In the current context, ζl(p0,p1) defines a geodesic curve on the positive definite (P.D.) manifold joining two P.D. matrices, P0,P1 > 0. The specific pairs of matrices we will deal with are {CkS,MkS} and {CkN,MkN}. Definition 5 (Geodesic on the P.D. manifold). Geodesics on the manifold of P.D. matrices can be shown to form a cone within the embedding space. We denote this manifold by S++. Assume two P.D. matrices P0,P1 ∈ S++. Then, for l ∈ [0, 1], the geodesic curve joining P0 to P1 is defined as (Bhatia, 2009, Chapter. 6):
$$\xi_l(P_0, P_1) = (P_0)^{\frac{1}{2}}\Big((P_0)^{-1/2}P_1(P_0)^{-1/2}\Big)^{l}(P_0)^{\frac{1}{2}}, \qquad l \in [0, 1]. \tag{22}$$
Note that $P_0$ and $P_1$ are obtained as the starting and end points of the geodesic path by choosing $l = 0$ and $l = 1$, respectively. The midpoint of the geodesic, obtained by setting $l = \frac{1}{2}$, is called the geometric mean. Note that, according to Definition 5, the following equality holds:
$$\xi_l(\Gamma_0,\Gamma_1)^{-1} = \Big((\Gamma_0)^{1/2}\big((\Gamma_0)^{-1/2}\Gamma_1(\Gamma_0)^{-1/2}\big)^{l}(\Gamma_0)^{1/2}\Big)^{-1} = (\Gamma_0)^{-1/2}\big((\Gamma_0)^{1/2}\Gamma_1^{-1}(\Gamma_0)^{1/2}\big)^{l}(\Gamma_0)^{-1/2} = \xi_l(\Gamma_0^{-1},\Gamma_1^{-1}). \tag{23}$$
Definition 6 (Geodesic convexity). Let $p_0$ and $p_1$ be two arbitrary points on a subset $\mathcal{A}$ of a Riemannian manifold $\mathcal{M}$. Then a real-valued function $f : \mathcal{A} \to \mathbb{R}$ is called geodesic convex (g-convex) if the following relation holds:
$$f\big(\zeta_l(p_0, p_1)\big) \le l\, f(p_0) + (1-l)\, f(p_1), \tag{24}$$
where $l \in [0, 1]$ and $\zeta_l(p_0, p_1)$ denotes the geodesic path connecting the two points $p_0$ and $p_1$ as defined in Definition 4. Thus, in analogy to classical convexity, the function $f$ is g-convex if every geodesic $\zeta(p_0, p_1)$ of $\mathcal{M}$ between $p_0, p_1 \in \mathcal{A}$ lies in the g-convex set $\mathcal{A}$. Note that the set $\mathcal{A} \subset \mathcal{M}$ is called g-convex if any geodesic joining an arbitrary pair of its points lies completely in $\mathcal{A}$.
Remark 7. Note that g-convexity is a generalization of classical (linear) convexity to non-Euclidean (non-linear) geometry and metric spaces. Therefore, it is straightforward to show that all convex functions in Euclidean geometry are also g-convex, where the geodesics between pairs of matrices are simply line segments:
$$\zeta_l(p_0, p_1) = l\, p_0 + (1-l)\, p_1. \tag{25}$$
For the sake of brevity, we omit a detailed theoretical introduction of g-convexity, and only borrow a result from Zadeh et al. (2016); Sra & Hosseini (2015). Interested readers are referred to Wiesel et al. (2015, Chapter 1) for a gentle introduction to this topic, and Papadopoulos (2005, Chapter. 2) Rapcsak (1991); Ben-Tal (1977); Liberti (2004); Pallaschke & Rolewicz (2013); Bonnabel & Sepulchre (2009); Moakher (2005); Sra & Hosseini (2016); Vishnoi (2018) for more in-depth technical details.
Now we are ready to state the proof, which parallels the one provided in Zadeh et al. (2016, Theorem. 3).
Proof. We only show the proof for Lconvsource(Γ,Λk). The proof for Lconvnoise(Γk,Λ) can be presented analogously; and therefore, is omitted here for brevity. We proceed in two steps. First, we limit our attention to P.D. manifolds and express equation 24 in terms of geodesic paths and functions that lie on this particular space. We then show that Lconvsource(Γ,Λk) is strictly g-convex on this specific domain. In the second step, we then derive the updates rules proposed in equation 13 and equation 14.
B.1 PART I: PROVING G-CONVEXITY OF THE MAJORIZING COST FUNCTIONS
We consider geodesics along the P.D. manifold by setting $\zeta_l(p_0,p_1)$ to $\xi_l(\Gamma_0,\Gamma_1)$ as presented in Definition 5, and define $f(\cdot)$ to be $f(\Gamma) = \operatorname{tr}(C_S^k\Gamma) + \operatorname{tr}(M_S^k\Gamma^{-1})$, representing the cost function $\mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k)$. We now show that $f(\Gamma)$ is strictly g-convex on this specific domain. For continuous functions as considered in this paper, fulfilling equation 24 for $f(\Gamma)$ and $\xi_l(\Gamma_0,\Gamma_1)$ with $l = 1/2$ is sufficient to prove strict g-convexity:
$$\operatorname{tr}\big(C_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)\big) + \operatorname{tr}\big(M_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\big) < \tfrac{1}{2}\operatorname{tr}\big(C_S^k\Gamma_0\big) + \tfrac{1}{2}\operatorname{tr}\big(M_S^k\Gamma_0^{-1}\big) + \tfrac{1}{2}\operatorname{tr}\big(C_S^k\Gamma_1\big) + \tfrac{1}{2}\operatorname{tr}\big(M_S^k\Gamma_1^{-1}\big). \tag{26}$$
Given $C_S^k \in \mathcal{S}_{++}$, i.e., $C_S^k \succ 0$, and the operator inequality (Bhatia, 2009, Chapter 4)
$$\xi_{1/2}(\Gamma_0,\Gamma_1) \prec \tfrac{1}{2}\Gamma_0 + \tfrac{1}{2}\Gamma_1, \tag{27}$$
we have:
$$\operatorname{tr}\big(C_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)\big) < \tfrac{1}{2}\operatorname{tr}\big(C_S^k\Gamma_0\big) + \tfrac{1}{2}\operatorname{tr}\big(C_S^k\Gamma_1\big), \tag{28}$$
which is derived by multiplying both sides of equation 27 with $C_S^k$ followed by taking the trace on both sides.
Similarly, we can write the operator inequality for $\{\Gamma_0^{-1}, \Gamma_1^{-1}\}$ using equation 23 as:
$$\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1} = \xi_{1/2}(\Gamma_0^{-1},\Gamma_1^{-1}) \prec \tfrac{1}{2}\Gamma_0^{-1} + \tfrac{1}{2}\Gamma_1^{-1}. \tag{29}$$
Multiplying both sides of equation 29 by $M_S^k \in \mathcal{S}_{++}$, and applying the trace operator on both sides, leads to:
$$\operatorname{tr}\big(M_S^k\,\xi_{1/2}(\Gamma_0,\Gamma_1)^{-1}\big) < \tfrac{1}{2}\operatorname{tr}\big(M_S^k\Gamma_0^{-1}\big) + \tfrac{1}{2}\operatorname{tr}\big(M_S^k\Gamma_1^{-1}\big). \tag{30}$$
Summing up equation 28 and equation 30 proves equation 26 and concludes the first part of the proof.
B.2 PART II: DETAILED DERIVATION OF THE UPDATE RULES IN EQUATIONS 13 AND 14
We now present the second part of the proof by deriving the update rules in equations 13 and 14. Since the cost function Lconvsource(Γ,Λk) is strictly g-convex, its optimal solution in the k-th iteration
is unique. More concretely, the optimum can be analytically derived by taking the derivative of equation 9 and setting the result to zero as follows:
$$\nabla \mathcal{L}^{\mathrm{conv}}_{\mathrm{source}}(\Gamma,\Lambda^k) = (C_S^k)^{-1} - \Gamma^{-1}M_S^k\Gamma^{-1} = 0, \tag{31}$$
which results in
$$\Gamma\,(C_S^k)^{-1}\,\Gamma = M_S^k. \tag{32}$$
Equation 32 is known as a Riccati equation, and its solution is the geometric mean between $C_S^k$ and $M_S^k$ (Davis et al., 2007; Bonnabel & Sepulchre, 2009):
$$\Gamma^{k+1} = (C_S^k)^{\frac{1}{2}}\Big((C_S^k)^{-1/2}M_S^k(C_S^k)^{-1/2}\Big)^{\frac{1}{2}}(C_S^k)^{\frac{1}{2}}.$$
The update rule for the full noise covariance matrix can be derived analogously:
$$\Lambda^{k+1} = (C_N^k)^{\frac{1}{2}}\Big((C_N^k)^{-1/2}M_N^k(C_N^k)^{-1/2}\Big)^{\frac{1}{2}}(C_N^k)^{\frac{1}{2}}.$$
Remark 8. Note that the obtained update rules are closed-form solutions for the surrogate cost functions, equations 9 and 11, which stands in contrast to conventional majorization minimization algorithms (see section C in the appendix), which require iterative procedures in each step of the optimization.
Deriving the update rules in equation 13 and equation 14 concludes the second part of the proof of Theorem 2.
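For illustration, the following short NumPy/SciPy check confirms numerically that the geometric mean in equation 13 satisfies the Riccati equation 32 for randomly drawn P.D. matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)

def random_spd(m):
    A = rng.standard_normal((m, m))
    return A @ A.T + m * np.eye(m)

C, M = random_spd(5), random_spd(5)
C_half = sqrtm(C)
C_half_inv = np.linalg.inv(C_half)
Gamma = C_half @ sqrtm(C_half_inv @ M @ C_half_inv) @ C_half   # geometric mean (eq. 13)
residual = Gamma @ np.linalg.inv(C) @ Gamma - M                 # Riccati residual (eq. 32)
assert np.linalg.norm(residual) < 1e-8 * np.linalg.norm(M)
```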
C PROOF OF THEOREM 3
In the following, we provide proof for Theorem 3 by showing that alternating update rules for Γ and Λ in equation 13 and equation 14 are guaranteed to converge to a local minimum of the Bayesian Type-II likelihood equation 8. In particular, we will prove that FUN learning is an instance of the general class of majorization-minimization (MM) algorithms, for which this property follows by construction. To this end, we first briefly review theoretical concepts behind the majorizationminimization (MM) algorithmic framework (Hunter & Lange, 2004; Razaviyayn et al., 2013; Jacobson & Fessler, 2007; Wu et al., 2010).
C.1 REQUIRED CONDITIONS FOR MAJORIZATION-MINIMIZATION ALGORITHMS
MM encompasses a family of iterative algorithms for optimizing general non-linear cost functions. The main idea behind MM is to replace the original cost function in each iteration by an upper bound, also known as majorizing function, whose minimum is easy to find. The MM class covers a broad range of common optimization algorithms such as convex-concave procedures (CCCP) and proximal methods (Sun et al., 2017, Section IV), (Mjolsness & Garrett, 1990; Yuille & Rangarajan, 2003; Lipp & Boyd, 2016). Such algorithms have been applied in various domains such as brain source imaging (Hashemi & Haufe, 2018; Bekhti et al., 2018; Cai et al., 2020a; Hashemi et al., 2020), wireless communication systems with massive MIMO technology (Masood et al., 2016; Haghighatshoar & Caire, 2017; Khalilsarai et al., 2020), and non-negative matrix factorization (Fagot et al., 2019). Interested readers are referred to Sun et al. (2017) for an extensive list of applications on MM.
The problem of minimizing a continuous function f(u) within a closed convex set U ⊂ Rn:
$$\min_{u}\ f(u) \quad \text{subject to} \quad u \in \mathcal{U}, \tag{33}$$
within the MM framwork can be summarized as follows. First, construct a continuous surrogate function g(u|uk) that majorizes, or upper-bounds, the original function f(u) and coincides with f(u) at a given point uk:
[A1] $g(u^k\,|\,u^k) = f(u^k) \quad \forall\, u^k \in \mathcal{U}$
[A2] $g(u\,|\,u^k) \ge f(u) \quad \forall\, u, u^k \in \mathcal{U}$.
Second, starting from an initial value u0, generate a sequence of feasible points u1,u2, . . . ,uk,uk+1 as solutions of a series of successive simple optimization problems, where
[A3] uk+1 := arg min u∈U g(u|uk) .
If a surrogate function fulfills conditions [A1]–[A3], then the value of the cost function f decreases in each iteration: f(uk+1) ≤ f(uk). For the smooth functions considered in this paper, we further require that the derivatives of the original and surrogate functions coincide at uk:
[A4] ∇g(uk|uk) = ∇f(uk) ∀ uk ∈ U .
We can then formulate the following theorem:
Theorem 9. Assume that an MM algorithm fulfills conditions [A1]–[A4]. Then, every limit point of the sequence of minimizers generated in [A3], is a stationary point of the original optimization problem in equation 33.
Proof. A detailed proof is provided in Razaviyayn et al. (2013, Theorem 1).
C.2 DETAIL DERIVATION OF THE PROOF OF THEOREM 3
We now show that FUN learning is an instance of majorization-minimization as defined above, which fulfills Theorem 9.
Proof. We need to prove that conditions [A1]–[A4] are fulfilled for FUN learning. To this end, we recall the upper bound on log |Σy| in equation 17, which fulfills condition [A2] since it majorizes log |Σy| as a result of the concavity of the log-determinant function and its first-order Taylor expansion around Σky. Besides, it automatically satisfies conditions [A1] and [A4] by construction, because the majorizing function in equation 17 is obtained through a Taylor expansion around Σky. Concretely, [A1] is satisfied because the equality in equation 17 holds for Σy = Σky. Similarly, [A4]
is satisfied because the gradient of log |Σy| at point Σky, ( Σky )−1 defines the linear Taylor approxi-
mation log ∣∣Σky∣∣+ tr [(Σky)−1 (Σy −Σky)]. Thus, both gradients coincide in Σky by construction.
Now, we prove that [A3] can be satisfied by showing that Lconvsource(Γ,Λk) reaches its global minimum in each MM iteration. This is guaranteed if Lconvsource(Γ,Λk) can be shown to be convex or g-convex with respect to Γ. To this end, we first require the subsequent proposition:
Proposition 10. Any local minimum of a g-convex function over a g-convex set is a global minimum.
Proof. A detailed proof is presented in Rapcsak (1991, Theorem 2.1).
Given the proof presented in appendix B.1, we can conclude that equation 20 is g-convex; hence, any local minimum ofLconvsource(Γ,Λk) is a global minimum according to Proposition 10. This proves that condition [A3] is fulfilled and completes the proof that the optimization of equation 8 with respect to Γ using the convex surrogate cost function equation 9 leads to an MM algorithm. For the sake of brevity, we omit the proof for the optimization with respect to Λ based on the convex surrogate function in equation 11, Lconvnoise(Γk,Λ), as it can be presented, analogously.
D DERIVATION OF CHAMPAGNE AS A SPECIAL CASE OF FUN LEARNING
We start the derivation of update rule equation 15 by constraining Γ to the set of diagonal matrices W: Γ = diag(γ), where γ = [γ1, . . . , γN ]>. We continue by rewriting the constrained optimization with respect to the source covariance matrix,
Γ^{k+1} = arg min_{Γ ∈ W, Λ = Λ^k}  tr(C_S^k Γ) + tr(M_S^k Γ^{-1}) ,   (34)

as follows:

γ^{k+1} = arg min_{γ, Λ = Λ^k}  diag[(C_S^k)^{-1}] γ + diag[M_S^k] γ^{-1}  =:  L^diag_source(γ | γ^k) ,   (35)

where γ^{-1} = [γ_1^{-1}, . . . , γ_N^{-1}]^⊤ is defined as the element-wise inversion of γ. The optimization with respect to the scalar source variances is then carried out by taking the derivative of equation 35 with respect to γ_n, for n = 1, . . . , N, and setting it to zero,

∂/∂γ_n ( [(C_S^k)^{-1}]_{n,n} γ_n + [M_S^k]_{n,n} γ_n^{-1} ) = [(C_S^k)^{-1}]_{n,n} − (1/γ_n^2) [M_S^k]_{n,n} = 0   for n = 1, . . . , N ,

where L_n denotes the n-th column of the lead field matrix. This yields the following update rule

γ_n^{k+1} ← sqrt( [M_S^k]_{n,n} / [(C_S^k)^{-1}]_{n,n} ) = sqrt( (1/T) Σ_{t=1}^T (x_n^k(t))^2 / ( L_n^⊤ (Σ_y^k)^{-1} L_n ) )   for n = 1, . . . , N ,
which is identical to the update rule of Champagne (Wipf & Nagarajan, 2009).
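For concreteness, the following is a minimal NumPy sketch of one such diagonal (Champagne-style) source-variance update. It is not the authors' implementation; the array names and shapes, and the way the posterior statistics and Σ_y^k are formed, are illustrative assumptions.

```python
import numpy as np

def champagne_gamma_update(gamma, L, Y, Lambda):
    """One Champagne-style update of the diagonal source variances gamma.

    gamma  : (N,) current source variances (diagonal of Gamma)
    L      : (M, N) lead field matrix
    Y      : (M, T) sensor data
    Lambda : (M, M) noise covariance, held fixed at Lambda^k
    """
    Sigma_y = Lambda + (L * gamma) @ L.T               # Sigma_y^k = Lambda + L Gamma L^T
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    X = (gamma[:, None] * L.T) @ Sigma_y_inv @ Y       # posterior source means x^k(t)
    num = np.mean(X ** 2, axis=1)                      # (1/T) sum_t x_n^k(t)^2
    den = np.einsum('mn,mk,kn->n', L, Sigma_y_inv, L)  # L_n^T (Sigma_y^k)^{-1} L_n
    return np.sqrt(num / den)
```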
E DERIVATION OF CHAMPAGNE WITH HETEROSCEDASTIC NOISE LEARNING
AS A SPECIAL CASE OF FUN LEARNING
Similar to Appendix D, we start by constraining Λ to the set of diagonal matrices W: Λ = diag(λ), where λ = [λ_1, . . . , λ_M]^⊤. We continue by reformulating the constrained optimization with respect to the noise covariance matrix,
Λ^{k+1} = arg min_{Λ ∈ W, Γ = Γ^k}  tr(C_N^k Λ) + tr(M_N^k Λ^{-1}) ,   (36)

as follows:

λ^{k+1} = arg min_{λ, Γ = Γ^k}  diag[(C_N^k)^{-1}] λ + diag[M_N^k] λ^{-1}  =:  L^diag_noise(λ | λ^k) ,   (37)

where λ^{-1} = [λ_1^{-1}, . . . , λ_M^{-1}]^⊤ is defined as the element-wise inversion of λ. The optimization with respect to the scalar noise variances then proceeds by taking the derivative of equation 37 with respect to λ_m, for m = 1, . . . , M, and setting it to zero,

∂/∂λ_m ( [(C_N^k)^{-1}]_{m,m} λ_m + [M_N^k]_{m,m} λ_m^{-1} ) = [(C_N^k)^{-1}]_{m,m} − (1/λ_m^2) [M_N^k]_{m,m} = 0   for m = 1, . . . , M .

This yields the following update rule:

λ_m^{k+1} ← sqrt( [M_N^k]_{m,m} / [(C_N^k)^{-1}]_{m,m} ) = sqrt( [ (1/T) Σ_{t=1}^T (y(t) − Lx^k(t))(y(t) − Lx^k(t))^⊤ ]_{m,m} / [(Σ_y^k)^{-1}]_{m,m} )   for m = 1, . . . , M ,   (38)
which is identical to the update rule of the Champagne with heteroscedastic noise learning as presented in Cai et al. (2020a).
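Analogously, a hedged sketch of the heteroscedastic (diagonal) noise update in equation 38, under the same illustrative assumptions as the source-variance sketch above:

```python
import numpy as np

def heteroscedastic_lambda_update(gamma, L, Y, lam):
    """One update of the diagonal noise variances lambda, with Gamma held fixed at Gamma^k."""
    Sigma_y = np.diag(lam) + (L * gamma) @ L.T
    Sigma_y_inv = np.linalg.inv(Sigma_y)
    X = (gamma[:, None] * L.T) @ Sigma_y_inv @ Y   # posterior source means x^k(t)
    R = Y - L @ X                                  # residuals y(t) - L x^k(t)
    num = np.mean(R ** 2, axis=1)                  # diagonal of the empirical residual covariance
    den = np.diag(Sigma_y_inv)                     # [(Sigma_y^k)^{-1}]_{m,m}
    return np.sqrt(num / den)
```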
Figure 4: Accuracy of the noise covariance matrix reconstruction incurred by three different noise learning approaches assuming homoscedastic (red), heteroscedastic (green) and full-structure (blue) noise covariances. The ground-truth noise covariance matrix is either full-structure (upper row) or heteroscedastic diagonal (lower row). Performance is assessed in terms of the Pearson correlation between the entries of the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim (left column). Shown is the similarity error 1 − Λsim. Further, the normalized mean squared error (NMSE) between Λ and Λ̂, defined as NMSE = ||Λ̂−Λ||2F /||Λ||2F is reported (right column).
F PSEUDO-EEG SIGNAL GENERATION
Our simulation setting is adapted from the EEG inverse problem, where brain activity is to be reconstructed from simulated pseudo-EEG data (Haufe & Ewald, 2016).
Forward Modeling: Populations of pyramidal neurons in the cortical gray matter are known to be the main drivers of the EEG signal (Hämäläinen et al., 1993; Baillet et al., 2001). Here, we use a realistic volume conductor model of the human head to model the linear relationship between primary electrical source currents generated within these populations and the resulting scalp surface potentials captured by EEG electrodes. The lead field matrix, L ∈ ℝ^{58×2004}, was computed with the finite element method using the New York Head model (Huang et al., 2016), taking into account the realistic anatomy and electrical tissue conductivities of an average human head. In this model, 2004 dipolar current sources were placed evenly on the cortical surface and 58 sensors were considered. Note that the orientation of all source currents was fixed to be perpendicular to the cortical surface, so that only scalar source amplitudes needed to be estimated.
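To make the forward model concrete, the following is a minimal NumPy sketch of generating pseudo-EEG data of the form y(t) = L x(t) + e(t). The dimensions follow the text, but the random lead field, the number of active sources, and the noise covariance are illustrative placeholders rather than the paper's exact simulation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 58, 2004, 200           # sensors, candidate sources, time samples

L = rng.standard_normal((M, N))   # placeholder for the New York Head lead field
L /= np.linalg.norm(L, axis=0)    # normalize each source column to unit norm

# sparse ground-truth source activity: a few active dipoles with random time courses
active = rng.choice(N, size=3, replace=False)
X = np.zeros((N, T))
X[active] = rng.standard_normal((3, T))

# full-structure noise: e(t) ~ N(0, Lambda) with a random positive-definite Lambda
A = rng.standard_normal((M, M))
Lambda = A @ A.T / M
E = np.linalg.cholesky(Lambda) @ rng.standard_normal((M, T))

Y = L @ X + E                     # pseudo-EEG measurements y(t) = L x(t) + e(t)
```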
Evaluation Metrics: Source reconstruction performance was evaluated according to the following metrics. First, the earth mover’s distance (EMD) (Rubner et al., 2000; Haufe et al., 2008) was used to quantify the spatial localization accuracy. The EMD measures the cost needed to transform two probability distributions defined on the same metric domain (in this case, distributions of the true and estimated sources defined in 3D Euclidean brain space) into each other. EMD scores were normalized to [0, 1]. Second, the error in the reconstruction of the source time courses was measured. To this end, the Pearson correlation between all pairs of simulated and reconstructed (i.e., those with non-zero activations) source time courses was assessed as the mean of the absolute correlations obtained for each source, after optimally matching simulated and reconstructed sources based on maximal absolute correlation. We also report another metric for evaluating the localization error as the average Euclidean distance (EUCL) (in mm) between each simulated source and the best (in terms of absolute correlations) matching reconstructed source. For assessing the recovery of the true support, we also compute F1-measure scores (Chinchor & Sundheim, 1993; van Rijsbergen, 1979): F1 = 2·TP / (P + TP + FP), where P denotes the number of true active sources, while TP and FP are the numbers of true and false positive predictions. Note that perfect support recovery, i.e., F1 = 1, is only achieved when there is a perfect correspondence between ground-truth and estimated support.
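A small sketch of the support-recovery F1 score as defined above is given below; the thresholding used to decide which estimated sources count as "active" is an illustrative choice, not part of the definition.

```python
import numpy as np

def support_f1(x_true, x_est, thresh=1e-6):
    """F1 = 2*TP / (P + TP + FP) for recovery of the active source set."""
    true_set = set(np.flatnonzero(np.abs(x_true) > thresh))
    est_set = set(np.flatnonzero(np.abs(x_est) > thresh))
    tp = len(true_set & est_set)     # correctly recovered active sources
    fp = len(est_set - true_set)     # spurious active sources
    p = len(true_set)                # number of truly active sources
    return 2.0 * tp / (p + tp + fp) if (p + tp + fp) > 0 else 0.0
```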
To evaluate the accuracy of the noise covariance matrix estimation, the following two metrics were calculated: the Pearson correlation between the original and reconstructed noise covariance matrices, Λ and Λ̂, denoted by Λsim, and the normalized mean squared error (NMSE) between Λ and Λ̂, defined as: NMSE = ||Λ̂ − Λ||2F /||Λ||2F . Similarity error was then defined as one minus the Pearson correlation: 1 − Λsim. Note that NMSE measures the reconstruction of the true scale of the noise covariance matrix, while Λsim is scale-invariant and hence only quantifies the overall structural similarity between simulated and estimated noise covariance matrices.
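The two noise-covariance metrics can be computed directly from the matrices; a minimal sketch follows, with the entry-wise Pearson correlation taken over all matrix entries as described above.

```python
import numpy as np

def noise_cov_metrics(Lambda_true, Lambda_est):
    """Similarity error (1 - Pearson correlation of entries) and NMSE between covariance matrices."""
    sim = np.corrcoef(Lambda_true.ravel(), Lambda_est.ravel())[0, 1]
    nmse = (np.linalg.norm(Lambda_est - Lambda_true, 'fro') ** 2
            / np.linalg.norm(Lambda_true, 'fro') ** 2)
    return 1.0 - sim, nmse
```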
Evaluating the accuracy of the noise covariance matrix estimation: Figure 4 depicts the accuracy with which the covariance matrix is reconstructed by three different noise learning approaches assuming noise with homoscedastic (red), heteroscedastic (green) and full (blue) structure. The ground truth noise covariance matrix either had full (upper row) or heteroscedastic (lower row) structure. Performance was measured in terms of similarity error and NMSE. Similar to the trend observed in Figure 2, full-structure noise learning leads to better noise covariance estimation accuracy (lower NMSE and similarity error) for the full-structure noise model, while superior reconstruction performance is achieved for heteroscedastic noise learning when the true noise covariance is heteroscedastic.
G FURTHER DETAILS ON AUDITORY EVOKED FIELDS (AEF) DATASET
The MEG data used in this article were acquired in the Biomagnetic Imaging Laboratory at the University of California San Francisco (UCSF) with a CTF Omega 2000 whole-head MEG system from VSM MedTech (Coquitlam, BC, Canada) with 1200 Hz sampling rate. The lead field for each subject was calculated with NUTMEG (Dalal et al., 2004) using a single-sphere head model (two spherical orientation lead fields) and an 8 mm voxel grid. Each column was normalized to have a norm of unity. The neural responses of one subject to an Auditory Evoked Fields (AEF) stimulus were localized. The AEF response was elicited with single 600 ms duration tones (1 kHz) presented binaurally. 120 trials were collected for AEF dataset. The data were first digitally filtered from 1 to 70 Hz to remove artifacts and DC offset, time-aligned to the stimulus, and then averaged across the following number of trials:{1,2,12, 63,120}. The pre-stimulus window was selected to be 100 ms to 5 ms and the post-stimulus time window was selected to be 60 ms to 180 ms, where 0 ms is the onset of the tone (Wipf et al., 2010; Dalal et al., 2011; Owen et al., 2012; Cai et al., 2019a). | 1. What is the focus of the paper, and what are the authors' key contributions?
2. What are the strengths of the proposed method, particularly in terms of its mathematical formulation and algorithmic aspects?
3. Are there any concerns or limitations regarding the novelty of the approach, considering previous works in the field?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content, including the abstract, experiments, and software implementation?
5. What additional information or improvements would strengthen the paper's contribution and value to the community? | Review | Review
Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models
Summary:
The paper argues that modeling the full covariance structure in a sparse Bayesian learning setting leads to significantly better results in EEG inverse problems. The paper details a majorization-minimization type algorithm leading to a set of fairly simple update rules. The proposed method is evaluated on simulated data.
Positive:
The proposed method is well motivated and the problem is highly relevant.
The mathematical details regarding the algorithm are presented in sufficient detail.
The paper is well written and easy to follow for the most part.
Experiments are reasonable and presented clearly.
Negative:
The abstract could be improved to more clearly describe the problem and contributions of the paper in a self-contained manner.
Has this particular problem (sparse bayesian regression with full covariance noise) not been considered by others? The main contribution, in my view, is algorithmic; which other algorithms have been used previously for this type of problem? (I think both ML-II and MCMC and possibly other methods have previously been used.) I would have liked a review and experimental comparison.
While the experimental evaluation is reasonable, I think the paper would benefit from a demonstration and benchmarking with competing approaches on a real data task.
Experiments on simulated data highlighting more clearly the algorithmic advantages of the proposed method would be appreciated.
I did not notice a link to software implementing the proposed method? Sharing software implementations will significantly strengthen the contribution and allow the community to reproduce the results.
Recommendation:
Weak reject. |
ICLR | Title
Monkeypox with Cross Infection Hypothesis via Epidemiological Model
Abstract
Monkeypox 2022, a newly re-emerging infectious disease induced by the monkeypox virus and structurally related to smallpox, has caused 59,606 active cases with 18 deaths up to September 15, 2022. To end this ongoing epidemic, there is a need for population-wide control policies such as reducing social interaction by keeping social distance, treating infected individuals, and restricting animals. We forecast the progression of the epidemic and derive an efficient control mechanism by formulating a mathematical model. The biological feasibility and dynamical behavior of the proposed model are then investigated, together with a sensitivity analysis that quantifies the effect of the various epidemic parameters on the spread of the disease. Subsequently, taking non-pharmaceutical and pharmaceutical intervention strategies as control measures, optimal control theory is applied to mitigate the fatality of the disease: to minimize the infectious population and reduce the cost of controls, we construct an objective functional and solve it by using Pontryagin’s maximum principle. Finally, extensive numerical simulations are performed to show the impact of the intervention mechanisms in controlling the transmission of the monkeypox epidemic.
1 INTRODUCTION
The infection of monkeypox is a contagious disease caused by an orthopoxvirus. This infection is zoonotic and was initially transmitted to humans by wild rodents in central and western Africa. But human-to-human spread (horizontal transmission) is also possible, particularly within the family home or in the context of care (Farahat et al., 2022). The monkeypox virus can be spread by direct contact with lesions on the skin or mucous membranes of a sick person, as well as by droplets (sneezing, saliva, sputters, etc.) (Singh et al., 2021). Generally, an individual can become infected through contact with a patient’s environment. It is, therefore, important that patients respect isolation measures throughout the illness. Humans can also become infected through active contact with animals (rodents and monkeys) (Oladoye, 2021). Usually, the monkeypox infection starts with fever, headaches, body aches, weakness, etc. (Deresinski, 2022). The symptoms may lead to the appearance of a blistering rash consisting of fluid-filled blisters that progress to dryness and crusting, then scarring and itching after two days. The blisters are most concentrated on the face, the hands, and the soles of the feet. The disease is more severe in children as well as in those who have weak immune systems. Historically, monkeypox was first identified in the 1970s, but with the recent re-emergence of the disease, cases have been reported in various countries around the globe (ASSESSMENT, 2022). Usually the monkeypox virus is transmitted through human interaction, but there is a significant risk of cross-infection (animal-to-human) spread (Petersen et al., 2019). Therefore, the hypothesis of cross-infection between humans and animals plays a significant role and cannot be neglected.
Modeling and forecasting with the aid of dynamical systems is a challenging domain in various disciplines, e.g., infectious disease epidemiology (Brauer, 2017; Saravanakumar et al., 2020; Guo et al., 2020), health sciences (Choi et al., 2016), and various other fields of applied science and technology (Rolnick et al., 2022), and has therefore attracted considerable attention from researchers, see for instance (Das et al., 2020b; Yin et al., 2021; Saha et al., 2021). Similarly, various models demonstrate different outlooks regarding the dynamical behavior of an epidemic (Busenberg & Cooke, 2012; Khajanchi et al., 2018; Das et al., 2020a). With the aid of these mathematical models, researchers want to understand the dynamics of a disease and then suggest strategies to control or completely eradicate the infection (Chen & Guo, 2016; Kumar et al., 2019). Despite the rich literature on infectious disease epidemiology, there have not been enough studies representing the temporal dynamics of monkeypox 2022, to the best of our knowledge. We try to formulate a model which describes the transmission of monkeypox 2022 to understand its dynamics and suggest a control mechanism with the aid of optimal control theory. We summarize our contributions in this work as follows:
• The cross infection between humans and animals plays a significant role in the dynamics of monkeypox virus transmission. We, therefore, propose a model based on the hypothesis of cross-infection between humans and animals. The model has two blocks: humans and animals.
• The first block describes the evolution of monkeypox in the human population, while the second block represents the evolution of the monkeypox virus among animals.
• Four time-dependent control measures are then introduced in the model to demonstrate the utilization of optimal control measures: to minimize the infectious individuals and maximize the recovered population. Particularly, reducing the risk of disease transmission by educating people to rise awareness of risk factors, treatment of infected individuals, and restrictions on animals.
2 RELATED WORK
The analysis of infectious diseases with the aid of dynamical systems is a fascinating outlook to predict the dynamics of an epidemic. In the history of infectious disease epidemiology, Kermack and McKendrick were the pioneers to develop the three-population-group epidemiological model (susceptible-infectious-recovered) (Kermack & McKendrick, 1927), where various population groups are employed to signify the infection, demonstrating their progression and interactions. The classical susceptible-infectious-recovered model formulated by Kermack and McKendrick has been modified by incorporating an exposed compartment, known as the susceptible-exposed-infectious-recovered model (Anderson & May, 1979), which is also extensively used to delineate the transmission of distinct diseases. Data-driven modeling methods have also been used to investigate the transmission dynamics of infectious diseases (Heesterbeek et al., 2015). It is worth mentioning that the idea of the classical susceptible-infectious-removed model has been further investigated by various researchers to observe the transmission dynamics of distinct epidemics (see for instance Flaxman et al. (2020); Samui et al. (2020); Britton et al. (2020)). Optimal control theory has been extensively used and is very common in infectious disease epidemiology (Rohith & Devika, 2020; Khajanchi et al., 2021). The adjustment of epidemic parameters in a feasible way, by taking the limits on the system to optimize a given function, can be applied with the help of control theory. Both non-pharmaceutical and pharmaceutical control measures can be used to control the infection. In particular, the non-pharmaceutical intervention strategies play a key role. Usually, with the help of optimal control analysis, we are able to know how to eradicate the disease.
The re-emerging infectious disease of monkeypox 2022 was reported in May 2022. Here, we are interested in formulating an extended susceptible–infected–recovered-type model with two compartmental blocks: humans and animals. We then discuss the qualitative analysis of the proposed two-block model. Further, we apply the theory of optimal control to understand the progression of monkeypox virus transmission. Since it is not merely a medical problem but also a public health concern, a combination of non-pharmaceutical and pharmaceutical interventions can be taken into account to propose a mechanism for the control of monkeypox virus transmission.
3 THE MODEL
We propose an epidemiological model for the dynamics of monkeypox based on the cross infection hypothesis: animal to human and human to human. The various compartmental populations of the model are divided into two blocks: human and animal. The first block represents the evolution of the human population, distributed into three epidemiological groups: sensitive (susceptible) individuals, individuals infected by monkeypox, and recovered individuals, while the second block represents the evolution of animals, divided into two classes: susceptible animals and infected animals. To symbolize the population groups, let us assume that S_h(t) represents the sensitive individuals at time t, which are not infected but have a chance to be infected at time t + ∆t (∆t is a small increment in time). M_h denotes the individuals infected with monkeypox, and R_h is the recovered individuals. Similarly, the susceptible and infected animals are denoted by S_a(t) and M_a(t), respectively. Due to the assumption of a homogeneously mixed population, for the successful transmission of the monkeypox virus the risky humans enter the infected human compartment at a rate β, and susceptible animals getting infected move to the infected animal compartment at a rate ϕ. An individual leaves the infected human class only after fully recovering or dying. The recovered individuals enter R_h. Moreover, the complete geometry of the epidemic problem is described by Figure 1, and thus the evolution of the disease is represented by the following deterministic system of differential equations:

dS_h(t)/dt = Φ_h − β M_h(t) S_h(t) − ξ M_a(t) S_h(t) − ϑ S_h(t),
dM_h(t)/dt = β M_h(t) S_h(t) + ξ M_a(t) S_h(t) − (ϑ + ϑ_1 + r) M_h(t),
dR_h(t)/dt = r M_h(t) − ϑ R_h(t),
dS_a(t)/dt = Φ_a − ϕ M_a(t) S_a(t) − α S_a(t),
dM_a(t)/dt = ϕ M_a(t) S_a(t) − (α + α_1) M_a(t),   (1)

with biologically feasible non-negative initial population sizes

S_h(0) > 0,  M_h(0) ≥ 0,  R_h(0) ≥ 0,  S_a(0) > 0,  M_a(0) ≥ 0.   (2)
In the above epidemic problem (1)-(2), the parameters are described as follows: the birth (recruitment) rates of the human and animal populations are denoted by Φ_h and Φ_a, respectively, while the human-to-human and animal-to-human monkeypox transmission rates are β and ξ. The natural death rate of a human is assumed to be ϑ, while the same rate for the animal population is denoted by α. The monkeypox virus transmission rate from one animal to another is assumed to be ϕ, and α_1 is the death rate that arises from monkeypox virus infection in the animal population. Moreover, the disease-induced death rate of a human is represented by ϑ_1, and r is the recovery rate of an infected human.
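To make the dynamics of system (1) concrete, the following is a minimal SciPy sketch that integrates the uncontrolled model. The parameter values and initial population sizes are illustrative placeholders, not the calibrated values of Table 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values (not those of Table 2)
Phi_h, Phi_a = 10.0, 5.0                   # human / animal recruitment rates
beta, xi, phi = 2e-4, 1e-4, 3e-4           # human-human, animal-human, animal-animal transmission
vartheta, vartheta1, r = 0.02, 0.01, 0.1   # human natural death, disease death, recovery
alpha, alpha1 = 0.05, 0.02                 # animal natural and disease-induced death

def monkeypox_rhs(t, y):
    Sh, Mh, Rh, Sa, Ma = y
    dSh = Phi_h - beta * Mh * Sh - xi * Ma * Sh - vartheta * Sh
    dMh = beta * Mh * Sh + xi * Ma * Sh - (vartheta + vartheta1 + r) * Mh
    dRh = r * Mh - vartheta * Rh
    dSa = Phi_a - phi * Ma * Sa - alpha * Sa
    dMa = phi * Ma * Sa - (alpha + alpha1) * Ma
    return [dSh, dMh, dRh, dSa, dMa]

sol = solve_ivp(monkeypox_rhs, (0.0, 100.0), [500.0, 10.0, 0.0, 100.0, 5.0], dense_output=True)
```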
To proceed, first, we show the mathematical, as well as, the biological feasibility of the proposed epidemic problem. To this end, we show the following result.
Proposition 1 The solution of the model (1)-(2) is positive and bounded.
3.1 DYNAMICAL ANALYSIS
In this section, we discuss the temporal dynamics of the model to find the stability conditions for the monkeypox epidemic model. We find the monkeypox-free equilibrium state for the developed model (1) and calculate the reproductive number. Let W_0 be the monkeypox-free equilibrium of the model; then W_0 = (Φ_h/ϑ, 0, 0, Φ_a/α, 0). Moreover, the reproductive parameter, denoted by R_o,
represents the average of the secondary infectious produced by an infective whenever put into a sensitive/susceptible individual. To calculate this quantity for the model reported in Eq.(1) by following (Van den Driessche & Watmough, 2002), let us assume that X = (Mh,Ma)⊤, then
dX/dt = F − V,

where F and V are the 2-by-2 variational matrices at the monkeypox-free equilibrium, defined as

F = [[ βΦ_h/ϑ ,  ξΦ_a/α ], [ 0 ,  ϕΦ_a/α ]],    V = [[ ϑ + ϑ_1 + r ,  0 ], [ 0 ,  α + α_1 ]].

The reproductive number is the spectral radius of the matrix FV^{-1} and takes the following form

R_o = R_h + R_a,    R_h = βΦ_h / ( ϑ (ϑ + ϑ_1 + r) ),    R_a = ϕΦ_a / ( α (α + α_1) ).
Since the reproductive number is the expected average number of secondary infections, it is concluded that whenever R_o < 1 the disease will die out, while it will spread if R_o > 1.
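As a quick sanity check, the closed-form expression for R_o can be evaluated directly; the numerical values below are the same illustrative placeholders used earlier, not the paper's calibrated parameters.

```python
def basic_reproduction_number(beta, Phi_h, vartheta, vartheta1, r, phi, Phi_a, alpha, alpha1):
    """R_o = R_h + R_a as given by the next-generation matrix of system (1)."""
    R_h = beta * Phi_h / (vartheta * (vartheta + vartheta1 + r))
    R_a = phi * Phi_a / (alpha * (alpha + alpha1))
    return R_h + R_a

R_o = basic_reproduction_number(2e-4, 10.0, 0.02, 0.01, 0.1, 3e-4, 5.0, 0.05, 0.02)
```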
3.2 BIOLOGICAL INTERPRETATION OF THE BASIC REPRODUCTIVE NUMBER
Since the initial spread of any epidemic is related to the reproductive number, we analyze the normalized sensitivity of the proposed system parameters. The sensitivity of the threshold quantity will enable us to recognize the most sensitive and effective parameters for disease transmission, because a small perturbation in the most sensitive parameter can produce a great influence on the associated epidemic model. To present the prediction for the prevalence of the monkeypox disease, reduction, and persistence in the transmission of infection, we perform sensitivity analysis of the basic reproductive number Ro. Let us assume that γ is any epidemic parameter, then the normalized forward sensitivity co-efficient (index) related to the basic reproductive number Ro is defined by:
Υ_γ^{R_o} = (∂R_o / ∂γ) × (γ / R_o).
It is clear from the formula of the normalized sensitivity coefficient that it may be dependent on or independent of the model parameters. We calculate the associated sensitivity indices, listed in Table 1. It can be observed that some of the indices of the model parameters are negative and some are positive. The negative and positive signs demonstrate that a perturbation of these parameters produces a decrease or an increase in the value of the basic reproductive number, respectively. For example, the forward sensitivity index of the parameter β is Υ_β^{R_o} = 0.3677, which indicates that if we increase the value of β by 10%, the value of the basic reproductive number R_o would increase by 3.677%. Similarly, the forward sensitivity indices of ϑ_1 and ϕ are 0.0288 and 0.6322, respectively, which implies that perturbing the values of ϑ_1 and ϕ by 10% would change the value of the basic reproductive number by 0.288% and 6.322%, respectively. On the other hand, the sensitivity indices of r, α, and α_1 are negatively associated with the basic reproductive number, i.e., increasing their values would decrease the value of R_o. If we increase the values of r, α, and α_1 by 10%, this results in a decrease of 15.961% in the value of R_o. In this analysis, we observed that the most effective parameters of the proposed epidemic problem are β, r, ϕ, α_1, and α; therefore, special attention is required for the parameters with highly sensitive indices to forecast the transmission of monkeypox disease. We now state the dynamics of the model by proving the following results.
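The normalized forward sensitivity indices can also be approximated numerically; below is a minimal sketch using central finite differences on the closed-form R_o, with illustrative parameter values (not those of Table 2).

```python
import numpy as np

def R_o(p):
    """Closed-form basic reproductive number of system (1); p is a dict of parameters."""
    R_h = p["beta"] * p["Phi_h"] / (p["vartheta"] * (p["vartheta"] + p["vartheta1"] + p["r"]))
    R_a = p["phi"] * p["Phi_a"] / (p["alpha"] * (p["alpha"] + p["alpha1"]))
    return R_h + R_a

def sensitivity_index(p, name, rel_step=1e-6):
    """Upsilon_gamma^{R_o} = (dR_o/dgamma) * (gamma / R_o) via a central difference."""
    base = R_o(p)
    h = rel_step * p[name]
    up, down = dict(p), dict(p)
    up[name] += h
    down[name] -= h
    dRo = (R_o(up) - R_o(down)) / (2 * h)
    return dRo * p[name] / base

params = {"beta": 2e-4, "Phi_h": 10.0, "vartheta": 0.02, "vartheta1": 0.01,
          "r": 0.1, "phi": 3e-4, "Phi_a": 5.0, "alpha": 0.05, "alpha1": 0.02}
indices = {k: sensitivity_index(params, k) for k in params}
```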
Theorem 1 If Ro < 1, then the dynamical system (1) is locally and globally asymptotically stable around the monkeypox-free equilibrium state of the model and unstable if it is greater than unity.
4 OPTIMAL CONTROL
The application of optimal control theory is one of the important theoretical analyses associated with infectious diseases. We use this tool to produce a proper control mechanism for eliminating monkeypox virus transmission. Our analysis is not limited to theory; we also perform numerical experiments to show the effect of the proposed control strategies on the dynamics of monkeypox virus transmission. The key goal is to reduce the numbers of infected humans and animals while maximizing the recovered humans using optimal control theory. The parameters, under certain assumptions, lead to the monkeypox model described by Eq.(1), which is a coupled system with five state variables (S_h(t), M_h(t), R_h(t), S_a(t), M_a(t)). We introduce four control measures µ_i(t) (i = 1, 2, . . . , 4) that control the number of risky and infected individuals externally over a given time frame.
4.1 REDUCING THE RISK OF HUMAN TO HUMAN AND ZOONOTIC TRANSMISSION
The main prevention measure for monkeypox is educating people to raise awareness of the risk factors and of the control measures they can take to reduce virus transmission. Given sufficient information, the population needs to maintain social distance, wear masks, follow the strategy of home isolation, etc. Thus, we introduce the control factors (1 − η_1µ_1(t)) and (1 − η_2µ_2(t)) to control the interaction of susceptible/risky humans S_h(t) with infected humans M_h(t) and animals M_a(t); these represent the depletion in β and ξ, respectively, while η_1 and η_2 measure the effectiveness of the control measures µ_1(t) and µ_2(t) (where µ_1(t), µ_2(t) ∈ [0, 1]), respectively. The most successful framework is µ_1(t) ≡ 1 ≡ µ_2(t), which indicates that when the interaction of susceptible humans with infected humans and animals is almost perfectly avoided, the transmission of the disease is driven to zero. Here, µ_1(t) ≡ 1 ≡ µ_2(t) means a full response to the given control mechanism, while µ_1(t) ≡ 0 ≡ µ_2(t) implies no response. The intensities of the responses are associated with the behavior of the human population, and so these response intensities are represented by µ_1(t) and µ_2(t) as control measures. We maximize the responses using isolation so that individuals change their behavior, and the cost corresponds to a nonlinear function of µ_1(t) and µ_2(t). Thus, we wish to find the optimal response for risky individuals with the help of isolation as a control measure.
4.2 TREATMENT FOR INFECTED INDIVIDUALS
The treatment of infected individuals not only controls the number of the infected individuals but also influences its development. Although there is no proper treatment for the monkeypox virus infection, smallpox and monkeypox are genetically similar, and there are antiviral drugs against smallpox that can be used for treatment purposes. So in the present scenario, we assume the accessibility of treatment for the infected population. We introduce the term −η3µ3(t)Mh(t) as a treatment in the proposed model, where η3 represents the treatment rate associated with the intensity µ3(t). There are various costs associated with given medication, so we assume that the intensity of treatment control measure µ3(t) lies between 0 and unity. The control µ3(t) will attempt to change the fraction of the infected population to the recovered population.
4.3 RESTRICTION ON ANIMALS
While it may seem difficult to restrict animals from transmitting the monkeypox infection, it is possible. In the current situation, various countries have restricted the importation of animals (rodents) and non-human primates. Animals that are infected with monkeypox should be placed into quarantine immediately and isolated from other animals. Moreover, an animal that has had close contact with an infected animal should also be isolated and accordingly quarantined to observe the symptoms of monkeypox for 30 days. We introduce the control factor (1 − η_4µ_4(t)) to control the interactions of susceptible and infected animals, where µ_4(t) ∈ [0, 1].
In this section, the main objective is to obtain the optimal control strategy that minimizes the infected population with the aid of the above control measures and with the minimum associated cost. Thus, the admissible set of control measures µi(t) is defined by
U = {µi(t), i = 1, 2, . . . , 4 : 0 ≤ µi(t) ≤ 1, t ∈ [0, T ]} . We, therefore, develop the control problem by keeping in view the above strategies with the objective functional W ({µi}) to be minimized:
W(µ_i(t), i = 1, . . . , 4) = ∫_0^T h_1 M_h(t) dt + (1/2) ∫_0^T Σ_{i=1}^{4} κ_i µ_i^2(t) dt,   (3)
subject to

dS_h(t)/dt = Φ_h − β{1 − η_1µ_1(t)} M_h(t) S_h(t) − ξ{1 − η_2µ_2(t)} M_a(t) S_h(t) − ϑ S_h(t),
dM_h(t)/dt = β{1 − η_1µ_1(t)} M_h(t) S_h(t) + ξ{1 − η_2µ_2(t)} M_a(t) S_h(t) − {ϑ + ϑ_1 + r + η_3µ_3(t)} M_h(t),
dR_h(t)/dt = {r + η_3µ_3(t)} M_h(t) − ϑ R_h(t),
dS_a(t)/dt = Φ_a − ϕ{1 − η_4µ_4(t)} M_a(t) S_a(t) − α S_a(t),
dM_a(t)/dt = ϕ{1 − η_4µ_4(t)} M_a(t) S_a(t) − {α + α_1} M_a(t),   (4)
with the initial population sizes in Eq. (2). In Eq. (3), the integrand represents the value of cost at time t, while the function W shows the sum of the cost described by the integrand or the total incurred cost. The parameters h1 and κi’s are non-negative parameters that are weight constants to balance the units of the integrand. The control measures µ∗i (i = 1, 2, 3, 4) exist in the admissible control set U that minimize W . We now discuss the existence of optimal control for our proposed control problem (4), then use the well-known Pontryagin’s maximum principle for characterization and getting the necessary conditions of the optimal controls. The following result will be presented to ensure the existence of µ∗i that minimizes the function W .
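For completeness, the objective functional (3) can be approximated on a discrete time grid once state and control trajectories are available; the following sketch uses the trapezoidal rule, with placeholder weights h_1 and κ_i.

```python
import numpy as np

def objective_W(t, M_h, u, h1=1.0, kappa=(1.0, 1.0, 1.0, 1.0)):
    """Trapezoidal approximation of W = int h1*M_h dt + 0.5 * int sum_i kappa_i * u_i^2 dt.

    t   : (K,) time grid
    M_h : (K,) infected-human trajectory
    u   : (4, K) control trajectories mu_1..mu_4
    """
    infection_cost = h1 * np.trapz(M_h, t)
    control_cost = 0.5 * sum(k * np.trapz(u[i] ** 2, t) for i, k in enumerate(kappa))
    return infection_cost + control_cost
```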
Theorem 2 There exist optimal controls µ∗(t) = (µ1(t), µ2(t), µ3(t), µ4(t)) in U that minimize the objective function W associated with the control problem in Eqs.(4)–(3).
Since the above result ensures the existence of the controls to minimize the objective functional (3) subject to the state system (4), we then derive the necessary conditions for characterization of the optimal control problem using Pontryagin’s maximum principle, see Theorem 3 in the appendix.
5 NUMERICAL EXPERIMENTS
We perform numerical experiments to test the model predictions and verify the analytical findings. We utilize a well-known numerical procedure of the Runge-Kutta method of the 4th order. First, we discretize the model and develop the algorithm to perform the numerical simulations.
5.1 DISCRETIZATION
To discretize the model, we set

X = (S_h, M_h, R_h, S_a, M_a)^⊤ ,

F = ( Φ_h − β M_h S_h − ξ M_a S_h − ϑ S_h ,
      β M_h S_h + ξ M_a S_h − (ϑ + ϑ_1 + r) M_h ,
      r M_h − ϑ R_h ,
      Φ_a − ϕ M_a S_a − α S_a ,
      ϕ M_a S_a − (α + α_1) M_a )^⊤ ,

Y = (φ_1, φ_2, φ_3, φ_4, φ_5)^⊤ ,

G = ( Φ_h − β{1 − η_1µ_1} M_h S_h − ξ{1 − η_2µ_2} M_a S_h − ϑ S_h ,
      β{1 − η_1µ_1} M_h S_h + ξ{1 − η_2µ_2} M_a S_h − {ϑ + ϑ_1 + r + η_3µ_3} M_h ,
      {r + η_3µ_3} M_h − ϑ R_h ,
      Φ_a − ϕ{1 − η_4µ_4} M_a S_a − α S_a ,
      ϕ{1 − η_4µ_4} M_a S_a − {α + α_1} M_a )^⊤ ,

and

H = ( {φ_1 − φ_2}{β(1 − η_1µ_1^*) M_h^* + ξ(1 − η_2µ_2^*) M_a^*} + ϑ φ_1 ,
      {φ_1 − φ_2}{β(1 − η_1µ_1^*) S_h^*} − {ϑ + ϑ_1 + r + η_3µ_3^*} φ_2 − {r + η_3µ_3^*} φ_3 − h_1 ,
      ϑ φ_3 ,
      {φ_4 − φ_5}{ϕ(1 − η_4µ_4^*) M_a^*} − α φ_4 ,
      {φ_1 − φ_2}{ξ(1 − η_2µ_2^*) S_h^*} + {α + α_1} φ_5 + {φ_4 − φ_5}{ϕ(1 − η_4µ_4^*) S_a^*} )^⊤ .

Then Eq.(1), Eq.(4), and Eq.(6) can be recast as

dX(t)/dt = F(t, X(t)),    dX(t, µ)/dt = G(t, X(µ)),    dY(t)/dt = H(t, φ).   (5)

The application of the forward and backward Runge–Kutta methods of order four gives

X_{i+1} = X_i + (l/6)(h_1 + 2h_2 + 2h_3 + h_4) ,    Y_{i−1} = Y_i − (l/6)(k_1 + 2k_2 + 2k_3 + k_4) ,   (6)

where

h_1 = F(t_n, X_n),   h_2 = F(t_n + l/2, X_n + l h_1/2),   h_3 = F(t_n + l/2, X_n + l h_2/2),   h_4 = F(t_n + l, X_n + l h_3),
k_1 = F(t_n, X_n),   k_2 = F(t_n − l/2, X_n − l k_1/2),   k_3 = F(t_n − l/2, X_n − l k_2/2),   k_4 = F(t_n − l, X_n − l k_3).
Thus, the overall procedure can be summarized as follows:

Algorithm 1 Runge–Kutta Method (RK4)
1: Input: endpoints t_0, t_max, integer n, parameter values, initial conditions
2: Output: approximations of S_h, M_h, R_h, S_a, M_a at (n + 1) values of t
3: Parameters and initial conditions: set the values of the epidemic parameters and the initial sizes of the compartmental populations
4: for i = 1, · · · , n do
5:   Recursive formula: X_{i+1} for both the controlled and the uncontrolled system, as given in Eq.(5) and Eq.(6)
6: end for
7: for i = 1, · · · , n, j = n + 2 − i do
8:   Recursive formula: Y_{i−1}, as given in Eq.(5) and Eq.(6)
9: end for
10: Optimal control: plug in the optimal control variables as given by Eq.(7)
11: Output: (t_i, S_h^{i+1}, M_h^{i+1}, R_h^{i+1}, S_a^{i+1}, M_a^{i+1})
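A minimal Python sketch of the forward–backward sweep implied by Algorithm 1 is given below. The state and adjoint right-hand sides F and H are assumed to be supplied as functions of time, state/adjoint, and controls, and `update_controls` stands in for the control characterization of Eq.(7)/Theorem 3, which is not reproduced here; it is a placeholder, not the paper's exact update. The terminal condition φ(T) = 0 is also an assumption.

```python
import numpy as np

def forward_backward_sweep(F, H, update_controls, x0, t0, t_max, n, max_iter=50, tol=1e-4):
    """Iterate: forward RK4 for the states, backward RK4 for the adjoints, control update."""
    l = (t_max - t0) / n                       # step size
    t = t0 + l * np.arange(n + 1)
    u = np.zeros((4, n + 1))                   # initial guess for the controls
    for _ in range(max_iter):
        # forward sweep for the states X
        X = np.zeros((n + 1, len(x0)))
        X[0] = x0
        for i in range(n):
            h1 = F(t[i], X[i], u[:, i])
            h2 = F(t[i] + l / 2, X[i] + l * h1 / 2, u[:, i])
            h3 = F(t[i] + l / 2, X[i] + l * h2 / 2, u[:, i])
            h4 = F(t[i] + l, X[i] + l * h3, u[:, i])
            X[i + 1] = X[i] + (l / 6) * (h1 + 2 * h2 + 2 * h3 + h4)
        # backward sweep for the adjoints Y, with assumed transversality Y(T) = 0
        Y = np.zeros((n + 1, len(x0)))
        for i in range(n, 0, -1):
            k1 = H(t[i], Y[i], X[i], u[:, i])
            k2 = H(t[i] - l / 2, Y[i] - l * k1 / 2, X[i], u[:, i])
            k3 = H(t[i] - l / 2, Y[i] - l * k2 / 2, X[i], u[:, i])
            k4 = H(t[i] - l, Y[i] - l * k3, X[i], u[:, i])
            Y[i - 1] = Y[i] - (l / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        # control update from the optimality condition (placeholder for Eq.(7)), projected onto [0, 1]
        u_new = np.clip(update_controls(X, Y), 0.0, 1.0)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = 0.5 * (u + u_new)                  # damped update for numerical stability
    return t, X, Y, u
```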
5.2 DISCUSSION
We perform numerical simulations to discuss sensitivity analysis and the application of optimal control strategies. We conduct numerical experiments to present the validation of our theoretical findings for the model parameters and initial sizes of populations as specified in Table 2. It could be noted that some of the parameters are directly correlated to the basic reproductive number, Ro, while some are negatively correlated, as shown in Figure 2. Figure 3 represents the contour plot that describes the dependency of the basic reproductive number, Ro, on β (disease transmission co-efficient of humans) and ϕ (disease transmission co-efficient of animals); β and ϑ (natural death rate); β and r (recovery rate of infected population); and ϕ and α1 (the death rate arises from monkeypox in animal population). It is very much clear from Table 1 and Figure 2 that the epidemic parameters, namely, β and ϕ have positive indices, and are also associated with the susceptible population. If we increase the value of β and ϕ, it will increase the value of basic reproductive number Ro and cross the value Ro = 1, which leads to a substantial outbreak of the monkeypox virus transmission. To maintain the value of the basic reproductive number and deduce a favorable
control method to eradicate the spread of monkeypox virus transmission, we need to restrict the value of these parameters, which is possible with the help of contact tracing and maintaining social distancing. On the other hand, the parameters with negative indices are ϑ, r, α, and α1 as shown in Figure 2, while the relative effect is reported in Figure 3. Moreover, whenever the value of those parameters having negative indices increases, the basic reproductive number Ro will decrease, and if its value becomes less than unity, the infection will no longer persist. So, treatment of infected humans through medication has been incorporated with the aid of optimal control to eliminate the monkeypox virus. To observe the influence of intervention strategies on the transmission dynamics of monkeypox, we perform the numerical simulations of the proposed optimal control problem with the help of the Runge-Kutta (RK4) scheme, as concluded in Algorithm 1. Moreover, the time frame is taken to be 20 units, and the value of parameters are borrowed from Table 2 while investigating and implementing the optimal control mechanism. To investigate the effect of optimal control measures for the monkeypox virus transmission, we execute the proposed problem in two folds: without control µ1(t) = µ2(t) = µ3(t) = µ4(t) = 0 and with the combination of four controls (µ1(t), µ2(t), µ3(t), µ4(t)) which leads to the results as presented in Figure 4. These graphs respectively demonstrate the dynamics of human and animal compartments of the proposed model under no control and with controls. The black dashed and red dashed curves respectively represent the dynamics of each compartmental population with and without the utilization of control strategies to highlight the effect of optimal policies implementation, see Figure 4. A significant reduction in the infected population, as well as, an increase in the non-infected population can be seen with the application of optimal control measures, whenever, compared without intervention strategies. We conclude that the combination of suggested optimal control measures can achieve a significant reduction in monkeypox virus transmission whenever applied in a true sense.
6 CONCLUSION
Keeping the current scenario of monkeypox virus transmission in mind, we investigated and proposed a model for the dynamics of monkeypox virus transmission. Both theoretical and numerical analyses of the proposed epidemic problem have been carried out with the aid of stability theory. Positivity, as well as boundedness, of the states of the epidemic problem guarantees that the considered model is a well-defined dynamical system. We analyzed the local and global dynamics of the model and derived the stability conditions. The proposed model has several epidemic parameters, and so, with the help of a normalized sensitivity analysis, the most significant parameters were quantified. It could be observed that both human-to-human and animal-to-human transmission play a significant role in the spread of the disease. Besides the disease transmission rates, the parameter r is also very effective and has a major effect on the infected individuals and R_o. Moreover, we modified the proposed model by incorporating optimal measures with the aid of control theory to mitigate the monkeypox disease burden, to describe the effect of implementing control policies, and, as a result, to stop the disease from spreading. The findings of our optimal control problem show that maintaining social distancing, contact tracing, restricting the transportation of animals, and treating infected individuals may support mitigating monkeypox virus spread in the current scenario of the epidemic.
Given the increased development of fractional calculus, many operators of fractional orders were introduced to capture more valuable information. In our future work, we will generalize the proposed model to its associated fractional version to discuss the dynamics of monkeypox virus disease. | 1. What is the focus of the paper regarding Monkeypox spread dynamics?
2. Are there any concerns regarding the similarity between the current paper and a previous work?
3. How does the reviewer assess the contributions, introduction, proposed models, and major listed contributions of the paper compared to another related paper?
4. Does the reviewer have any questions or concerns regarding the paper's content, specifically the similarities with another paper?
5. How does the reviewer evaluate the clarity, quality, novelty, and reproducibility of the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces a compartmental model which allows for cross infections between animals and humans. The model is used to study the dynamics of the Monkeypox spread.
I have major concerns for this paper to be almost a double submission of the paper http://www.aimspress.com/article/doi/10.3934/mbe.2022633 (Stochastic modeling of the Monkeypox 2022 epidemic with cross-infection hypothesis in a highly disturbed environment). Now it could be a case that it is not but then there are too many similarities between both papers
Abstract reads almost the same
Introduction is more or less same (in both points I am talking about literal word to word copies).
Proposed models are same if you account for description in the appendix
Even the major listed contributions are same for both papers
Figure 1 is more or less identical.
Now a case can be made that first paper is essentially the model and this second paper is mainly about the optimal control. However, I doubt that as even in first paper authors talk about scenarios which are akin to the optimal control setting here and importantly in words of authors itself, they list their main contributions as:-
"
* The cross infection between humans and animals plays a significant role in the dynamics
of monkeypox virus transmission. We, therefore, propose a model based on the hypothesis
of cross-infection between humans and animals. The model has two blocks: humans and
animals.
• The first block describes the evolution of monkeypox in the human population, while the
second block represents the evolution of the monkeypox virus among animals.
• Four time-dependent control measures are then introduced in the model to demonstrate
the utilization of optimal control measures: to minimize the infectious individuals and
maximize the recovered population. Particularly, reducing the risk of disease transmission
by educating people to rise awareness of risk factors, treatment of infected individuals, and
restrictions on animals.
"
Now first two can't be contributions of this paper in presence of the first paper I am talking about. Also quite mysteriously the first paper is nowhere cited in the main paper which raises even more concern.
I will finish my review here and hope that authors prove me wrong and their rebuttal makes me look at paper differently to evaluate other points.
Strengths And Weaknesses
look above
Clarity, Quality, Novelty And Reproducibility
look above |
ICLR | Title
Monkeypox with Cross Infection Hypothesis via Epidemiological Mode
Abstract
A new re-emerging infectious disease of monkeypox 2022 is structurally related to smallpox that is induced by the monkeypox viruses and has caused 59,606 active cases with 18 deaths up to September 15, 2022. To end this ongoing epidemic, there is a need for population-wide control policies like reducing social interaction by keeping social distance, treatment of infected individuals, and restriction on animals, etc. We forecast the progression of the epidemic and come up with an efficient control mechanism by formulating a mathematical model. The biological feasibility and dynamical behavior of the proposed model are then investigated together with sensitivity analysis to obtain the effect of various epidemic parameters mitigating the spread of the disease. Subsequently, by taking non-pharmaceutical and pharmaceutical intervention strategies as control measures, an optimal control theory is applied to mitigate the fatality of the disease to minimize the infectious population and reduce the cost of controls, we construct an objective functional and solve it by using Pontryagin’s maximum principle. Finally, extensive numerical simulations are performed to show the impact of the application of intervention mechanisms in controlling the transmission of the monkeypox epidemic.
1 INTRODUCTION
The infection of monkeypox is a contagious disease resulting from the orthopoxvirus. This infection is zoonotic and was initially transported to humans by wild rodents in central and western Africa. But human-to-human spread (horizontal transmission) is also possible, particularly within the family home or in the context of care (Farahat et al., 2022). The monkeypox viruses can be diffused by immediate contact with lesions on the skin or mucous membranes of a sick person, as well as by droplets (sneezing, saliva, sputters, etc.) (Singh et al., 2021). Generally, an individual can become infected through contact with patient’s environment. It is, therefore, important that patients respect isolation measures throughout the illness. Humans can also become infected through active contact with animals (rodents and monkeys) (Oladoye, 2021). Usually, the monkeypox infection starts from fever, headaches, body aches, weakness, etc. (Deresinski, 2022). The symptoms may lead to the appearance of a blistering rash consisting of fluid-filled blisters that progress to dryness and crusting, then scarring and itching after two days. The bubbles are most concentrated on the face, the forehands, and the feet soles. The disease is more severe in children as well as those who have weak immune systems. Historically, monkeypox was identified first in the 1970s, but recently the reemerging of the disease, cases are reported in various countries around the globe (ASSESSMENT, 2022). Usually monkeypox virus transmits from human interaction, but there is a significant risk of cross-infection (animal-to-human) spread (Petersen et al., 2019). Therefore, the hypothesis of cross-infection between human and animals play a significant role and can not be neglected.
Modeling and forecasting with the aid of dynamical system is a challenging domain in various discipline, e.g., infectious disease epidemiology (Brauer, 2017; Saravanakumar et al., 2020; Guo et al., 2020), health sciences (Choi et al., 2016), and various other fields of applied science and technology (Rolnick et al., 2022), and therefore attracted the considerable attention of researchers, see for instance, (Das et al., 2020b; Yin et al., 2021; Saha et al., 2021). Similarly, various models demonstrate different outlooks regarding the dynamical behavior of an epidemic (Busenberg & Cooke, 2012; Khajanchi et al., 2018; Das et al., 2020a). With the aim of these mathematical models, researchers want to understand the dynamics of a disease and then suggest control strategies to control or completely eradicate the infection (Chen & Guo, 2016; Kumar et al., 2019). Besides the rich literature on
infectious disease epidemiology, there have been no enough studies found to represent the temporal dynamics of monkeypox 2022, to the best of our knowledge. We try to formulate a model which describes the transmission of monkeypox 2022 to understand the dynamics and suggest a control mechanism with the aid of optimal control theory. We summarize our contributions in this work as follows:
• The cross infection between humans and animals plays a significant role in the dynamics of monkeypox virus transmission. We, therefore, propose a model based on the hypothesis of cross-infection between humans and animals. The model has two blocks: humans and animals.
• The first block describes the evolution of monkeypox in the human population, while the second block represents the evolution of the monkeypox virus among animals.
• Four time-dependent control measures are then introduced in the model to demonstrate the utilization of optimal control measures: to minimize the infectious individuals and maximize the recovered population. Particularly, reducing the risk of disease transmission by educating people to rise awareness of risk factors, treatment of infected individuals, and restrictions on animals.
2 RELATED WORK
The analysis of infectious diseases with the aid of dynamical systems is a fascinating outlook to predict the dynamics of an epidemic. In the history of infectious disease epidemiology, Kermack and McKendrick were the pioneers to develop the three-population-group epidemiological model (susceptible-infectious-recovered) (Kermack & McKendrick, 1927), where various population groups are employed to signify the infection, demonstrating their progression and interactions. The classical susceptible-infectious-recovered model formulated by Kermack and McKendrick has been modified by incorporating an exposed compartment known as the susceptibleexposed-infectious-recovered model (Anderson & May, 1979), which is also extensively used to delineate the transmission of distinct diseases. Data-driven modeling methods have been also used to investigate the transmission dynamics of infectious diseases (Heesterbeek et al., 2015). It is worthy to mention that the idea of the classical susceptible-infectious-removed model has been further investigated by various researchers to observe the transmission dynamics of distinct epidemics (see for instance Flaxman et al. (2020); Samui et al. (2020); Britton et al. (2020)). Optimal control theory has been extensively used and is very common in infectious disease epidemiology (Rohith & Devika, 2020; Khajanchi et al., 2021). The adjustment of epidemic parameters in a feasible way, by taking the limits on the system to optimize a given function, can be applied with the help of control theory. Both non-pharmaceutical and pharmaceutical control measures can be used to control the infection. Especially, the non-pharmaceutical intervention strategies play a key role. Usually, with the help of optimal control analysis, we are able to know how to eradicate the disease.
A re-emerging infectious disease of monkeypox 2022, was reported in May 2022. Here, we are interested to formulate a model by taking an extended susceptible–infected–recovered-type model with two compartmental blocks: humans and animals. We then discuss the qualitative analysis of the proposed two-strained model. Further, applying the theory of optimal control to understand the progression of the monkeypox virus transmission. Since it is not merely a medical problem, regarding a public health concern, both the combination of non-pharmaceutical and pharmaceutical intervention can be taken into account to propose a control mechanism for the control of monkeypox virus transmission.
3 THE MODEL
We propose an epidemiological model for the dynamics of monkeypox based on the cross infection hypothesis: animal to human and human to human. The various compartmental population of the model divided into two blocks: human and animal. The first block represents the evolution of the human population, consequently distributed into three epidemiological groups: sensitive individuals, infected by monkeypox, and recovered individuals, while the second block represents the evolution of animals, divided into two classes: susceptible animals and infected animals. To symbolize the population groups, let us assume that Sh(t) represents the sensitive individuals at time t, which are not infected but have a chance to be infected at time t+∆t (∆t is the small increment in time). Mh denotes the individual infected with monkeypox, and Rh is the recovered individuals. Similarly, the susceptible and infected animals are denoted by Sa(t) and Ia(t), respectively. Due to the assumption of a homogeneously mixed population for the successful transmission of the monkeypox virus, the risky humans will enter the infected human compartment at a rate β, as well as, the susceptible animal getting infected will move to the infected animal at a rate ϕ. The individual leaves the infected human class only after they fully recover or die. The recovered individuals will enter Rh. Moreover, the complete geometry of the epidemic problem is described by Figure 1, and thus, the evolution of the disease is represented by the following deterministic system of differential equations: dSh(t) dt = Φh − βMh(t)Sh(t)− ξMa(t)Sh(t)− ϑSh(t), dMh(t) dt = βMh(t)Sh(t) + ξMa(t)Sh(t)− (ϑ+ ϑ1 + r)Mh(t), dRh(t) dt = rMh(t)− ϑRh(t), dSa(t)
dt = Φa − ϕMa(t)Sa(t)− αSa(t),
dMa(t)
dt = ϕMa(t)Sa(t)− (α+ α1)Ma(t),
(1) with biologically feasible non-negative initial population sizes
Sh(0) > 0, Mh(0) ≥ 0, Rh(0) ≥ 0, Sa(0) > 0, Ma(0) ≥ 0. (2)
In the above epidemic problem (1)-(2), the parameters are described as: the newborn of human and animal are denoted by Φh and Φa, respectively, while the monkeypox virus transmission rates are β and ξ. The natural death rate of a human is assumed to be ϑ, while the same ratio for the animal population is denoted by α. The monkeypox virus transmission rate from one animal to another is assumed to be ϕ, and α1 is the death rate that arises from the infection of monkeypox virus in the animal population. Moreover, the disease-induced death rate of a human is represented by ϑ1, and r is the recovery rate of an infected human.
To proceed, first, we show the mathematical, as well as, the biological feasibility of the proposed epidemic problem. To this end, we show the following result.
Proposition 1 The solution of the model (1)-(2) is positive and bounded.
3.1 DYNAMICAL ANALYSIS
In this section, we discuss the temporal dynamics of the model to find the stability conditions for the monkeybox epidemic model. We find the monkeypox-free equilibrium state for the developed model (1) and calculate the reproductive number. Let W0 is the monkeypox-free equilibrium of the model, then, W0 = ( Φh ϑ , 0, 0, Φa α , 0 ) . Moreover, the reproductive parameter, denoted by Ro,
Parameters Indices % Increase or Decrease Impact on Ro
represents the average of the secondary infectious produced by an infective whenever put into a sensitive/susceptible individual. To calculate this quantity for the model reported in Eq.(1) by following (Van den Driessche & Watmough, 2002), let us assume that X = (Mh,Ma)⊤, then
dX
dt = F − V,
where, F and V are the 2 by 2 variational matrices at the monkeypox-free equilibrium defined as
F =
( βΦh ϑ ξΦa α
0 ϕΦaα
) , V = ( ϑ+ ϑ1 + r 0
0 α+ α1
) .
The reproductive number is the spectral radius of the matrix FV −1 and takes the following form
Ro = Rh +Ra, Rh = βΦh
ϑ (ϑ+ ϑ1 + r) , Ra = ϕΦa α (α+ α1) .
Since the reproductive number is the expected average number of secondary infections. It is concluded that whenever Ro < 1 the disease will die out, otherwise spread if Ro > 1.
3.2 BIOLOGICAL INTERPRETATION OF THE BASIC REPRODUCTIVE NUMBER
Since the initial spread of any epidemic is related to the reproductive number, we analyze the normalized sensitivity of the proposed system parameters. The sensitivity of the threshold quantity will enable us to recognize the most sensitive and effective parameters for disease transmission, because a small perturbation in the most sensitive parameter can produce a great influence on the associated epidemic model. To present the prediction for the prevalence of the monkeypox disease, reduction, and persistence in the transmission of infection, we perform sensitivity analysis of the basic reproductive number Ro. Let us assume that γ is any epidemic parameter, then the normalized forward sensitivity co-efficient (index) related to the basic reproductive number Ro is defined by:
ΥRoγ = ∂Ro ∂γ × γ Ro .
It is clear from the formula of normalized sensitivity coefficient that it may be dependent or independent of the model parameters. We calculate the associated sensitivity indices accordingly listed in Table 1. It can be observed that some of the indices of the model parameters are negative and some are positive. The negative and positive signs demonstrate that the perturbation to these parameters can produce a decrease or increase in the value of the basic reproductive number, respectively. For example, the forward sensitivity index of the parameter β is ΥRoβ = 0.3677, which indicates that if we increase the value of β by 10%, as a result, the value of the basic reproductive number Ro would increase by 3.677%. Similarly, the forward sensitivity indices of ϑ1 and ϕ are 0.0288 and 0.6322, respectively, which implies that if we perturb the value of ϑ1 and ϕ by 10% it would result in increase or decrease in the value of the basic reproductive number by 0.288% and 6.322%, respectively. On the other hand, the sensitivity indices of r, α, and α1 are negatively associated with the basic reproductive number, i.e., increasing their values would decrease the value of Ro. If we increase the value of r, α, and α1 by 10% it casts the decrease of 15.961% in the value of Ro. In this analysis, we observed that the most effective parameters of the proposed epidemic problem are β, r, ϕ, α1, and α, therefore, special attention is required for the parameters with highly sensitive indices to forecast the transmission of monkeypox disease. We now state the dynamics of the model by proving the following results.
Theorem 1 If Ro < 1, then the dynamical system (1) is locally and globally asymptotically stable around the monkeypox-free equilibrium state of the model and unstable if it is greater than unity.
4 OPTIMAL CONTROL
The application of optimal control theory is one of the important theoretical analyses associated with infectious diseases. We use this tool to produce the proper control mechanism for eliminating infection of monkeypox virus transmission. Our analysis is not limited to the theoretical analysis, and we also perform some numerical experiments to show the effect of the proposed control strategies on the dynamics of monkeypox virus transmission. The key goal is to reduce the infected humans and animals while maximizing the recovered human using the optimal control theory. The parameters with certain assumptions lead to the monkeypox model as described by Eq.(1), which is a coupled system with five state variables (Sh(t),Mh(t), Rh(t), Sa(t),Ma(t)). We introduce four control measures µi(t) (i = 1, 2, . . . , 4) that control the number of risky and infected individuals externally over a given time frame.
4.1 REDUCING THE RISK OF HUMAN TO HUMAN AND ZOONOTIC TRANSMISSION
The main prevention measure for monkeypox is educating people to rise awareness of risk factors about the control measures they can take for the reduction of the virus transmission. Due to sufficient information, the population needs to maintain social distance, wear masks, follow the strategy of home isolation, etc. Thus, we introduce the control factors (1 − η1µ1(t)) and (1 − η2µ2(t)) to control the interaction between susceptible/risky human Sh(t) with infected human Mh(t) and animal Ma(t), which represent the depletion in β and ξ, respectively, while η1 and η2 compute the usefulness of the control measure µ1(t) and µ2(t) (where µ1(t), µ2(t) ∈ [0, 1]), respectively. The most successful framework is µ1(t) ≡ 1 ≡ µ2(t), which indicates that when the interaction of susceptible humans with infected humans and animals is almost perfectly avoided it makes the transmission of the disease to zero. Here, µ1(t) ≡ 1 ≡ µ2(t) means fully response by implementing the given control mechanism, while µ1(t) ≡ 0 ≡ µ2(t) implies no response. The intensities of the responses are associated with the behavior of the human population, and so these response intensities are represented by µ1(t) and µ2(t) as control measures. We maximize the responses using isolation so that they change their behavior and the cost will correspond to a nonlinear function of µ1(t) and µ2(t). Thus, we wish to find the optimal response for risky individuals with the help of isolation as a control measure.
4.2 TREATMENT FOR INFECTED INDIVIDUALS
The treatment of infected individuals not only controls the number of infected individuals but also influences the development of the epidemic. Although there is no specific treatment for monkeypox virus infection, smallpox and monkeypox are genetically similar, and antiviral drugs against smallpox can be used for treatment purposes. In the present scenario, we therefore assume that treatment is accessible for the infected population. We introduce the term −η3µ3(t)Mh(t) as a treatment in the proposed model, where η3 represents the treatment rate associated with the intensity µ3(t). Since medication carries various costs, we assume that the intensity of the treatment control measure µ3(t) lies between 0 and unity. The control µ3(t) moves a fraction of the infected population to the recovered population.
4.3 RESTRICTION ON ANIMALS
Although it may seem difficult to restrict animals from transmitting the monkeypox infection, it is possible. In the current situation, various countries have restricted the importation of animals (rodents) and non-human primates. Animals that are infected with monkeypox should be placed into quarantine immediately and isolated from other animals. Moreover, an animal that has been in close contact with an infected animal should also be isolated and quarantined for 30 days to observe for symptoms of monkeypox. We introduce the control factor (1 − η4µ4(t)) to control the interactions of susceptible and infected animals, where µ4(t) ∈ [0, 1].
In this section, the main objective is to obtain the optimal control strategy that minimizes the infected population with the aid of the above control measures and with the minimum associated cost. Thus, the admissible set of control measures µi(t) is defined by
U = \{\mu_i(t),\ i = 1, 2, \ldots, 4 \,:\, 0 \le \mu_i(t) \le 1,\ t \in [0, T]\}.

We, therefore, develop the control problem, taking the above strategies into account, with the objective functional W(\{\mu_i\}) to be minimized:

W(\mu_i(t),\ i = 1, \ldots, 4) \;=\; \int_{0}^{T} h_1 M_h(t)\, dt \;+\; \frac{1}{2}\int_{0}^{T} \sum_{i=1}^{4} \kappa_i\, \mu_i^{2}(t)\, dt, \qquad (3)

subject to

\frac{dS_h(t)}{dt} = \Phi_h - \beta\{1-\eta_1\mu_1(t)\} M_h(t) S_h(t) - \xi\{1-\eta_2\mu_2(t)\} M_a(t) S_h(t) - \vartheta S_h(t),

\frac{dM_h(t)}{dt} = \beta\{1-\eta_1\mu_1(t)\} M_h(t) S_h(t) + \xi\{1-\eta_2\mu_2(t)\} M_a(t) S_h(t) - \{\vartheta+\vartheta_1+r+\eta_3\mu_3(t)\} M_h(t),

\frac{dR_h(t)}{dt} = \{r+\eta_3\mu_3(t)\} M_h(t) - \vartheta R_h(t),

\frac{dS_a(t)}{dt} = \Phi_a - \phi\{1-\eta_4\mu_4(t)\} M_a(t) S_a(t) - \alpha S_a(t),

\frac{dM_a(t)}{dt} = \phi\{1-\eta_4\mu_4(t)\} M_a(t) S_a(t) - \{\alpha+\alpha_1\} M_a(t), \qquad (4)
with the initial population sizes in Eq. (2). In Eq. (3), the integrand represents the cost at time t, while the functional W gives the total incurred cost obtained by integrating it. The parameters h1 and κi are non-negative weight constants that balance the units of the integrand. We seek control measures µ∗i (i = 1, 2, 3, 4) in the admissible control set U that minimize W. We first discuss the existence of optimal controls for the proposed control problem (4), and then use the well-known Pontryagin's maximum principle to characterize them and obtain the necessary conditions of the optimal controls. The following result ensures the existence of µ∗i that minimize the functional W.
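As an illustration, the objective functional in Eq. (3) can be approximated on a discretized trajectory by numerical quadrature. The sketch below (Python/NumPy; the trapezoidal rule and the placeholder trajectories are illustrative choices, not prescribed by the paper) evaluates W from sampled values of Mh(t) and the controls µi(t) on a time grid.

```python
import numpy as np

def objective_W(t, M_h, mu, h1, kappa):
    """Approximate Eq. (3): int_0^T h1*Mh dt + 0.5 * int_0^T sum_i kappa_i*mu_i(t)^2 dt.

    t     : (n,) time grid on [0, T]
    M_h   : (n,) trajectory of infected humans Mh(t)
    mu    : (4, n) control trajectories mu_1(t), ..., mu_4(t)
    h1    : weight on the infected population
    kappa : (4,) weights of the quadratic control costs
    """
    infection_cost = h1 * np.trapz(M_h, t)
    control_cost = 0.5 * sum(k * np.trapz(m ** 2, t) for k, m in zip(kappa, mu))
    return infection_cost + control_cost

# Illustrative call with placeholder trajectories.
t = np.linspace(0.0, 20.0, 201)
W = objective_W(t, 50.0 * np.exp(-0.1 * t), np.full((4, t.size), 0.5),
                h1=1.0, kappa=np.ones(4))
```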
Theorem 2 There exist optimal controls µ∗(t) = (µ∗1(t), µ∗2(t), µ∗3(t), µ∗4(t)) in U that minimize the objective functional W associated with the control problem in Eqs. (3)–(4).
Since the above result ensures the existence of controls minimizing the objective functional (3) subject to the state system (4), we next derive the necessary conditions characterizing the optimal control problem using Pontryagin's maximum principle; see Theorem 3 in the appendix.
5 NUMERICAL EXPERIMENTS
We perform numerical experiments to test the model predictions and verify the analytical findings. We utilize the well-known fourth-order Runge-Kutta method. First, we discretize the model and develop the algorithm used to perform the numerical simulations.
5.1 DISCRETIZATION
To discretize the model, we set

X = \begin{pmatrix} S_h \\ M_h \\ R_h \\ S_a \\ M_a \end{pmatrix}, \qquad
F = \begin{pmatrix}
\Phi_h - \beta M_h S_h - \xi M_a S_h - \vartheta S_h \\
\beta M_h S_h + \xi M_a S_h - (\vartheta + \vartheta_1 + r) M_h \\
r M_h - \vartheta R_h \\
\Phi_a - \phi M_a S_a - \alpha S_a \\
\phi M_a S_a - (\alpha + \alpha_1) M_a
\end{pmatrix},

Y = \begin{pmatrix} \varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \varphi_4 \\ \varphi_5 \end{pmatrix}, \qquad
G = \begin{pmatrix}
\Phi_h - \beta\{1-\eta_1\mu_1\} M_h S_h - \xi\{1-\eta_2\mu_2\} M_a S_h - \vartheta S_h \\
\beta\{1-\eta_1\mu_1\} M_h S_h + \xi\{1-\eta_2\mu_2\} M_a S_h - \{\vartheta+\vartheta_1+r+\eta_3\mu_3\} M_h \\
\{r+\eta_3\mu_3\} M_h - \vartheta R_h \\
\Phi_a - \phi\{1-\eta_4\mu_4\} M_a S_a - \alpha S_a \\
\phi\{1-\eta_4\mu_4\} M_a S_a - \{\alpha+\alpha_1\} M_a
\end{pmatrix},

and

H = \begin{pmatrix}
\{\varphi_1-\varphi_2\}\,\{\beta(1-\eta_1\mu_1^{*}) M_h^{*} + \xi(1-\eta_2\mu_3^{*}) M_a\} + \varphi_1^{*}\vartheta \\
\{\varphi_1-\varphi_2\}\,\{\beta(1-\eta_1\mu_1^{*}) S_h^{*}\} - \{\vartheta+\vartheta_1+r+\eta_3\mu_3^{*}\}\varphi_2 - \{r+\mu_3^{*}\}\varphi_3 - h_1 \\
\vartheta\varphi_3 \\
\{\varphi_4-\varphi_5\}\,\{\phi(1-\eta_4\mu_4^{*}) M_a\} - \alpha\varphi_4 \\
\{\varphi_1-\varphi_2\}\,\{\xi(1-\eta_2\mu_2^{*}) S_h^{*}\} + \{\alpha+\alpha_1\}\varphi_5 + \{\varphi_4-\varphi_5\}\,\{\phi(1-\eta_4\mu_4^{*}) S_a^{*}\}
\end{pmatrix};

then Eq. (1), Eq. (4), and Eq. (6) can be recast as

\frac{dX(t)}{dt} = F(t, X(t)), \qquad \frac{dX(t,\mu)}{dt} = G(t, X(\mu)), \qquad \frac{dY(t)}{dt} = H(t, \varphi). \qquad (5)
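For readers who wish to reproduce the simulations, a minimal Python/NumPy sketch of the uncontrolled right-hand side F and the controlled right-hand side G is given below. The state ordering (Sh, Mh, Rh, Sa, Ma) follows the definition of X above; the parameter names are placeholders and their values must be taken from Table 2.

```python
import numpy as np

def F(t, X, p):
    """Uncontrolled right-hand side of the monkeypox model, Eq. (1)."""
    Sh, Mh, Rh, Sa, Ma = X
    dSh = p["Phi_h"] - p["beta"] * Mh * Sh - p["xi"] * Ma * Sh - p["vartheta"] * Sh
    dMh = p["beta"] * Mh * Sh + p["xi"] * Ma * Sh - (p["vartheta"] + p["vartheta1"] + p["r"]) * Mh
    dRh = p["r"] * Mh - p["vartheta"] * Rh
    dSa = p["Phi_a"] - p["phi"] * Ma * Sa - p["alpha"] * Sa
    dMa = p["phi"] * Ma * Sa - (p["alpha"] + p["alpha1"]) * Ma
    return np.array([dSh, dMh, dRh, dSa, dMa])

def G(t, X, mu, p):
    """Controlled right-hand side, Eq. (4); mu = (mu1, mu2, mu3, mu4), each in [0, 1]."""
    Sh, Mh, Rh, Sa, Ma = X
    b = p["beta"] * (1.0 - p["eta1"] * mu[0])   # reduced human-to-human transmission
    x = p["xi"] * (1.0 - p["eta2"] * mu[1])     # reduced animal-to-human transmission
    f = p["phi"] * (1.0 - p["eta4"] * mu[3])    # reduced animal-to-animal transmission
    treat = p["eta3"] * mu[2]                   # treatment rate of infected humans
    dSh = p["Phi_h"] - b * Mh * Sh - x * Ma * Sh - p["vartheta"] * Sh
    dMh = b * Mh * Sh + x * Ma * Sh - (p["vartheta"] + p["vartheta1"] + p["r"] + treat) * Mh
    dRh = (p["r"] + treat) * Mh - p["vartheta"] * Rh
    dSa = p["Phi_a"] - f * Ma * Sa - p["alpha"] * Sa
    dMa = f * Ma * Sa - (p["alpha"] + p["alpha1"]) * Ma
    return np.array([dSh, dMh, dRh, dSa, dMa])
```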
The application of the forward and backward Runge-Kutta methods of order four gives

X_{i+1} = X_i + \frac{l}{6}\left(h_1 + 2h_2 + 2h_3 + h_4\right), \qquad
Y_{i-1} = Y_i - \frac{l}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right), \qquad (6)

where

h_1 = F(t_n, X_n), \quad
h_2 = F\!\left(t_n + \tfrac{l}{2},\, X_n + \tfrac{l h_1}{2}\right), \quad
h_3 = F\!\left(t_n + \tfrac{l}{2},\, X_n + \tfrac{l h_2}{2}\right), \quad
h_4 = F(t_n + l,\, X_n + l h_3),

k_1 = F(t_n, X_n), \quad
k_2 = F\!\left(t_n - \tfrac{l}{2},\, X_n - \tfrac{l k_1}{2}\right), \quad
k_3 = F\!\left(t_n - \tfrac{l}{2},\, X_n - \tfrac{l k_2}{2}\right), \quad
k_4 = F(t_n - l,\, X_n - l k_3).
The overall procedure is summarized in Algorithm 1.

Algorithm 1 Runge-Kutta Method (RK4)
1: Input: endpoints t0, tmax, integer n, parameter values, initial conditions
2: Output: approximations of Sh, Mh, Rh, Sa, Ma at (n + 1) values of t
3: Parameters and initial conditions: set the values of the epidemic parameters and the initial sizes of the compartmental populations
4: for i = 1, . . . , n do
5:    Recursive formula: compute Xi+1 for the systems with and without control, as given in Eq. (5) and Eq. (6)
6: end for
7: for i = 1, . . . , n, j = n + 2 − i do
8:    Recursive formula: compute Yj−1 as given in Eq. (5) and Eq. (6)
9: end for
10: Optimal control: plug in the optimal control variables as given by Eq. (7)
11: Output: (ti, S_h^{i+1}, M_h^{i+1}, R_h^{i+1}, S_a^{i+1}, M_a^{i+1})
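A sketch of the forward sweep of Algorithm 1 in Python is shown below; it reuses the right-hand side functions F and G defined after Eq. (5). The backward sweep for the adjoint variables Y applies the same update with a negative step to the adjoint right-hand side H, and the subsequent control update of Eq. (7) is omitted here since its explicit form is given in the appendix. All numerical values are placeholders, not those of Table 2.

```python
import numpy as np

def rk4_step(rhs, t, X, l, *args):
    """One forward RK4 step of size l for dX/dt = rhs(t, X, ...), cf. Eq. (6)."""
    h1 = rhs(t, X, *args)
    h2 = rhs(t + l / 2.0, X + l * h1 / 2.0, *args)
    h3 = rhs(t + l / 2.0, X + l * h2 / 2.0, *args)
    h4 = rhs(t + l, X + l * h3, *args)
    return X + (l / 6.0) * (h1 + 2.0 * h2 + 2.0 * h3 + h4)

def simulate(rhs, X0, t0, tmax, n, *args):
    """Integrate the state system forward over n RK4 steps and return the trajectory."""
    l = (tmax - t0) / n
    traj = [np.asarray(X0, dtype=float)]
    for i in range(n):
        traj.append(rk4_step(rhs, t0 + i * l, traj[-1], l, *args))
    return np.array(traj)

# Uncontrolled run with placeholder parameters and initial sizes (Sh, Mh, Rh, Sa, Ma).
params = {"Phi_h": 10.0, "Phi_a": 5.0, "beta": 0.002, "xi": 0.001, "phi": 0.004,
          "vartheta": 0.02, "vartheta1": 0.01, "r": 0.1, "alpha": 0.05, "alpha1": 0.02}
trajectory = simulate(F, [500.0, 10.0, 0.0, 200.0, 5.0], 0.0, 20.0, 400, params)
```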
5.2 DISCUSSION
We perform numerical simulations to discuss the sensitivity analysis and the application of optimal control strategies. The numerical experiments validate our theoretical findings for the model parameters and initial population sizes specified in Table 2. Some of the parameters are positively correlated with the basic reproductive number Ro, while others are negatively correlated, as shown in Figure 2. Figure 3 shows contour plots describing the dependence of the basic reproductive number Ro on β (disease transmission coefficient of humans) and ϕ (disease transmission coefficient of animals); on β and ϑ (natural death rate); on β and r (recovery rate of the infected population); and on ϕ and α1 (the death rate caused by monkeypox in the animal population). Table 1 and Figure 2 make clear that the epidemic parameters β and ϕ have positive indices and are associated with the susceptible population. Increasing the values of β and ϕ increases the basic reproductive number Ro; once it crosses Ro = 1, a substantial outbreak of monkeypox virus transmission follows. To keep the basic reproductive number low and deduce a favorable control method to eradicate the spread of monkeypox virus transmission, we need to restrict the values of these parameters, which is possible with the help of contact tracing and maintaining social distancing. On the other hand, the parameters with negative indices are ϑ, r, α, and α1, as shown in Figure 2, with their relative effect reported in Figure 3. Whenever the value of a parameter with a negative index increases, the basic reproductive number Ro decreases, and if it falls below unity, the infection will no longer persist. For this reason, treatment of infected humans through medication has been incorporated, with the aid of optimal control, to eliminate the monkeypox virus. To observe the influence of intervention strategies on the transmission dynamics of monkeypox, we perform numerical simulations of the proposed optimal control problem using the Runge-Kutta (RK4) scheme summarized in Algorithm 1. The time frame is taken to be 20 units, and the parameter values are taken from Table 2 while investigating and implementing the optimal control mechanism. To investigate the effect of the optimal control measures on monkeypox virus transmission, we run the proposed problem in two settings: without control, µ1(t) = µ2(t) = µ3(t) = µ4(t) = 0, and with the combination of the four controls (µ1(t), µ2(t), µ3(t), µ4(t)); the results are presented in Figure 4. These graphs show the dynamics of the human and animal compartments of the proposed model without and with controls. The black dashed and red dashed curves represent the dynamics of each compartmental population with and without the control strategies, respectively, highlighting the effect of implementing the optimal policies; see Figure 4. A significant reduction in the infected population, as well as an increase in the non-infected population, can be seen when the optimal control measures are applied, compared with the case without intervention strategies. We conclude that the combination of the suggested optimal control measures can achieve a significant reduction in monkeypox virus transmission when applied consistently.
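The dependence of Ro on pairs of parameters visualized in the contour plots of Figure 3 can be reproduced with a few lines of code. The sketch below (Python with NumPy and Matplotlib) evaluates Ro = Rh + Ra on a grid of (β, ϕ); the axis ranges and the fixed parameter values are placeholders, not those of Table 2.

```python
import numpy as np
import matplotlib.pyplot as plt

def R0(beta, phi, p):
    """Ro = beta*Phi_h/(vartheta*(vartheta+vartheta1+r)) + phi*Phi_a/(alpha*(alpha+alpha1))."""
    Rh = beta * p["Phi_h"] / (p["vartheta"] * (p["vartheta"] + p["vartheta1"] + p["r"]))
    Ra = phi * p["Phi_a"] / (p["alpha"] * (p["alpha"] + p["alpha1"]))
    return Rh + Ra

p = {"Phi_h": 10.0, "Phi_a": 5.0, "vartheta": 0.02, "vartheta1": 0.01,
     "r": 0.1, "alpha": 0.05, "alpha1": 0.02}              # placeholder values
beta = np.linspace(0.0, 0.01, 200)
phi = np.linspace(0.0, 0.02, 200)
B, P = np.meshgrid(beta, phi)
R = R0(B, P, p)

cs = plt.contourf(B, P, R, levels=20)
plt.contour(B, P, R, levels=[1.0], colors="red")           # epidemic threshold Ro = 1
plt.xlabel("beta"); plt.ylabel("phi"); plt.colorbar(cs, label="Ro")
plt.show()
```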
6 CONCLUSION
Keeping in mind the current scenario of monkeypox virus transmission, we proposed and investigated a model for the dynamics of monkeypox virus transmission. Both theoretical and numerical analyses of the proposed epidemic problem were carried out with the aid of stability theory. Positivity and boundedness of the states of the epidemic problem guarantee that the considered model is a well-defined dynamical system. We analyzed the local and global dynamics of the model and derived the stability conditions. The proposed model has several epidemic parameters, and with the help of normalized sensitivity analysis the most significant ones were quantified. It was observed that both human-to-human and animal-to-human transmission play a significant role in the spread of the disease. Besides the disease transmission rates, the parameter r is also very influential and has a major effect on the infected individuals and on Ro. Moreover, we extended the proposed model by incorporating optimal measures with the aid of control theory to mitigate the monkeypox disease burden, to describe the effect of implementing control policies and, as a result, to stop the disease from spreading. The findings of our optimal control problem show that maintaining social distancing, contact tracing, restricting the transportation of animals, and treating infected individuals may help mitigate the spread of the monkeypox virus in the current scenario of the epidemic.
Given the increased development of fractional calculus, many operators of fractional orders were introduced to capture more valuable information. In our future work, we will generalize the proposed model to its associated fractional version to discuss the dynamics of monkeypox virus disease. | 1. What is the focus of the paper regarding the dynamics of monkeypox?
2. What are the strengths and weaknesses of the proposed model, particularly in terms of its novelty and methodological contributions?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns or suggestions regarding the paper, such as improving the evaluation or discussing the model's applicability to other zoonotic diseases? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this work, the authors develop a differential equation-based model of the dynamics for monkeypox. The model explicitly includes both human and animal populations, as well as the cross-transmission from animal to human. The authors also propose several control measures affecting various transition rates in the model; a control theory approach is then used to derive optimal policies for the control measures. Finally, a limited set of numerical simulations confirm the derived analytic results concerning the plausibility of the model.
Strengths And Weaknesses
The main strength of the paper lies in the novelty of explicitly modeling the cross-transmission between human and animal populations.
The main weaknesses of the work are in the limited methodological contributions as well as lack of data-driven evaluation of the model. As described in more detail below, the evaluation could be improved by either including some type of real-world data for validating the model or at least providing a more thorough qualitative analysis (supported by synthetic/simulated data).
Clarity, Quality, Novelty And Reproducibility
Novelty
As mentioned above, the main novelty of the work lies in explicitly modeling the human and animal interaction; however, other studies have also incorporated such spillover modeling (see, e.g., the review by [Lloyd-Smith et al., Science 2009] for references). The remaining derivations and methodological approaches appear to be standard methods used in expected ways.
Quality
I am not an expert in the area, and I did not verify the derivations in detail. Nevertheless, they appeared generally correct.
The experiments and related discussion were rather limited. Of course, the main contribution of this work is the model itself. Still, there was no data-driven evaluation or even qualitative discussion about whether the results from the model were plausible. Publicly available data for at least the human side of such models seem to exist (e.g., the data used in [Purkayastha et al., BMC Infectious Disease 2021], among many others). While the exact numbers for the animals may not be available, they could be treated as hyperparameters of the model. Alternatively, simulated or synthetic data (which may adhere more or less closely to the assumptions in the proposed model) could be used to evaluate under which conditions the models are accurate. Without any such evaluation, though, it is difficult to tell if the models are meaningful.
Similarly, although the control measures were introduced and associated with a cost, the optimal policy was not really discussed. Additionally, the costs presumably give rise to a Pareto front of multiple possible policies with different tradeoffs among the different control measures. In principle, such tradeoffs could be used to guide policymakers. However, these were not at all discussed in the paper.
Finally, it is not clear to what extent the proposed model is specific to monkeypox or whether it would be applicable to most (or even all) other zoonotic diseases. Elaborating on this could help broaden the impact of this work.
Clarity
The paper has numerous grammatical mistakes. I do not believe they affect understanding the paper, but they do become distracting. The paper needs another round of editing.
The references are not consistently formatted.
Otherwise, the paper is generally clear, and the various model parameters are well-described. Some of the lengthy equations could likely be simplified or moved to an appendix.
Reproducibility
The submission does not include any supplementary code or data. Nevertheless, I believe results similar to those in the paper could be reproduced based on the given equations. |
1. What is the focus of the paper in terms of disease modeling?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Is there any concern regarding the suitability of the paper for a specific journal? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In their manuscript entitled, "Monkeypox 2022 with cross infection hypothesis via epidemiological model", the authors present a two-population (human and animal) compartmental model for describing the evolution of the 2022 monkeypox outbreak. Given various intervention options the authors present an investigation of optimal control policies for this system.
Strengths And Weaknesses
Considered across the spectrum of modelling approaches for all diseases, a two population compartmental model is at the lower level of complexity of those in use; its formal analysis by optimal control theory is less common and would be of interest to numerical modellers working in this space. The investigation here is of a rather theoretical nature, more so than practical: that is, the problem is formulated in a general sense, rather than in collaboration with an actual disease control program or with a confrontation against real world data. This would be suitable for a mathematical disease modelling journal; however, I do not see it as being suitable for ICLR as I cannot discern a novel contribution in the domain of deep learning (or an adjacent field).
Clarity, Quality, Novelty And Reproducibility
The clarity is good (although I would suggest removing the double arrow heads on the cross-infection figure), quality is high for a paper in its field, novelty is modest and reproducibility would be expected to be high. |
ICLR | Title
Mesh-free Eulerian Physics-Informed Neural Networks
Abstract
Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in the form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free methods, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation.
1 INTRODUCTION
Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs.
The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form
f(t, \mathbf{x} \mid \lambda) := \partial_t u(t, \mathbf{x}) + P(u \mid \lambda) = 0, \qquad (1)

where P is a non-linear operator parameterized by λ, and ∂t is the partial time derivative w.r.t. t ∈ [0, T]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ R^d. The PDE is subject to the initial condition g0,

u(0, \mathbf{x}) = g_0(\mathbf{x}) \qquad (2)

for x ∈ Ω, and boundary conditions g∂Ω,

u(t, \mathbf{x}) = g_{\partial\Omega}(\mathbf{x}) \qquad (3)

for x ∈ ∂Ω and t ∈ [0, T]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network given a small set of N noisy observations uobs,

u(t^{(i)}, \mathbf{x}^{(i)}) + \epsilon^{(i)} = u^{(i)}_{\mathrm{obs}} \qquad (4)
with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. This allows us to consider two important problem settings: if λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them in the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks using such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adaptations such as Parisi et al. (2003) extending it to non-steady and time-dependent settings.
Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x) in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf ), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e.
\Theta = \operatorname*{argmin}_{\Theta}\,\big[w_1 \mathcal{L}_{\mathrm{obs}}(X, t, \mathbf{u}_{\mathrm{obs}}, \Theta) + w_2 \mathcal{L}_f(\Theta) + w_3 \mathcal{L}_b(\Theta) + w_4 \mathcal{L}_{\mathrm{init}}(\Theta)\big], \qquad (5)
with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulation of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner.
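To make Eq. (5) concrete, a minimal PyTorch-style sketch of the combined objective is shown below. The network u_theta, the sampled batches, and the residual function pde_residual are placeholders; in particular, pde_residual must implement the specific operator P of Eq. (1) for the problem at hand.

```python
import torch

def l2(x):                      # mean squared L2 norm
    return (x ** 2).mean()

def pinn_loss(u_theta, pde_residual, obs, boundary, initial, colloc, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted PINN objective of Eq. (5); obs/boundary/initial are (t, x, target) batches."""
    t_o, x_o, u_o = obs
    loss_obs = l2(u_theta(t_o, x_o) - u_o)                 # data reconstruction
    t_b, x_b, g_b = boundary
    loss_b = l2(u_theta(t_b, x_b) - g_b)                   # boundary condition
    t_0, x_0, g_0 = initial
    loss_init = l2(u_theta(t_0, x_0) - g_0)                # initial condition
    t_c, x_c = colloc
    loss_f = l2(pde_residual(u_theta, t_c, x_c))           # PDE residual term, Eq. (6)
    return w[0] * loss_obs + w[1] * loss_f + w[2] * loss_b + w[3] * loss_init
```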
Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain,
\mathcal{L}_f(\Theta) = \frac{1}{|[0,T] \times \Omega|}\int_{t=0}^{T}\!\int_{\Omega} \|f_\Theta(t,\mathbf{x})\|_2^2\, d\mathbf{x}\, dt \;=\; \mathbb{E}_{p(t,\mathbf{x})}\big[\|f_\Theta(t,\mathbf{x})\|_2^2\big] \;\approx\; \frac{1}{n}\sum_{i=1}^{n}\|f_\Theta(t_i,\mathbf{x}_i)\|_2^2 \qquad (6)
with p(t,x) being supported on [0, T ]× Ω. The points {(t(j),x(j))}nj=1 ⊂ [0, T ]× Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function fΘ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed up by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = Rd. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network.
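For reference, the standard Monte Carlo estimate of Eq. (6) over a bounded region ΩB = [x_lo, x_hi] can be sketched as follows (PyTorch; residual_fn is a placeholder callable that evaluates the PDE residual f_Θ(t, x), typically via automatic differentiation).

```python
import torch

def pde_loss_mc(residual_fn, n, t_max, x_lo, x_hi):
    """Monte Carlo estimate of L_f in Eq. (6) from n collocation points drawn
    uniformly in the bounded region [0, t_max] x [x_lo, x_hi]."""
    t = torch.rand(n, 1) * t_max
    x = x_lo + (x_hi - x_lo) * torch.rand(n, 1)
    return (residual_fn(t, x) ** 2).mean()
```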
To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6. We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2).
Main contributions. The main contributions of this paper are as follows:
• We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance.
• In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physicsmotivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates such as the enforcement of boundary conditions.
• The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems.
• We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible.
2 RELATED WORK
Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism.
Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. (2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning a linear map between the uniform distribution and the distribution over the PDE loss by optimizing an optimal transport objective. By sampling uniformly and subsequently transforming these samples, it is attempted to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||22 for a more sample efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987).
However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, they do not resolve the issues of unknown boundaries and will furthermore be infeasible in higher dimensions.
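For comparison with the approach proposed later in this paper, the RAR refinement described above can be summarized in a short sketch (PyTorch; the uniform proposal region, the candidate count, and the number k of added points are assumptions of this illustration, and residual_fn is again a placeholder returning the per-point PDE residual).

```python
import torch

def rar_step(residual_fn, colloc, n_candidates, k, t_max, x_lo, x_hi):
    """One residual adaptive refinement (RAR) update, following Lu et al. (2021):
    evaluate the PDE residual on uniformly drawn candidates and append the k
    points with the largest residual to the current collocation set.
    colloc is an (m, 2) tensor of existing (t, x) collocation points."""
    t = torch.rand(n_candidates, 1) * t_max
    x = x_lo + (x_hi - x_lo) * torch.rand(n_candidates, 1)
    err = residual_fn(t, x).pow(2).detach().reshape(-1)
    top = torch.topk(err, k).indices
    new_points = torch.cat([t[top], x[top]], dim=1)
    return torch.cat([colloc, new_points], dim=0)
```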
Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). Interactions describing the dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v), such that
∫_∆x ∫_∆v Ψ(t,x,v) dv dx (7)
is the probability that a molecule with a velocity within ∆v = ∆v_1∆v_2∆v_3 occupies the volume ∆x = ∆x_1∆x_2∆x_3. Based on this distribution function, it is possible to define common quantities
as the (mass or particle) density, (local mean) velocity, and macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946).
3 PARTICLE-DENSITY PINNS
In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain.
The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995).
Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled.
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss L_f in Eq. 6 during standard PINN training. For this, L_f is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE:
∂_t ρ + v · (∇ρ) = 0. (8)
Figure 1 illustrates a one-dimensional case on the domain [0, T ] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ^(i) at different (spatially fixed) points in time and space {(t^(i), x^(i))}, on which a neural network ρ_Θ(t,x) is trained. For optimizing the standard PDE loss L_f as given in Eq. 6, we would require a bounded region Ω_B := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues:
1. Since the moving density occupies a small subset of Ω, uniformly distributed collocation points within Ω_B will enforce Eq. 8 in areas of low density. This results in insufficient regularization of ρ_Θ.
2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB. Too large boundaries would instead lead to poor regularization as this would worsen the sparsity problem in issue (1.).
In practice, most Eulerian PINN approaches opt for naively defining a sufficiently wide region Ω_B, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρ_Θ and (ii) adapt to the fluid movements without being restricted to a predefined mesh.
Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||f_Θ(t,x)||₂² with respect to the molecular distribution Ψ(t,x) introduced in the related work (Section 2):
L_pd(Θ) ≈ ∫_{t=0}^{T} ∫_Ω Ψ(t,x) ||f_Θ(t,x)||₂² dx dt. (9)
This completely removes the need to define ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density is proportional to the occupation probability Ψ(t,x) (they differ only in their normalization constant), we can estimate L_pd via samples drawn from the normalized particle density, which is denoted as ρ_N. For homogeneous fluids, this coincides with the normalized mass density.
In summary, we propose to draw collocation points from the normalized density:
(t_i, x_i) ∼ ρ_N(t,x) = (1/Z) ρ(t,x). (10)
The true particle positions and the density ρ_N are, however, unknown in practice. Instead, we have to rely on the learned density ρ_Θ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by q_Θ(t,x) = (1/Z′) ρ_Θ(t,x) with support on [0, T ] × Ω. The PDE loss is then defined as the expectation w.r.t. q_Θ(t,x):
L_pd(Θ) = E_{q_Θ(t,x)}[ ||f_Θ(t,x)||₂² ] = ∫_{t=0}^{T} ∫_Ω q_Θ(t,x) ||f_Θ(t,x)||₂² dx dt. (11)
In order to approximate this integral, samples need to be drawn from q_Θ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z′ is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = R^d.
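As a concrete illustration, the following minimal PyTorch-style sketch shows how such a training step could look; the function and variable names (rho_net, f_theta, sampler, etc.) are our own placeholders rather than the interface of the released code, and the sampler is assumed to return collocation points distributed according to q_Θ (e.g. via the MCMC schemes discussed in Section 4).

import torch

def pdpinn_training_step(rho_net, f_theta, sampler, optimizer, t_obs, x_obs, rho_obs, w_pde):
    # One gradient step of a pdPINN: the PDE loss of Eq. 11 is estimated by a
    # plain average of the squared residual over collocation points drawn from
    # q_Theta. Because the points are samples from the (unnormalized) density,
    # the constant Z' never has to be computed; the dependence of the sampling
    # distribution on Theta is not differentiated through.
    t_col, x_col = sampler(rho_net)                      # detached samples ~ q_Theta
    loss_obs = (rho_net(t_obs, x_obs) - rho_obs).pow(2).mean()
    loss_pde = f_theta(t_col, x_col).pow(2).mean()       # Monte Carlo estimate of Eq. 11
    loss = loss_obs + w_pde * loss_pde
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()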
Applicability of pdPINNs. Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation 8 can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through
∂_t ρ + ∇ · (ρv) = 0. (12)
Another common physical process that is suited for our approach is diffusion, such as in the Heat Equation, where local interactions of particles give rise to the following PDE (as established by Fick’s second law):
∂_t T − α∇²T = 0, (13)
where T denotes the temperature interpreted as density, α the thermal (or mass) diffusivity, and ∇² the Laplacian operator. By introducing additional constraints to the diffusion and mass-conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced
hydrodynamic equations. Other possible applications involve Maxwell’s equations for conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs.
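For the continuity equation (Eq. 12), the residual f_Θ that enters the losses above can be evaluated with automatic differentiation. The following PyTorch sketch is only meant to illustrate this; rho_net and v_net are assumed to map a batch of times (n, 1) and positions (n, d) to the predicted density (n, 1) and velocity (n, d).

import torch

def continuity_residual(rho_net, v_net, t, x):
    # f_Theta(t, x) = d_t rho + div(rho * v), cf. Eq. 12.
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    rho = rho_net(t, x)                                   # (n, 1)
    flux = rho * v_net(t, x)                              # (n, d)
    drho_dt = torch.autograd.grad(rho.sum(), t, create_graph=True)[0]
    div_flux = torch.zeros_like(rho)
    for j in range(flux.shape[1]):                        # divergence: sum_j d(flux_j)/dx_j
        dflux_j = torch.autograd.grad(flux[:, j].sum(), x, create_graph=True)[0]
        div_flux = div_flux + dflux_j[:, j:j + 1]
    return drho_dt + div_flux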
4 MODEL AND IMPLEMENTATION
A wide range of different network architectures and optimization strategies for PINNs have emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this submission we will use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5).
Finite total density. For reformulating the predicted density ρ_Θ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can for example be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees the output to be within a pre-specified range [0, c_max]. The integral over R^d can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃_Θ denote the output of the last layer of our fully connected neural network and p_gauss(x) = N(x; µ, Σ); then we predict the density ρ_Θ as
ρ_Θ(t,x) = p_gauss(x) · g(ρ̃_Θ(t,x)²) ≤ c_max · p_gauss(x). (14)
In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The used mean µ and covariance Σ are maximum likelihood estimates based on the observations x, i.e. the sample mean x̄ and covariance Σ̄ of the sensor locations. To allow more flexibility in the network, we add a scaled identity matrix to the covariance Σ = Σ̄ + c · I , which can be set to a large value for solving PDEs when only initial conditions, but no observations, are available.
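A minimal sketch of such a density head is given below, assuming a backbone network that maps (t, x) to an unconstrained scalar; the particular bounded activation (here a tanh of the squared output) is our illustrative choice, not necessarily the one used in the experiments.

import torch
import torch.nn as nn

class DensityHead(nn.Module):
    # Implements Eq. 14: rho_Theta(t, x) = p_gauss(x) * g(rho_tilde(t, x)^2),
    # which is non-negative, bounded by c_max * p_gauss(x), and integrable over R^d.
    def __init__(self, backbone, mu, cov, c_max=1e3):
        super().__init__()
        self.backbone = backbone
        self.c_max = c_max
        self.gauss = torch.distributions.MultivariateNormal(mu, cov)

    def forward(self, t, x):
        rho_tilde = self.backbone(torch.cat([t, x], dim=-1))      # (n, 1)
        bounded = self.c_max * torch.tanh(rho_tilde.pow(2))       # g(rho_tilde^2) in [0, c_max)
        envelope = self.gauss.log_prob(x).exp().unsqueeze(-1)     # p_gauss(x), (n, 1)
        return envelope * bounded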
Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offer the ability to use a specialized sampler for the considered problem, if the need may arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler, and alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be directly used as a substitute.
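As one possible instantiation, the sketch below runs a plain random-walk Metropolis-Hastings chain over (t, x) targeting the unnormalized ρ_Θ; it is a simplified stand-in for the samplers used in the experiments (no parallel tempering, no constraint-handling transformation), and proposals leaving a bounded domain would in practice be handled by the bijective transformation mentioned above or by rejection.

import torch

def metropolis_hastings(rho_net, init_tx, n_steps=500, step_size=0.05):
    # Random-walk MH targeting rho_Theta(t, x); the unknown normalization
    # constant cancels in the acceptance ratio. init_tx: (n_chains, 1 + d).
    tx = init_tx.clone()
    with torch.no_grad():
        log_p = torch.log(rho_net(tx[:, :1], tx[:, 1:]) + 1e-12).squeeze(-1)
        for _ in range(n_steps):
            prop = tx + step_size * torch.randn_like(tx)
            log_p_prop = torch.log(rho_net(prop[:, :1], prop[:, 1:]) + 1e-12).squeeze(-1)
            accept = torch.rand_like(log_p) < torch.exp(log_p_prop - log_p)
            tx = torch.where(accept.unsqueeze(-1), prop, tx)
            log_p = torch.where(accept, log_p_prop, log_p)
    return tx[:, :1], tx[:, 1:]   # collocation points (t, x) ~ q_Theta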
For details regarding the samplers used and implementation we refer to the Experiments section 5 and Appendix section A.1.
5 EXPERIMENTS
In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), as well as the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Despite the term uniform sampling, we rely in all our experiments on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points is used. For the pdPINNs we consider multiple MCMC schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods.
The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and Inverse Transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix with the code being provided in the supplementary material.
5.1 MASS CONSERVATION FOR SIMULATED PARTICLES
As a challenging prediction task we consider a setting motivated by the real world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or more generally the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. Another disjoint set of sensors is provided for the validation set while the test performance is evaluated on a grid. The birds-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In the Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary.
For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020) ρ_Θ1(t,x) and v_Θ2(t,x) are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of collocation points (256 to 65536). In each setting the PDE weights w_2 (see Eq. 5) were selected with a grid search based on the highest 1st-quartile R² on a validation set. The resulting box-plots of the test R² are provided in Figure 3, where the “Baseline” corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512), pdPINNs achieve results that uniform sampling only reaches with orders of magnitude more points (32768). Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of an NVIDIA Titan X GPU, other sampling strategies at best achieve results comparable to pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN with 2048 collocation points.
As an additional experiment we simplified the setting by projecting the data onto the xy-plane, i.e. the birds-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in detail in Section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower-dimensional space is much easier to explore with uniform proposals.
5.2 HEAT EQUATION
We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided sensor measurements of the temperature; for further details see the Appendix section A.4.
Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figure 4b and 4c, respectively, with the ground truth in Figure 4a. We can observe that the uniform sampling strategy does not allow to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably. We also evaluate quantitatively the performance of the two approaches in terms of the R2 test error over the predicted temperature and illustrate the results in the Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points.
5.3 FOKKER-PLANCK EQUATION
For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0, which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process:
dX_t = µ(t, X_t) dt + σ(t, X_t) dW_t (15)
with known drift µ(X_t, t) and diffusion coefficient D(X_t, t) = σ²(X_t, t)/2. The FP equation for the probability density p(t, x) of the random variable X_t is then given by
∂_t p(t, x) = −∂_x [µ(t, x) p(t, x)] + ∂²_x [D(t, x) p(t, x)]. (16)
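In this one-dimensional setting the corresponding residual can again be formed with automatic differentiation; the following sketch (with placeholder names, and with µ and D passed in as known functions) shows one way to do so.

import torch

def fokker_planck_residual(p_net, mu_fn, D_fn, t, x):
    # Residual of Eq. 16: d_t p + d_x[mu p] - d_xx[D p], for scalar x.
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    p = p_net(t, x)
    dp_dt = torch.autograd.grad(p.sum(), t, create_graph=True)[0]
    d_mup_dx = torch.autograd.grad((mu_fn(t, x) * p).sum(), x, create_graph=True)[0]
    dDp_dx = torch.autograd.grad((D_fn(t, x) * p).sum(), x, create_graph=True)[0]
    d2Dp_dx2 = torch.autograd.grad(dDp_dx.sum(), x, create_graph=True)[0]
    return dp_dt + d_mup_dx - d2Dp_dx2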
We train a network to predict the (probability) density p_Θ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p || p_Θ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as “p(t, x) as sampler”), offering a “best-case scenario” for pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set. Figure 5a shows the evolution of the KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for different methods are given in the Appendix section A.5.
6 CONCLUSION
In this work, we introduced a general extension to PINNs applicable to a great variety of problem settings involving physics-based regularization of neural networks. In order to overcome the limitations of classical mesh-based Eulerian PINNs, we introduce a novel PDE loss that is defined with respect to the particle density and applies to rather general types of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derive an efficient and easy-to-implement improvement for providing a more appropriate regularization objective in PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermo-dynamic problems, as well as more general applications of the Fokker-Planck equations.
A APPENDIX
A.1 BACKGROUND SAMPLING FOR PDPINNS
At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution:
(t, x) ∼ p_bg(t,x) = p(t) p_bg(x|t) (17)
with p(t) = U(0, T ). To avoid introducing a mesh, we could rely on the previously estimated Gaussian distribution introduced in Section 4, i.e. p_bg(x|t) = p_gauss(x). As a second alternative approach, we consider random points in the convex hull of {x^(i)}_{i=1}^{N}, formed as convex combinations of c data points summarized as the rows of a matrix Z ∈ R^{c×d}. This leads to x = mZ with weights m ∈ R^c drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1), as sketched below. Of course, a uniform sampling mechanism on a defined region is also suitable, and the definitive choice depends on the data and PDE at hand. However, we found that all of these methods work well in practice.
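A small NumPy sketch of this second variant is given below (the helper name and the choice of c are ours); each background location is a Dirichlet-weighted convex combination of c randomly chosen observed sensor locations.

import numpy as np

def convex_hull_background_samples(X_obs, n_samples, c=4, rng=None):
    # X_obs: (N, d) observed locations; returns (n_samples, d) points x = m Z
    # with m ~ Dir(alpha = 1), i.e. points inside the convex hull of the data.
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, X_obs.shape[0], size=(n_samples, c))
    Z = X_obs[idx]                                    # (n_samples, c, d)
    m = rng.dirichlet(np.ones(c), size=n_samples)     # (n_samples, c)
    return np.einsum('nc,ncd->nd', m, Z)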
We initially draw all samples from the background distribution and then slowly increase the proportion of samples obtained from the particle density, as we found that retaining some background samples slightly helps training.
A.2 IMPLEMENTATION OF RAR AND OT-RAR
For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). As originally proposed, both methods rely on consecutive refinements of an initial fixed grid: the number of collocation points is steadily increased, and collocation points, once added, are never removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points, and in addition we regularly resample them. This leads to slightly modified versions of the methods that are similar in spirit to the originals. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input f_Θ refers to the PDE residual approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.
Algorithm 1 Adapted RAR
Input: f_Θ, uniform distribution U_B, number of collocation points k, previous collocation points X_prev.
  X_prop ← [x_1, x_2, . . . , x_k]^T with x_i ∼ U_B ▷ Sample proposals
  X_comb ← concat(X_prev, X_prop) ▷ Concatenate old and new points
  X_new ← topk(X_comb, ||f_Θ(X_comb)||₂², k) ▷ Keep top k proposed points based on f_Θ
Output: X_new
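A possible NumPy realization of this adapted RAR step is sketched below; sample_uniform and f_theta are placeholders for the problem-specific uniform proposal and the PDE residual of the network.

import numpy as np

def adapted_rar(f_theta, sample_uniform, X_prev, k):
    # Propose k uniform points, pool them with the previous collocation points,
    # and keep the k points with the largest squared PDE residual.
    X_prop = sample_uniform(k)
    X_comb = np.concatenate([X_prev, X_prop], axis=0)
    residual = np.square(f_theta(X_comb))
    scores = residual.reshape(len(X_comb), -1).sum(axis=1)
    return X_comb[np.argsort(scores)[-k:]]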
A.3 EXPERIMENTS: CONSERVATION OF MASS
In the supplementary material we provide code in Python for the data generation and for the pdPINN model. Below we provide the details for all the experiments we conducted. Furthermore, we provide short videos showing the predicted density movements for each different approach. More details on this can be found in the README.html provided in the supplementary files.
All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on a RTX8000 with 48GB VRAM. Up to 16 Titan X GPUs could be used in parallel, or 4 RTX8000. In most settings, training in each experiment took less than 10 minutes.
Algorithm 2 Adapted OT-RAR
Input: f_Θ, uniform distribution U_B, number of collocation points k, number of points for the empirical distribution j < 2k, previous collocation points X_prev.
  X_prop ← [x_1, x_2, . . . , x_k]^T with x_i ∼ U_B ▷ Sample proposals
  X_comb ← concat(X_prev, X_prop) ▷ Concatenate old and new points
  X_target ← topk(X_comb, ||f_Θ(X_comb)||₂², j) ▷ j samples for the target empirical distribution
  X_source ← [x_1, x_2, . . . , x_j]^T with x_i ∼ U_B ▷ j samples for the source empirical distribution
  M_OT ← LinOT(X_source, X_target) ▷ Obtain linear operator that maps to the target distribution
  X_new ← [x_1, x_2, . . . , x_k]^T with x_i ∼ U_B ▷ Sample uniformly
  X_map ← M_OT(X_new) ▷ Map samples to the target distribution
Output: X_map
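The LinOT step can, for instance, be realized with the closed-form linear map between the Gaussians fitted to the source and target samples (Knott & Smith, 1984); the sketch below is such a stand-in implementation, whereas the experiments rely on the PyOT implementation referenced above.

import numpy as np
from scipy.linalg import sqrtm

def linear_ot_map(X_source, X_target, eps=1e-8):
    # Linear Monge map T(x) = mu_t + A (x - mu_s) between Gaussians fitted to
    # the two empirical distributions, with
    # A = S_s^{-1/2} (S_s^{1/2} S_t S_s^{1/2})^{1/2} S_s^{-1/2}.
    mu_s, mu_t = X_source.mean(axis=0), X_target.mean(axis=0)
    d = X_source.shape[1]
    S_s = np.cov(X_source.T) + eps * np.eye(d)
    S_t = np.cov(X_target.T) + eps * np.eye(d)
    S_s_half = np.real(sqrtm(S_s))
    S_s_half_inv = np.linalg.inv(S_s_half)
    A = S_s_half_inv @ np.real(sqrtm(S_s_half @ S_t @ S_s_half)) @ S_s_half_inv
    return lambda X: mu_t + (X - mu_s) @ A.T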
A.3.1 ADDITIONAL EXPERIMENTAL RESULTS
3D Setting. Figure A.6 showcases the projection of the density onto the z-axis for a random run of the OT-RAR method and the Metropolis-Hastings-based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings-based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points between the different sampling methods.
2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+Time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, which are listed in the following sections below.
A.3.2 DATA GENERATION
Here we provide a more detailed description for the generated data, namely the used velocity field, and the method for obtaining simulated “radar measurements”.
Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R² → R and the z-component of a vector potential a : R² → R. Through the Helmholtz decomposition¹ we can construct the velocity field v_xy : R² → R²:
v_xy(x, y) = −∇Φ + [ ∂a/∂y, −∂a/∂x ]^T. (18)
For both experiments the following fields were used:
Φ(x, y) = −(1/2) (x − 2) · (y − 2), (19)
a(x, y) = −(1/5) exp( −(2x/3)² − (2y/3)² ). (20)
The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a non-steady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]:
v_xyt(t, x, y) = v_xy(x, y) · ( (3/2) |sin( (2/3) π t )| + 0.05 ). (21)
The z (altitude) component of the velocity only depends on time and is given by:
v_z(t) = 1.6 · sin( (4/3) π t ). (22)
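For illustration, the velocity field of Eqs. 18–22 could be constructed symbolically as in the following SymPy sketch (this is our reconstruction, not the exact generation script from the supplementary).

import sympy as sp

x, y, t = sp.symbols('x y t')
Phi = -sp.Rational(1, 2) * (x - 2) * (y - 2)                       # Eq. 19
a = -sp.Rational(1, 5) * sp.exp(-(2 * x / 3)**2 - (2 * y / 3)**2)  # Eq. 20

# Helmholtz construction of the xy-velocity (Eq. 18).
v_x = -sp.diff(Phi, x) + sp.diff(a, y)
v_y = -sp.diff(Phi, y) - sp.diff(a, x)

# Time modulation (Eq. 21) and the altitude component (Eq. 22).
amp = sp.Rational(3, 2) * sp.Abs(sp.sin(sp.Rational(2, 3) * sp.pi * t)) + sp.Rational(1, 20)
v_x_t, v_y_t = amp * v_x, amp * v_y
v_z = sp.Rational(8, 5) * sp.sin(sp.Rational(4, 3) * sp.pi * t)

# Numeric evaluation, e.g. for propagating parcels through the field.
v_xy_fn = sp.lambdify((t, x, y), (v_x_t, v_y_t), 'numpy')
v_z_fn = sp.lambdify(t, v_z, 'numpy')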
Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were simulated using the above constructed velocity field. Overall, the paths of the roughly 240000 parcels were simulated using a basic backward Euler scheme.
¹This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis, as in a_3d = [0, 0, a]^T. The full decomposition is commonly written as v_3d = −∇Φ_3d + ∇ × a_3d.
Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities. For the training data additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, data measurements of density and velocity are obtained by 132 sensors on the xy-plane, within region [−3, 3]2 at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used.
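A simplified sketch of this measurement model is shown below (the per-altitude-layer binning of the 3D setting is omitted, and all names are placeholders).

import numpy as np

def radar_measurement(positions, velocities, sensor_xy, radius, noise_std, rng):
    # positions: (n, 3) parcel positions, velocities: (n, 3) parcel velocities.
    # The sensor counts parcels within `radius` of its xy-location, reports
    # count / area as the density and the mean parcel velocity, plus Gaussian noise.
    inside = np.linalg.norm(positions[:, :2] - sensor_xy, axis=1) < radius
    area = np.pi * radius**2
    density = inside.sum() / area + rng.normal(0.0, noise_std)
    velocity = velocities[inside].mean(axis=0) if inside.any() else np.zeros(3)
    return density, velocity + rng.normal(0.0, noise_std, size=3)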
A.3.3 ARCHITECTURE AND TRAINING
In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) are fully-connected layers with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine-frequency of 12 for the 2D setting, and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data and we thus recommend an adjustment of this hyperparameter for usage in PINNs.
For training the network, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8×10⁻⁴ (2D setting) or 10⁻⁴ (3D setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds.
A.4 EXPERIMENTS: HEAT EQUATION
The dataset for the heat equation experiment was generated by numerically solving the heat equation through the finite difference method, precisely the Forward Time, Centered Space (FTCS) approximation (Recktenwald, 2004). We used Dirichlet boundary conditions in the form of zero temperature on a square boundary far away from the relevant domain. These boundary conditions are not provided to the PINNs, making the setting slightly more difficult. Overall, the dataset is composed of 1000 training points, 1971120 test points and 492780 validation points. We made sure training points contained enough information about the initial condition, i.e. we selected a sufficient number of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly, and afterwards 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. Differently from previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10⁻⁴ and decreasing it until reaching a minimum value of 10⁻⁵. Training was terminated through early stopping, as soon as the validation R² did not improve for more than 3000 epochs.
Additional results. Figure A.9 illustrates the test R2 of the predicted T averaged over 20 different seeds. Error bars correspond to 95% confidence interval for the mean estimation, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored. As in previous settings, we show that with few samples (16) the regularization enforced by the PDE loss is not strong
enough, leading to comparable results in both approaches (as expected). Hence PINNs and pdPINNs show similar results in this regime. However, as the number of samples increases (32-64-128-256), the PDE loss enforced by the proposed pdPINNs quickly and steadily outperforms uniform sampling. Lastly, we also verified that in the limit of high samples (512-1024) the two sampling strategies converge, as in such a low-dimensional domain the uniform samples fully and densely covers the considered area. This, again, is in line with the observed results of the other experiments.
A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW
Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical CDF estimated via uniform samples drawn over the whole domain. This method does not require building up a Markov chain, and is thus very fast, but only works well in low dimensions.
More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow probability:
I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)
A.5.1 SETTING AND ANALYTICAL SOLUTION
We consider the following setting over the time interval [t_0, t_n] = [−1, 1], with drift function µ, noise σ and initial particle positions p(x | t = t_0) given by
µ(X_t, t) = µ(t) = sin(10t), (23)
σ(X_t, t) = σ = 0.06, (24)
p(x | t = t_0) = N(0, 0.02² · I_d). (25)
The PDE has an analytical solution (cf. Särkkä & Solin (2019)) which is given by
p(x | t) = N(µ_s(t), σ_s²(t)), (26)
p(t) = U(t_0, t_n), (27)
µ_s(t) = −cos(10t)/10 + cos(10)/10, (28)
σ_s²(t) = 0.0036 · t + 0.004. (29)
For evaluating the deviation of our prediction from the solution, we compute the KL divergence between the analytical solution and the network approximation, KL(p(x, t) || p̂_Θ(x, t)), by sampling 10000 points from the true p(x, t).
A.5.2 SETUP
We use a SIREN network and additionally sample (5000) collocation points at the initial time-step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with a NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor.
A.5.3 WALL TIME
The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the inverse transform sampling used here achieves a similar speed. | 1. What is the focus and contribution of the paper on physics-informed neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other particle-based inference methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the limited set of experimental evaluations and the scalability of the approach to more complex settings?
5. Do you have any suggestions for improving the paper, such as including ablation analysis/comparison to particle-based inference approaches? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose an extension to the physics-informed neural network approach, which utilizes a particle-based approach to sample the data space more efficiently and to capture boundary conditions more accurately with fewer samples than traditional collocation points require. The core contribution of the paper is the injection of a microscopic fluid-dynamics view, rooted in smoothed particle hydrodynamics, into the mesh-free PINN approach to achieve more effective adaptive sampling.
Strengths And Weaknesses
Strengths:
Strong embedding of the proposed approach into the current state of the literature with illustrative parallels being drawn to previous PINN approaches.
Well-motivated approach with a clear physical intuition behind it.
Weaknesses:
As the approach is particle-based I would have expected an ablation analysis/comparison to particle-based inference approaches such as Sequential Monte-Carlo. The authors only consider inverse transform sampling, Metropolis Hastings, and Hamiltonian Monte-Carlo.
Limited set of experimental evaluations with fairly simple physical systems being considered. Would the proposed approach also work on the more difficult Korteweg-de-Vries equations? Would it work with Burgers equations, and how would it compare to the residual adaptive refinement approach of Lu et al. there? Would it scale to even more complex settings, such as the ones explored in XPINNs?
Clarity, Quality, Novelty And Reproducibility
The clarity of the exposition is exemplary, with a very strong relation back to prior work in the field, and placing the proposed extension in the right, nuanced context. With previous approaches to the refinement problem in PINNs already utilizing adaptive refinement, and the previous state-of-the-art residual adaptive refinement utilizing Monte Carlo integration at its core, one is led to question just how close to the proposed approach a particle-based sampler within residual adaptive refinement would be.
In some of the arguments there is some imprecision, such as the argument that only certain systems can be viewed as particle-based systems. In theory every physical system can be viewed as an artificial particle-based system, but that viewpoint may not always be conducive to the search for a solution, or allow computing the desired quantities of interest.
There are furthermore overly expansive claims which are not supported by the provided experimental evaluation. To cite "As we have demonstrated applicability to a wide spectrum of PDEs..", I believe this claim is not supported: while the experiments are chosen from different fields, they still cover only a fraction of the potential PDE dynamics, and as already pointed out above there exist a number of PDEs in the PINN literature which are significantly more difficult.
ICLR | Title
Mesh-free Eulerian Physics-Informed Neural Networks
Abstract
Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in the form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free methods, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a non-negative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation.
1 INTRODUCTION
Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs.
The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form
f(t,x|λ) := ∂_t u(t,x) + P(u|λ) = 0, (1)
where P is a non-linear operator parameterized by λ, and ∂_t is the partial time derivative w.r.t. t ∈ [0, T ]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ R^d. The PDE is subject to the initial condition g_0,
u(0,x) = g_0(x) (2)
for x ∈ Ω, and boundary conditions g_∂Ω,
u(t,x) = g_∂Ω(x) (3)
for x ∈ ∂Ω and t ∈ [0, T ]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network, given a small set of N noisy observations u_obs,
u(t^(i), x^(i)) + ϵ^(i) = u_obs^(i) (4)
with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. This allows us to consider the following two important problem settings: If λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them into the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings.
Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x) in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf ), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e.
Θ = argmin_Θ [ w_1 L_obs(X, t, u_obs, Θ) + w_2 L_f(Θ) + w_3 L_b(Θ) + w_4 L_init(Θ) ], (5)
with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulation of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner.
Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain,
L_f(Θ) = (1 / |[0, T ] × Ω|) ∫_{t=0}^{T} ∫_Ω ||f_Θ(t,x)||₂² dx dt = E_{p(t,x)}[ ||f_Θ(t,x)||₂² ] ≈ (1/n) Σ_{i=1}^{n} ||f_Θ(t_i, x_i)||₂² (6)
with p(t,x) being supported on [0, T ] × Ω. The points {(t^(j), x^(j))}_{j=1}^{n} ⊂ [0, T ] × Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function f_Θ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed up by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = R^d. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network.
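For reference, a minimal sketch of this standard estimate of Eq. 6 with quasi-uniform (Sobol) collocation points on a bounded box is given below; f_theta denotes the PDE residual of the network and all names are placeholders.

import torch
from torch.quasirandom import SobolEngine

def uniform_collocation_loss(f_theta, t_max, lower, upper, n_points):
    # Draw n_points quasi-uniformly from [0, t_max] x [lower, upper] and
    # average the squared PDE residual over them (Eq. 6).
    d = lower.shape[0]
    u = SobolEngine(dimension=d + 1, scramble=True).draw(n_points)
    t_col = u[:, :1] * t_max
    x_col = lower + u[:, 1:] * (upper - lower)
    return f_theta(t_col, x_col).pow(2).mean()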
To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6. We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2).
Main contributions. The main contributions of this paper are as follows:
• We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance.
• In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physics-motivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates, such as the enforcement of boundary conditions.
• The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems.
• We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible.
2 RELATED WORK
Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism.
Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. (2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning a linear map between the uniform distribution and the distribution over the PDE loss by optimizing an optimal transport objective. By sampling uniformly and subsequently transforming these samples, it is attempted to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||22 for a more sample efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987).
However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, they do not resolve the issues of unknown boundaries and will furthermore be infeasible in higher dimensions.
Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). Interactions describing the dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v), such that
∫_∆x ∫_∆v Ψ(t,x,v) dv dx (7)
is the probability that a molecule with a velocity within ∆v = ∆v_1∆v_2∆v_3 occupies the volume ∆x = ∆x_1∆x_2∆x_3. Based on this distribution function, it is possible to define common quantities
as the (mass or particle) density, (local mean) velocity, and macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946).
3 PARTICLE-DENSITY PINNS
In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain.
The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995).
Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled.
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss L_f in Eq. 6 during standard PINN training. For this, L_f is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE:
∂_t ρ + v · (∇ρ) = 0. (8)
Figure 1 illustrates a one-dimensional case on the domain [0, T ] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ^(i) at different (spatially fixed) points in time and space {(t^(i), x^(i))}, on which a neural network ρ_Θ(t,x) is trained. For optimizing the standard PDE loss L_f as given in Eq. 6, we would require a bounded region Ω_B := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues:
1. Since the moving density occupies a small subset of Ω, uniformly distributed collocation points within ΩB will enforce Eq. 8 in areas with low-density. This results in insufficient regularization of ρΘ.
2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB. Too large boundaries would instead lead to poor regularization as this would worsen the sparsity problem in issue (1.).
In practice, most Eulerian PINNs approaches opt for naively defining a sufficiently wide region ΩB, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρΘ and (ii) adapt to the fluid movements without being restricted to a predefined mesh.
Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||f_Θ(t,x)||₂² with respect to the molecular distribution Ψ(t,x) introduced in the related work (Section 2):
L_pd(Θ) ≈ ∫_{t=0}^{T} ∫_Ω Ψ(t,x) ||f_Θ(t,x)||₂² dx dt. (9)
This completely removes the need of defining ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density corresponds directly to the occupation probability of a molecule Ψ(t,x) with a changed normalization constant, we can estimate Lpd via samples drawn from the normalized particle density, which is denoted as ρN . For homogeneous fluids, this coincides with the normalized mass density.
In summary, we propose to draw collocation points from the normalized density:
(t_i, x_i) ∼ ρ_N(t,x) = (1/Z) ρ(t,x). (10)
The true particle positions and the density ρ_N are, however, unknown in practice. Instead, we have to rely on the learned density ρ_Θ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by q_Θ(t,x) = (1/Z′) ρ_Θ(t,x) with support on [0, T ] × Ω. The PDE loss is then defined as the expectation w.r.t. q_Θ(t,x):
L_pd(Θ) = E_{q_Θ(t,x)}[ ||f_Θ(t,x)||₂² ] = ∫_{t=0}^{T} ∫_Ω q_Θ(t,x) ||f_Θ(t,x)||₂² dx dt. (11)
In order to approximate this integral, samples need to be drawn from qΘ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = Rd.
Applicability of pdPINNs. Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation 8 can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through
∂tρ+∇ · (ρv) = 0. (12)
Another common physical process that is suited for our approach is diffusion, such as in the Heat Equation, where local interactions of particles give rise to the following PDE (as established by Fick’s second law):
∂_t T − α∇²T = 0, (13)
where T denotes the temperature interpreted as density, α the thermal (or mass) diffusivity, and ∇² the Laplacian operator. By introducing additional constraints to the diffusion and mass-conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced
hydrodynamic equations. Other possible applications involve Maxwell’s equations for conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs.
4 MODEL AND IMPLEMENTATION
A wide range of different network architectures and optimization strategies for PINNs have emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this submission we will use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5).
Finite total density. For reformulating the predicted density ρΘ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can for example be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees the output to be within a pre-specified range [0, cmax]. The integral Rd can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃Θ denote the output of the last layer of our fully connected neural network and pgauss(x) = N (x;µ,Σ), then we predict the density ρΘ as
ρΘ(t,x) = pgauss(x) · g(ρ̃Θ(t,x)²) ≤ cmax · pgauss(x). (14)
In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The used mean µ and covariance Σ are maximum likelihood estimates based on the observations x, i.e. the sample mean x̄ and covariance Σ̄ of the sensor locations. To allow more flexibility in the network, we add a scaled identity matrix to the covariance Σ = Σ̄ + c · I , which can be set to a large value for solving PDEs when only initial conditions, but no observations, are available.
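A minimal sketch of this density head is given below, assuming a hypothetical fully connected `backbone` that outputs ρ̃Θ and using cmax · tanh(·/cmax) as one possible choice for the bounded activation g; the names are illustrative.

```python
import torch
import torch.nn as nn

class DensityHead(nn.Module):
    """Sketch of Eq. 14: rho(t, x) = p_gauss(x) * g(rho_tilde(t, x)^2).
    `backbone`, mu and cov are illustrative placeholders (see text)."""
    def __init__(self, backbone, mu, cov, c_max=1e3):
        super().__init__()
        self.backbone = backbone
        self.gauss = torch.distributions.MultivariateNormal(mu, cov)
        self.c_max = c_max

    def forward(self, t, x):
        rho_tilde = self.backbone(torch.cat([t, x], dim=-1))
        # g maps the squared output into [0, c_max]; tanh is one possible choice
        bounded = self.c_max * torch.tanh(rho_tilde ** 2 / self.c_max)
        return self.gauss.log_prob(x).exp().unsqueeze(-1) * bounded
```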
Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offer the ability to use a specialized sampler for the considered problem, if the need may arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler, and alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be directly used as a substitute.
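For illustration, a bare-bones random-walk Metropolis-Hastings step for drawing collocation points from the unnormalized ρΘ could look as follows; `log_rho` is a hypothetical handle on log ρΘ, and burn-in, thinning and step-size tuning are omitted.

```python
import torch

@torch.no_grad()
def mh_collocation_points(log_rho, init, n_steps=500, step_size=0.1):
    """Random-walk Metropolis-Hastings sketch for sampling (t, x) ~ rho_Theta (unnormalized).
    init: tensor of shape (n_chains, 1 + d) holding initial (t, x) states."""
    state = init.clone()
    log_p = log_rho(state)
    for _ in range(n_steps):
        proposal = state + step_size * torch.randn_like(state)
        log_p_prop = log_rho(proposal)
        accept = torch.rand_like(log_p).log() < (log_p_prop - log_p)
        state = torch.where(accept.unsqueeze(-1), proposal, state)
        log_p = torch.where(accept, log_p_prop, log_p)
    return state  # one sample per chain; burn-in/thinning omitted for brevity
```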
For details regarding the samplers used and implementation we refer to the Experiments section 5 and Appendix section A.1.
5 EXPERIMENTS
In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), and the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Despite the term uniform sampling, we rely in all our experiments on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points are used. For the pdPINNs we consider multiple MCMC schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods.
The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and Inverse Transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix with the code being provided in the supplementary material.
5.1 MASS CONSERVATION FOR SIMULATED PARTICLES
As a challenging prediction task we consider a setting motivated by the real world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or more generally the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. Another disjoint set of sensors is provided for the validation set while the test performance is evaluated on a grid. The birds-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In the Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary.
For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020) ρΘ1(t,x) and vΘ2(t,x) are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of collocation points (256 to 65536). In each setting the PDE-weights w2 (see Eq. 5) were selected with a grid search based on the highest 1st quartile R2 in a validation set. The resulting box-plots of the test R2 are provided in Figure 3, where the “Baseline” corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512) pdPINNs achieve results that require orders of magnitude more points (32768) for uniform sampling. Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of a NVIDIA Titan X GPU, other sampling strategies at best achieve comparable results with pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN 2048 samples.
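For reference, the continuity-equation residual (Eq. 12) coupling the two networks can be sketched as follows; `rho_net` and `v_net` stand in for the two SIREN networks, and the exact shapes are assumptions for illustration.

```python
import torch

def continuity_residual(rho_net, v_net, t, x):
    """Residual of Eq. 12: d(rho)/dt + div(rho * v), evaluated with autograd.
    rho_net and v_net are hypothetical handles on the two networks."""
    t = t.requires_grad_(True)                    # (n, 1)
    x = x.requires_grad_(True)                    # (n, d)
    rho = rho_net(t, x)                           # (n, 1)
    v = v_net(t, x)                               # (n, d)
    flux = rho * v                                # (n, d)
    drho_dt = torch.autograd.grad(rho.sum(), t, create_graph=True)[0].squeeze(-1)
    div = 0.0
    for i in range(x.shape[-1]):                  # divergence of the flux
        div = div + torch.autograd.grad(flux[:, i].sum(), x, create_graph=True)[0][:, i]
    return drho_dt + div
```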
As an additional experiment we simplified the setting by projecting the data onto the xy-plane, i.e. the birds-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in detail in section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower dimensional space is much easier to explore with uniform proposals.
5.2 HEAT EQUATION
We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided sensor measurements of the temperature; for further details see the Appendix section A.4.
Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figure 4b and 4c, respectively, with the ground truth in Figure 4a. We can observe that the uniform sampling strategy does not allow to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably. We also evaluate quantitatively the performance of the two approaches in terms of the R2 test error over the predicted temperature and illustrate the results in the Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points.
5.3 FOKKER-PLANCK EQUATION
For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0, which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process:
dXt = µ(t,Xt) dt+ σ(t,Xt) dWt (15)
with known drift µ(Xt, t) and diffusion coefficient D(Xt, t) = σ²(Xt, t)/2. The FP equation for the probability density p(t, x) of the random variable Xt is then given by
∂p(t, x)/∂t = −∂/∂x [µ(t, x) p(t, x)] + ∂²/∂x² [D(t, x) p(t, x)]. (16)
We train a network to predict the (probability) density pΘ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p||pΘ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as “p(t, x) as sampler”), offering a “best case scenario” of pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set. Figure 5a shows the evolution of KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for different methods are given in the Appendix section A.5.
6 CONCLUSION
In this work, we introduced a general extension to PINNs applicable to a great variety of problem settings involving physics-based regularization of neural networks. In order to overcome the limitations of classical mesh-based Eulerian PINNs, we introduce a novel PDE loss that is defined with respect to the particle density in rather general types of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derive an efficient and easy-to-implement improvement for providing a more appropriate regularization objective in PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermo-dynamic problems, as well as more general applications of the Fokker-Planck equations.
A APPENDIX
A.1 BACKGROUND SAMPLING FOR PDPINNS
At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution:
x ∼ pbg(t,x) = p(t)pbg(x|t) (17)
with p(t) = U(0, T ). To avoid introducing a mesh, we could rely on the previously estimated Gaussian distribution introduced in Section 4, i.e. pbg(x|t) = pgauss(x). As a second alternative approach, we consider random linear combinations of points in the convex hull of {x(i)}Ni=1, spanned by c data points summarized as rows of a matrix Z ∈ Rc×d. This leads to x = mZ with weights m ∈ Rc, which can be drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1). Of course, a uniform sampling mechanism on a defined region is also suitable, and the definitive choice depends on the data and PDE at hand. However, we found that all of these methods work well in practice.
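A sketch of the convex-hull variant is given below; `Z` holds the c selected sensor locations as rows, and all names are illustrative.

```python
import torch

def convex_hull_background_samples(Z, n_samples):
    """Convex-hull background proposal: x = m Z with m ~ Dirichlet(alpha = 1),
    where Z is a (c, d) tensor of observed sensor locations."""
    c = Z.shape[0]
    m = torch.distributions.Dirichlet(torch.ones(n_samples, c)).sample()  # (n, c)
    return m @ Z                                                          # (n, d)
```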
We initially draw all samples from the background distribution, and then slowly increase the proportion of samples obtained from the particle density, as we found that leaving some background samples slightly helps in the training.
A.2 IMPLEMENTATION OF RAR AND OT-RAR
For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). Both methods rely on consecutive refinements of a fixed grid in the initial proposal. The number of collocation points is steadily increased and collocation points once added will not be removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points, and in addition we regularly resample them. This leads to a slightly modified version of the methods which is similar in spirit. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input fΘ refers to the PDE approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.
Algorithm 1 Adapted RAR
Input: fΘ, uniform distribution UB, number of col. points k, previous col. points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB    ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)    ▷ Concatenate old and new points
  Xnew ← topk(Xcomb, ||fΘ(Xcomb)||₂², k)    ▷ Keep top k proposed points based on fΘ
Output: Xnew
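A possible Python realization of Algorithm 1 is sketched below; `f_theta` and `sample_uniform` are hypothetical handles on the residual network and on the uniform proposal UB.

```python
import torch

def adapted_rar(f_theta, sample_uniform, x_prev, k):
    """Sketch of the adapted RAR baseline (Algorithm 1)."""
    x_prop = sample_uniform(k)                          # fresh uniform proposals
    x_comb = torch.cat([x_prev, x_prop], dim=0)         # old and new candidates
    scores = (f_theta(x_comb) ** 2).sum(dim=-1)         # squared residual norm
    top = torch.topk(scores, k).indices                 # keep the k worst-fitting points
    return x_comb[top]
```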
A.3 EXPERIMENTS: CONSERVATION OF MASS
In the supplementary material we provide code in Python for the data generation and for the pdPINN model. Below we provide the details for all the experiments we conducted. Furthermore, we provide short videos showing the predicted density movements for each different approach. More details on this can be found in the README.html provided in the supplementary files.
All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on a RTX8000 with 48GB VRAM. Up to 16 Titan X GPUs could be used in parallel, or 4 RTX8000. In most settings, training in each experiment took less than 10 minutes.
Algorithm 2 Adapted OT-RAR
Input: fΘ, uniform distribution UB, number of col. points k, number of points for empirical distribution j < 2k, previous col. points Xprev.
  Xprop ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB    ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)    ▷ Concatenate old and new points
  Xtarget ← topk(Xcomb, ||fΘ(Xcomb)||₂², j)    ▷ j samples for target empirical distribution
  Xsource ← [x1, x2, . . . , xj]ᵀ with xi ∼ UB    ▷ j samples for source empirical distribution
  MOT ← LinOT(Xsource, Xtarget)    ▷ Obtain linear operator that maps to target distribution
  Xnew ← [x1, x2, . . . , xk]ᵀ with xi ∼ UB    ▷ Sample uniformly
  Xmap ← MOT(Xnew)    ▷ Map samples to target distribution
Output: Xmap
A.3.1 ADDITIONAL EXPERIMENTAL RESULTS
3D Setting. Figure A.6 showcases the projection of the density onto the z axis for a random run of the OT-RAR method and the Metropolis-Hastings based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points between the sampling methods.
2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+Time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, which are listed in the following sections below.
A.3.2 DATA GENERATION
Here we provide a more detailed description for the generated data, namely the used velocity field, and the method for obtaining simulated “radar measurements”.
Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R2 → R and the z-component of a vector potential a : R2 → R. Through the Helmholtz decomposition1 we can construct the velocity field vxy : R2 → R2:
vxy(x, y) = −∇Φ + [∂a/∂y, −∂a/∂x]ᵀ. (18)
For both experiments the following fields were used:
Φ(x, y) = −(1/2)(x − 2)(y − 2), (19)
a(x, y) = −(1/5) exp(−(2x/3)² − (2y/3)²). (20)
The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a nonsteady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]:
vxy,t(t, x, y) = vxy(x, y) · ((3/2) |sin((2/3)πt)| + 0.05). (21)
The z (altitude) component of the velocity only depends on time and is given by:
vz(t) = 1.6 · sin((4/3)πt). (22)
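A sketch of how Eqs. 18–22 could be assembled with SymPy (the library used for the derivatives above) is given below; the variable names are illustrative and not taken from our code.

```python
import sympy as sp

x, y, t = sp.symbols("x y t")

phi = -sp.Rational(1, 2) * (x - 2) * (y - 2)                            # scalar potential, Eq. 19
a = -sp.Rational(1, 5) * sp.exp(-(2 * x / 3) ** 2 - (2 * y / 3) ** 2)   # vector potential (z), Eq. 20

# Helmholtz decomposition (Eq. 18) and time modulation (Eq. 21)
v_x = -sp.diff(phi, x) + sp.diff(a, y)
v_y = -sp.diff(phi, y) - sp.diff(a, x)
modulation = sp.Rational(3, 2) * sp.Abs(sp.sin(2 * sp.pi * t / 3)) + sp.Rational(1, 20)
v_x_t, v_y_t = v_x * modulation, v_y * modulation
v_z_t = 1.6 * sp.sin(4 * sp.pi * t / 3)                                 # Eq. 22

# turn the symbolic field into a fast numerical callable for the simulation step
v_fn = sp.lambdify((t, x, y), (v_x_t, v_y_t, v_z_t), "numpy")
```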
Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were simulated using the above constructed velocity field. Overall, the paths of the roughly 240000 parcels were simulated using a basic backward Euler scheme.
1This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis as in a3d = [0, 0, a]T . The full decomposition is commonly written as v3d = −∇Φ3d +∇× a3d.
Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities. For the training data additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, data measurements of density and velocity are obtained by 132 sensors on the xy-plane, within region [−3, 3]2 at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used.
A.3.3 ARCHITECTURE AND TRAINING
In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) are fully-connected layers with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine-frequency of 12 for the 2D setting, and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data and we thus recommend an adjustment of this hyperparameter for usage in PINNs.
For training the network, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8×10−4 (2D Setting) or 10−4 (3D Setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds.
A.4 EXPERIMENTS: HEAT EQUATION
The dataset for the heat equation experiment was generated by numerically solving the heat equation through the finite difference method, precisely the Forward Time, Centered Space (FTCS) approximation (Recktenwald, 2004). We used Dirichlet boundary conditions in form of zero temperature around a squared shape far away from the relevant domain. These boundary conditions are not provided to the PINNs for a slightly more difficult setting. Overall, the dataset is composed of 1000 training points, 1971120 test points and 492780 validation points. We made sure training points contained enough information about the initial condition, i.e. we selected a sufficient amount of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly, and afterwards 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. Differently from previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10−4 and decreasing it until reaching a minimum value of 10−5. Training was terminated through early-stopping, as soon as the validation R2 didn’t improve for more than 3000 epochs.
Additional results. Figure A.9 illustrates the test R2 of the predicted T averaged over 20 different seeds. Error bars correspond to 95% confidence interval for the mean estimation, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored. As in previous settings, we show that with few samples (16) the regularization enforced by the PDE loss is not strong
enough, leading to comparable results in both approaches (as expected). Hence PINNs and pdPINNs show similar results in this regime. However, as the number of samples increases (32-64-128-256), the PDE loss enforced by the proposed pdPINNs quickly and steadily outperforms uniform sampling. Lastly, we also verified that in the limit of high samples (512-1024) the two sampling strategies converge, as in such a low-dimensional domain the uniform samples fully and densely covers the considered area. This, again, is in line with the observed results of the other experiments.
A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW
Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical CDF estimated via uniform samples drawn over the whole domain. This method does not require building up a Markov chain, and is thus very fast, but only works well in low dimensions.
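One simple way to realize such a sampler is to evaluate ρΘ on uniform support points and resample them with probabilities proportional to their density values (equivalent to inverse-transform sampling on the resulting empirical CDF); the sketch below uses hypothetical handles `rho_theta` and `sample_uniform`.

```python
import torch

def inverse_transform_samples(rho_theta, sample_uniform, n_grid, n_samples):
    """Sketch: resample uniform support points proportionally to rho_Theta."""
    grid = sample_uniform(n_grid)                       # uniform support points
    weights = rho_theta(grid).squeeze(-1).clamp(min=0)  # unnormalized density values
    idx = torch.multinomial(weights, n_samples, replacement=True)
    return grid[idx]                                    # points distributed ~ rho_Theta
```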
More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow probability:
I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)
A.5.1 SETTING AND ANALYTICAL SOLUTION
We consider the following setting over the time interval [t0, tn] = [−1, 1] with drift function µ, noise σ and initial particle positions p(x|t = t0) given by
µ(Xt, t) = µ(t) = sin(10t), (23)
σ(Xt, t) = σ = 0.06, (24)
p(x|t = t0) = N(0, 0.02² · Id). (25)
The PDE has an analytical solution (cf. Särkkä & Solin (2019)) which is given by
p(x|t) = N(µs(t), σs²(t)), (26)
p(t) = U(t0, tn), (27)
µs(t) = −cos(10t)/10 + cos(10)/10, (28)
σs²(t) = 0.0036t + 0.004. (29)
For evaluating the deviation of our prediction to the solution, we evaluate the KL divergence between the analytical solution and the network approximation KL(p(x, t)|p̂Θ(x, t)) by sampling 10000 points from the true p(x, t).
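The corresponding Monte Carlo estimate of the KL divergence is straightforward; in the sketch below all callables are hypothetical handles on the analytical density and the network density.

```python
import torch

def kl_estimate(log_p_true, log_p_model, sample_true, n=10_000):
    """Monte Carlo estimate of KL(p || p_Theta) using samples from the analytical solution."""
    t, x = sample_true(n)                               # (t, x) ~ p(t, x)
    return (log_p_true(t, x) - log_p_model(t, x)).mean()
```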
A.5.2 SETUP
We use a SIREN network and additionally sample (5000) collocation points at the initial time-step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with a NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor.
A.5.3 WALL TIME
The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the used inverse transform sampling achieves a similar speed. | 1. What is the focus and contribution of the paper regarding PINN construction for partial differential equations?
2. What are the strengths of the proposed approach, particularly in its implementation and performance compared to other methods?
3. Do you have any concerns or questions regarding the method's limitation to micro to macro phenomena and the dependency on the initial choice of distribution?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors consider a modification of the PINN construction for solving certain classes of partial differential equations. Namely, they focus on PDEs formulated as a macroscopic description of an underlying microscopic physical process, such as diffusion and advection. They claim that the soft constraint on the PINN loss which informs the network of the original PDE is usually taken as uniformly discretized, which is a poor approximation when the true underlying solution distribution of collocation points is not uniformly spread. They suggest replacing the uniform sampling with a Monte Carlo sampling method, in which the distribution itself is updated during training. They claim that this method resolves the issues of unbalanced density solutions and the need to define a bounding region for the PDE loss. The authors show improved performance over several other collocation point sampling methods, such as uniform, Residual adaptive refinement and some variants, as well as importance sampling.
Strengths And Weaknesses
Strengths:
The paper is well written, clearly explains the objective and modifications made to existing methods.
The presented method is straightforward to implement, and seems to outperform other methods in similar setups.
Weaknesses/Questions:
The suggested method seems to apply only to micro to macro phenomena, however PDEs are ubiquitous in other scenarios, making this solution very restrictive (albeit interesting).
"We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network." This statement is made as motivation for the class of PDEs under study, but it is not proven.
How dependent is the adaptation process on the initial choice of distribution?
While the authors compare their results against other PINN based solvers, it is unclear how the performance stands against other non-PINN methods.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. It seems that some related work may be missing, for instance - https://www.researchgate.net/publication/359932814_Monte_Carlo_PINNs_deep_learning_approach_for_forward_and_inverse_problems_involving_high_dimensional_fractional_partial_differential_equations seems to have the same line of inquiry, while they focus on fractional PDEs. It is possible that other works exist employing MCMC in similar ways. |
ICLR | Title
Mesh-free Eulerian Physics-Informed Neural Networks
Abstract
Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, different from classical mesh-free method, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated on various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation.
1 INTRODUCTION
Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found for instance in fluid dynamics with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs.
The general idea of PINNs is to use the expressive power of modern neural architectures for solving partial differential equations (PDEs) in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form
f(t,x|λ) := ∂tu(t,x) + P(u|λ) = 0, (1)
where P is a non-linear operator parameterized by λ, and ∂t is the partial time derivative w.r.t. t ∈ [0, T ]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ Rd. The PDE is subject to the initial condition g0,
u(0,x) = g0(x) (2)
for x ∈ Ω, and boundary conditions g∂Ω,
u(t,x) = g∂Ω(x) (3)
for x ∈ ∂Ω and t ∈ [0, T ]. The main idea of PINNs consists in approximating u(t,x) (and hence f(t,x)) with a neural network given a small set of N noisy observations uobs,
u(t(i),x(i)) + ϵ(i) = u(i)obs (4)
with noise ϵ(i) ≪ u(i) ∀i ∈ {0, 1, . . . , N}. This allows us to consider the following two important problem settings: If λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network. The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them into the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings.
Loss functions. Typically, PINNs approximate f(t,x) by the network fΘ(t,x) in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (Lobs), (ii) softly enforcing the PDE constraints on the domain (Lf ), and (iii) fulfilling the boundary (Lb) and initial conditions (Linit), i.e.
Θ = argmin_Θ [w1 Lobs(X, t, uobs, Θ) + w2 Lf(Θ) + w3 Lb(Θ) + w4 Linit(Θ)], (5)
with loss weights wi ∈ R≥0. A common choice for Lobs, Lb, and Linit is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulation of the forward and inverse problem are identical in this setting, as observations and initial conditions are implemented in a similar manner.
Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss Lf in Eq. 5 used for the soft enforcement of Eq. 1 requires a similar discretization step for approximating an integral over the continuous signal domain,
Lf(Θ) = (1/|[0, T ]× Ω|) ∫₀ᵀ ∫_Ω ||fΘ(t,x)||₂² dx dt = E_{p(t,x)}[ ||fΘ(t,x)||₂² ] ≈ (1/n) Σ_{i=1}^{n} ||fΘ(ti,xi)||₂² (6)
with p(t,x) being supported on [0, T ]× Ω. The points {(t(j),x(j))}nj=1 ⊂ [0, T ]× Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function fΘ is updated by evaluating the PDE with respect to collocation points fixed in space. Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed up by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = Rd. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering for example the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density. We argue that due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network.
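For concreteness, a minimal sketch of this mesh-based estimate with uniformly drawn collocation points on a bounded box ΩB is given below; `f_theta` and the bound tensors are illustrative assumptions.

```python
import torch

def pde_loss_uniform(f_theta, t_max, x_bounds, n):
    """Eq. 6 estimated on uniform collocation points over [0, T] x Omega_B.
    x_bounds: tensor of shape (2, d) with the lower/upper corners of Omega_B."""
    t = torch.rand(n, 1) * t_max
    x = x_bounds[0] + torch.rand(n, x_bounds.shape[1]) * (x_bounds[1] - x_bounds[0])
    residual = f_theta(t, x)
    return (residual ** 2).sum(dim=-1).mean()
```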
To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated on the example of compressible fluids. By changing p(t,x) to the distribution over the particle positions in the fluid we effectively change the loss functional in Eq. 6. We then generalize to other settings, such as thermodynamics, by interpreting a positive, scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions or can be shown to be equivalent to such a view, as for example is the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also section 2).
Main contributions. The main contributions of this paper are as follows:
• We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network’s predictive performance.
• In order to overcome these limitations of existing approaches, we propose a truly mesh-free version of Eulerian PINNs, in which the collocation points are sampled using physicsmotivated MCMC methods. By staying within the Eulerian framework, we avoid conceptual challenges of classical mesh-free methods based on Lagrangian updates such as the enforcement of boundary conditions.
• The proposed model is applicable to a huge range of dynamical systems governed by PDEs that share an underlying microscopic particle description, such as several hydrodynamic, electro- and thermo-dynamic problems.
• We rigorously evaluate and compare our proposed method with existing approaches in high-dimensional settings. Compared to existing mesh refinement methods, significantly fewer collocation points are required to achieve similar or better predictive performances, while still being more flexible.
2 RELATED WORK
Mesh-Free Fluid Dynamics. Classical mesh-free approaches in computational fluid dynamics are based on non-parametric function representations, with Smoothed Particle Hydrodynamics (SPH) (Lind et al., 2020; Gingold & Monaghan, 1977) being the most prominent example. In SPH, fluid properties such as the density and pressure are represented by a discrete set of particles and interpolated using a smoothing kernel function. For updating the function forward in time, the particles have to be propagated according to the Lagrangian formulation of the PDE, relying on the kernel for computing spatial derivatives. One of the benefits of such a representation is that mass is conserved by construction. However, Lagrangian updates become challenging when enforcing boundary conditions, requiring the introduction of ad-hoc "dummy" or "mirror" particles (Lind et al., 2020). Instead, we present a mesh-free, particle-based, PINN that does not require Lagrangian updates, and is already applicable in the Eulerian formulation. It should be noted that the proposed pdPINNs can in principle be combined with Lagrangian updates such as proposed by Raissi et al. (2019) and later by Wessels et al. (2020). But as the intention of this work is to improve upon current Eulerian PINNs, we refer to future work for the comparison and extension to the Lagrangian formalism.
Alternative Meshes and Losses for PINNs. Recent work proposes local refinement methods for PINNs by adding more samples within regions of high error (Lu et al., 2021; Tadiparthi & Bhattacharya, 2021). Residual adaptive refinement (RAR) is suggested by Lu et al. (2021), which is based on regularly evaluating the PDE loss on a set of uniformly drawn samples. The locations corresponding to the highest PDE loss are then added to the set of collocation points used in training. Tadiparthi & Bhattacharya (2021, preprint) further enhance RAR by learning a linear map between the uniform distribution and the distribution over the PDE loss by optimizing an optimal transport objective. By sampling uniformly and subsequently transforming these samples, it is attempted to focus on regions of higher error. Due to the conceptual similarity to RAR, we will denote this method as "OT-RAR". The work of Nabian et al. (2021) explores Importance Sampling based on the (unnormalized) proposal distribution ||fΘ(t,x)||22 for a more sample efficient evaluation of Eq. 6. Samples are drawn using a variation of Inverse Transform sampling (Steele, 1987).
However, in all these cases the underlying mechanism for exploring regions of high error is based on (quasi-) uniform sampling within the boundaries. As such, they do not resolve the issues of unknown boundaries and will furthermore be infeasible in higher dimensions.
Kinetic Theory: From particles to PDEs. Kinetic theory shows that essential conservation laws of fluids can be derived from a microscopic (or molecular) viewpoint (Born & Green, 1946). The interactions governing the dynamics of a fluid are described starting from a set of individual particles. The basis of this approach is the so-called molecular distribution function Ψ over phase space, i.e. Ψ(t,x,v) such that
∫_{∆x} ∫_{∆v} Ψ(t,x,v) dv dx (7)
is the probability that a molecule with a velocity within ∆v = ∆v1∆v2∆v3 occupies the volume ∆x = ∆x1∆x2∆x3. Based on this distribution function, it is possible to define common quantities
as the (mass or particle) density, (local mean) velocity, and macroscopic PDEs by considering the local interactions of individual particles. The one-particle phase space is commonly known from its application in the Boltzmann equation for modelling two-body interactions describing gases (Green, 1956) and active matter (e.g. flocks of birds) (Bertin et al., 2006). The more general form including higher interaction terms is necessary for deriving conservation laws of liquids (Born & Green, 1946).
3 PARTICLE-DENSITY PINNS
In this section we introduce the concept of mesh-free particle-density PINNs (pdPINNs). Firstly, we examine limitations of the common PDE loss in Eq. 6 and, secondly, we present a solution by integrating over the position of particles instead of the full support of the signal domain.
The underlying assumption of our approach is that the dynamics described by the PDE can be explained in terms of local interactions of particles. This is the case, for instance, for commonly considered dynamics of gases, liquids or active particles (Hoover & Hoover, 2003; Toner & Tu, 1995).
Existing limitations of Eulerian PINNs. Consider the problem of modeling a (possibly non-steady) compressible fluid, i.e. a fluid with a spatially and temporally evolving density ρ(t,x) and velocity v(t,x). For the sake of notational brevity, we will denote these by ρ and v. Given noisy observations, our particular interest lies in the prediction of particle movements, hence in the approximation of the density (and potentially other physical quantities) with a neural network ρΘ. Additional quantities such as the velocity or pressure might also be observed and modeled.
Commonly, the PDE then serves as a physics-based regularizer of the network by enforcing the PDE loss Lf in Eq. 6 during standard PINN training. For this, Lf is evaluated on a set of collocation points that are, for example, uniformly distributed on a bounded region. However, the limitations of this approach already become apparent when considering a simple advection problem defined by the following PDE: ∂tρ+ v · (∇ρ) = 0. (8) Figure 1 illustrates a one-dimensional case on the domain [0, T ] × Ω, with Ω = R, and a known constant velocity v ∝ 1. We measure the density ρ(i) at different (spatially fixed) points in time and space {(t(i),x(i))}, on which a neural network ρΘ(t,x) is trained. For optimizing the standard PDE loss Lf as given in Eq. 6, we would require a bounded region ΩB := [a, b] ⊂ Ω with a < b and a, b ∈ R. This, in turn, leads to two issues:
1. Since the moving density occupies a small subset of Ω, uniformly distributed collocation points within ΩB will enforce Eq. 8 in areas with low-density. This results in insufficient regularization of ρΘ.
2. Defining a suitable bounded region ΩB requires a priori knowledge about the solution of the PDE, which is generally not available. Choosing too tight boundaries would lead to large parts of the density moving out of the considered area ΩB. Too large boundaries would instead lead to poor regularization as this would worsen the sparsity problem in issue (1.).
In practice, most Eulerian PINNs approaches opt for naively defining a sufficiently wide region ΩB, resulting in a poor reconstruction. In the context of our advection problem, this is showcased in Figure 1b. To properly resolve the aforementioned issues, one should (i) focus on areas that have a relevant regularizing effect on the prediction of ρΘ and (ii) adapt to the fluid movements without being restricted to a predefined mesh.
Mesh-Free Eulerian PINNs. We thus propose to reformulate the PDE loss in Eq. 6 as the expectation of ||fΘ(t,x)||22 with respect to the molecular distribution Ψ(t,x) introduced in the related work section 2:
Lpd(Θ) ≈ ∫₀ᵀ ∫_Ω Ψ(t,x) ||fΘ(t,x)||₂² dx dt. (9)
This completely removes the need of defining ad-hoc boundaries while providing the ability to flexibly focus on highly relevant regions, i.e. those that are more densely populated. As the particle density corresponds directly to the occupation probability of a molecule Ψ(t,x) with a changed normalization constant, we can estimate Lpd via samples drawn from the normalized particle density, which is denoted as ρN . For homogeneous fluids, this coincides with the normalized mass density.
In summary, we propose to draw collocation points from the normalized density:
(ti, xi) ∼ ρN(t,x) = (1/Z) ρ(t,x). (10)
The true particle positions and the density ρN are however unknown in practice. Instead, we have to rely on the learned density ρΘ(t,x) as a proxy provided by the neural network. We denote the associated normalized PDF by qΘ(t,x) = (1/Z′) ρΘ(t,x) with support on [0, T ]× Ω. The PDE loss is then defined as the expectation w.r.t. qΘ(t,x):
Lpd(Θ) = E_{qΘ(t,x)}[ ||fΘ(t,x)||₂² ] = ∫₀ᵀ ∫_Ω qΘ(t,x) ||fΘ(t,x)||₂² dx dt. (11)
In order to approximate this integral, samples need to be drawn from qΘ(t,x). This can be done in a principled way by using dynamic Monte Carlo methods, despite the fact that the normalization constant Z is unknown. We highlight that, in contrast to the mesh-based loss in Eq. 6, the loss in Eq. 11 is also suitable for problems on unbounded domains such as Ω = Rd.
Applicability of pdPINNs. Although motivated in the context of an advection problem, the proposed approach is generally applicable to a wide range of PDEs. The advection equation 8 can be seen as a special case of mass conservation (assuming ∇ · v = 0), which is one of the fundamental physical principles expressed as a continuity equation. This continuity equation relates temporal changes of the fluid density ρ to spatial changes of the flux density ρv through
∂tρ+∇ · (ρv) = 0. (12)
Another common physical process that is suited for our approach is diffusion, such as in the Heat Equation, where local interactions of particles give rise to the following PDE (as established by Fick’s second law): ∂tT − α∇2T = 0, (13) where T denotes the temperature interpreted as density, α the thermal (or mass) diffusivity, and ∇2 the Laplacian operator. By introducing additional constraints to the diffusion and mass-conservation, one can describe viscous fluids with the Navier-Stokes equations or even self-propelled, active particles, for which Toner and Tu (Toner & Tu, 1995; Tu et al., 1998; Toner & Tu, 1998) introduced
hydrodynamic equations. Other possible applications involve Maxwell’s equations for conservation of charge in electrodynamics, as well as the distribution of Brownian particles with drift described by the Fokker-Planck equations. In general, our method is applicable in settings where (i) a non-negative scalar field (with a finite integral) of interest can be interpreted as a particle density, and (ii) the local interactions of these particles give rise to the considered PDEs.
4 MODEL AND IMPLEMENTATION
A wide range of different network architectures and optimization strategies for PINNs have emerged. They emphasize well-behaved derivatives with respect to the input domain (Sitzmann et al., 2020), allow higher expressivity for modelling high frequency data (Tancik et al., 2020; Wang et al., 2021b), or resolve gradient pathologies within PINNs (Wang et al., 2021a). As our method does not rely on a specific architecture, any such improvement can be easily combined with the proposed pdPINNs. For the experiments in this submission we will use simple fully-connected networks with sinusoidal (Sitzmann et al., 2020) or tanh activations (see section 5).
Finite total density. For reformulating the predicted density ρΘ as a probability, we have to ensure non-negativity as well as a finite integral over the input domain Ω. Non-negativity can for example be achieved via a squared activation function after the last layer. An additional bounded activation function g is then added, which guarantees the output to be within a pre-specified range [0, cmax]. The integral over Rd can then be enforced to be finite by multiplying the bounded output with a Gaussian kernel. Summarizing these three steps, let ρ̃Θ denote the output of the last layer of our fully connected neural network and pgauss(x) = N (x;µ,Σ), then we predict the density ρΘ as
ρΘ(t,x) = pgauss(x) · g(ρ̃Θ(t,x)²) ≤ cmax · pgauss(x). (14)
In practice, the choice of cmax does not affect the model as long as it is sufficiently large. The used mean µ and covariance Σ are maximum likelihood estimates based on the observations x, i.e. the sample mean x̄ and covariance Σ̄ of the sensor locations. To allow more flexibility in the network, we add a scaled identity matrix to the covariance Σ = Σ̄ + c · I , which can be set to a large value for solving PDEs when only initial conditions, but no observations, are available.
Markov chain Monte Carlo (MCMC) sampling. Finally, MCMC methods allow us to draw samples from the unnormalized density ρΘ(t,x). We consider several MCMC samplers and emphasize that the wide range of well-established methods offer the ability to use a specialized sampler for the considered problem, if the need may arise. Gradient-based samplers such as Hamiltonian Monte Carlo (Duane et al., 1987; Betancourt, 2017) are particularly suited for our setting, as the gradients of ρΘ with respect to the input space are readily available. For problems where boundaries are known and we have to sample from a constrained region, a bijective transformation is used so that the Markov chain may operate in an unconstrained space (Parno & Marzouk, 2018). In our experience, both Metropolis Hastings and Hamiltonian Monte Carlo already worked sufficiently well for a wide range of PDEs without requiring much fine-tuning. We highlight that pdPINNs do not directly depend on MCMC as a sampler, and alternative sampling methods such as modern variational inference schemes (Rezende & Mohamed, 2015) can also be directly used as a substitute.
For details regarding the samplers used and implementation we refer to the Experiments section 5 and Appendix section A.1.
5 EXPERIMENTS
In this section we demonstrate the advantages of pdPINNs compared to uniform sampling, importance sampling (Nabian et al., 2021), and the adaptive refinement methods RAR (Lu et al., 2021) and OT-RAR (Tadiparthi & Bhattacharya, 2021). Despite the term uniform sampling, we rely in all our experiments on quasi-random Sobol sequences for more stable behavior in the low-sample regime. To guarantee a fair comparison, we considered slight variations of the proposed implementations of RAR and OT-RAR, so that only a limited number of collocation points are used. For the pdPINNs we consider multiple MCMC schemes, including inverse transform sampling (IT-pdPINN), Metropolis-Hastings (MH-pdPINN), and Hamiltonian Monte Carlo (HMC-pdPINN) methods.
The models in sections 5.1 and 5.2 are implemented in PyTorch (Paszke et al., 2019), with a custom Python implementation of the MH and Inverse Transform samplers. For the Fokker-Planck experiment in section 5.3, we make use of the efficient MCMC implementations provided by TensorFlow probability (Abadi et al., 2016; Lao et al., 2020) and the utilities of the DeepXDE library (Lu et al., 2021). More details, as well as further experiments comparing the wall-time of the various samplers, are provided in the Appendix with the code being provided in the supplementary material.
5.1 MASS CONSERVATION FOR SIMULATED PARTICLES
As a challenging prediction task we consider a setting motivated by the real world problem of modelling bird densities and velocities measured from a set of weather radars (Dokter et al., 2011; Nussbaumer et al., 2019; 2021) – or more generally the area of radar aeroecology. A non-steady compressible fluid in three dimensions is simulated by propagating fluid parcels through a pre-defined velocity field, i.e. the fluid is simulated using the conservation of mass as the underlying PDE (see Eq. 12). To provide the network with training observations, we introduce a set of spatially fixed sensors (comparable to radars) which count over time the number of fluid parcels within a radius r and over 21 contiguous altitude layers. Another disjoint set of sensors is provided for the validation set while the test performance is evaluated on a grid. The birds-eye view of the setting is shown in Figure 2a, where circles indicate the area covered by the radars. Figure 2b additionally shows the 3D simulated data projected along the z-axis and over time. In the Appendix section A.3 we describe the data generation and training setting in detail and provide the corresponding code in the supplementary.
For modeling the density and velocity, two sinusoidal representation networks (SIREN) (Sitzmann et al., 2020) ρΘ1(t,x) and vΘ2(t,x) are used, which are then regularized by enforcing the continuity equation for the conservation of mass (see Eq. 12). To showcase the sample efficiency of pdPINNs, experiments are performed over a wide range of collocation points (256 to 65536). In each setting the PDE-weights w2 (see Eq. 5) were selected with a grid search based on the highest 1st quartile R2 in a validation set. The resulting box-plots of the test R2 are provided in Figure 3, where the “Baseline” corresponds to training without any PDE loss. The proposed pdPINN approach clearly outperforms alternative (re-)sampling methods across all numbers of collocation points. Already with very few collocation points (512) pdPINNs achieve results that require orders of magnitude more points (32768) for uniform sampling. Finally, we observe that the performance gap shrinks as the number of collocation points increases, eventually converging to the same limiting value. Even when getting close to the memory limit of a NVIDIA Titan X GPU, other sampling strategies at best achieve comparable results with pdPINNs. In the Appendix (Figure A.6) we provide an additional qualitative comparison of the mass conservation between OT-RAR and MH-pdPINN 2048 samples.
As an additional experiment we simplified the setting by projecting the data onto the xy-plane, i.e. the birds-eye view, which is a common setting for geostatistical data (e.g. in Nussbaumer et al. (2019)). The results in this 2D setting, which are provided in the Appendix (Figure A.8) and described in detail in section A.3, are very similar in nature to the 3D setting, although with a smaller performance gap with respect to alternative sampling methods. This decrease of the gap is to be expected, as the lower dimensional space is much easier to explore with uniform proposals.
5.2 HEAT EQUATION
We further consider a 2D diffusion problem, namely the heat equation introduced in section 3, where randomly distributed sensors provide measurements of the temperature. We focus on a general setting with the initial conditions being zero temperature everywhere except for a specified region, as shown in Figure 4a, and we let the system evolve for t ∈ [0, 0.2]. The networks are only provided sensor measurements of the temperature; for further details see the Appendix section A.4.
Temperature predictions for PINNs with uniform sampling and pdPINNs are illustrated in Figure 4b and 4c, respectively, with the ground truth in Figure 4a. We can observe that the uniform sampling strategy does not allow to focus on the relevant parts of the domain, i.e. regions with high temperature, and that it visibly fails to reconstruct the temperature profile. In contrast, the pdPINN promotes sampling in regions of higher density and predicts the true temperature more reliably. We also evaluate quantitatively the performance of the two approaches in terms of the R2 test error over the predicted temperature and illustrate the results in the Appendix section A.4, where we again observe the same convergence between uniform sampling and pdPINNs for high numbers of collocation points.
5.3 FOKKER-PLANCK EQUATION
For a demonstration of a forward problem, i.e. a setting without any observed data but only initial conditions, we solve the Fokker-Planck (FP) equations in a setting where an analytical solution is available (cf. Särkkä & Solin (2019)). The FP equations describe the evolution of the probability density of the movement of Brownian particles under a drift. More specifically, assume we are given particles at time t0, which are distributed according to p(t0, x). Let the movements of these particles be described by the following stochastic differential equation, where Wt denotes the standard Wiener process:
dXt = µ(t,Xt) dt+ σ(t,Xt) dWt (15)
with known drift µ(Xt, t) and diffusion coefficient D(Xt, t) = σ²(Xt, t)/2. The FP equation for the probability density p(t, x) of the random variable Xt is then given by
∂p(t, x)/∂t = −∂/∂x [µ(t, x) p(t, x)] + ∂²/∂x² [D(t, x) p(t, x)]. (16)
We train a network to predict the (probability) density pΘ(t, x) given a known sinusoidal drift and constant diffusion, which are discussed in detail in the Appendix. Data is only provided for the initial condition, and the PDE loss is based on Eq. 16 within the space Ω = [−1.5, 1.5] and time t ∈ [−1, 1]. As the analytical solution is available in the form of a probability density, we can estimate the KL divergence KL(p||pΘ) to evaluate the performance. Furthermore, we can sample collocation points from the true particle distribution p(t, x) (referred to as “p(t, x) as sampler”), offering a “best case scenario” of pdPINNs. A total of 5000 collocation points were used, and weights were manually tuned based on the error on a validation set. Figure 5a shows the evolution of KL divergence during training, highlighting that pdPINN-based methods require fewer steps to achieve a low divergence. In addition, sampling from the true particle distribution leads to the fastest improvement and the lowest divergence after 30000 training steps. A qualitative comparison of the results is given in Figure 5b, showing that RAR and uniform sampling fail to propagate the sine wave forward. The ground truth of the problem and wall-times for different methods are given in the Appendix section A.5.
6 CONCLUSION
In this work, we introduced a general extension to PINNs applicable to a great variety of problem settings involving physics-based regularization of neural networks. To overcome the limitations of classical mesh-based Eulerian PINNs, we introduced a novel PDE loss that is defined with respect to the particle density for rather general types of PDEs. By employing MCMC methods to sample collocation points from the density approximated by the network, we derive an efficient and easy-to-implement improvement that provides a more appropriate regularization objective for PINNs. In particular, our new pdPINNs are completely mesh-free, thereby overcoming severe efficiency problems of classical PINNs in high-dimensional and sparse settings. Further, the absence of a mesh allows us to elegantly handle settings with uncertain or unknown domain boundaries.
As we have demonstrated, our method is applicable to a wide spectrum of PDEs, ranging from hydrodynamic flow problems to electro- and thermo-dynamic problems, as well as more general applications of the Fokker-Planck equations.
A APPENDIX
A.1 BACKGROUND SAMPLING FOR PDPINNS
At initialization, the network prediction ρΘ is random and thus does not carry any useful information, i.e. sampling from this density would be meaningless. Therefore, we start training the pdPINNs with a warm-up phase in which samples are obtained from a pre-specified background distribution:
$$\mathbf{x} \sim p_{\mathrm{bg}}(t, \mathbf{x}) = p(t)\, p_{\mathrm{bg}}(\mathbf{x} \mid t) \qquad (17)$$
with p(t) = U(0, T). To avoid introducing a mesh, we could rely on the previously estimated Gaussian distribution introduced in Section 4, i.e. pbg(x|t) = pgauss(x). As a second alternative approach, we consider random convex combinations of c data points, summarized as the rows of a matrix Z ∈ Rc×d, which lie within the convex hull of {x(i)}Ni=1. This leads to x = mZ with weights m ∈ Rc drawn from a Dirichlet distribution, i.e. m ∼ Dir(α = 1). Of course, a uniform sampling mechanism on a defined region is also suitable, and the definitive choice depends on the data and PDE at hand. However, we found that all of these methods work well in practice.
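As an illustration, the convex-combination background sampler described above can be written in a few lines; this is a hypothetical sketch, and the function name, the fixed time range, and the random generator are our own choices.

```python
# Sketch of the Dirichlet-based background sampler: convex combinations of
# c observed points Z, paired with uniformly sampled times.
import numpy as np

def sample_background(Z: np.ndarray, n_samples: int, t_max: float, rng=None):
    """Z: (c, d) matrix of data points spanning the region of interest."""
    rng = np.random.default_rng() if rng is None else rng
    c = Z.shape[0]
    m = rng.dirichlet(alpha=np.ones(c), size=n_samples)   # weights m ~ Dir(1)
    x = m @ Z                                              # convex combinations, (n_samples, d)
    t = rng.uniform(0.0, t_max, size=(n_samples, 1))       # p(t) = U(0, T)
    return t, x
```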
We initially draw all samples from the background distribution and then slowly increase the proportion of samples obtained from the particle density, as we found that retaining some background samples slightly helps training.
A.2 IMPLEMENTATION OF RAR AND OT-RAR
For our comparison, we considered the adaptive refinement methods RAR and OT-RAR, proposed by Lu et al. (2021) and Tadiparthi & Bhattacharya (2021, preprint). Both methods rely on consecutive refinements of an initial fixed grid of proposal points: the number of collocation points is steadily increased, and collocation points once added are never removed. To allow for a fairer comparison, we adapt both methods to use a limited budget of points and, in addition, we regularly resample them. This leads to a slightly modified version of the methods that is similar in spirit. For learning the linear mapping proposed by Tadiparthi & Bhattacharya (2021), we rely on the PyOT (Flamary et al., 2021) implementation of Knott & Smith (1984). The pseudo-code for sampling a set of collocation points is given in Algorithm 1 and Algorithm 2. The required input fΘ refers to the PDE residual approximated by the network, as discussed in Section 1. For more specific details on the methods we refer to the original papers.
Algorithm 1 Adapted RAR
Input: fΘ, uniform distribution UB, number of col. points k, previous col. points Xprev.
  Xprop ← [x1, x2, . . . , xk]^T with xi ∼ UB       ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)                       ▷ Concatenate old and new points
  Xnew ← topk(Xcomb, ||fΘ(Xcomb)||_2^2, k)           ▷ Keep top k proposed points based on fΘ
Output: Xnew
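For concreteness, a possible PyTorch realization of Algorithm 1 is sketched below; `f_theta` is assumed to return the PDE residual for a batch of points and `sample_uniform` to draw k proposals from UB — both are placeholders, not the authors' code.

```python
# Adapted RAR: keep the k points with the largest squared PDE residual
# among the previous collocation points and fresh uniform proposals.
import torch

def adapted_rar(f_theta, sample_uniform, k, x_prev):
    x_prop = sample_uniform(k)                       # proposals from U_B
    x_comb = torch.cat([x_prev, x_prop], dim=0)      # old and new points
    scores = (f_theta(x_comb) ** 2).sum(dim=-1)      # ||f_theta(x)||_2^2 per point
    top = torch.topk(scores, k=k).indices            # indices of the top-k residuals
    return x_comb[top]
```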
A.3 EXPERIMENTS: CONSERVATION OF MASS
In the supplementary material we provide code in Python for the data generation and for the pdPINN model. Below we provide the details for all the experiments we conducted. Furthermore, we provide short videos showing the predicted density movements for each different approach. More details on this can be found in the README.html provided in the supplementary files.
All experiments were run on a computing cluster using Nvidia GeForce GTX Titan X GPUs with 12 GB VRAM. Settings that required more memory were run on an RTX8000 with 48 GB VRAM. Up to 16 Titan X GPUs could be used in parallel, or 4 RTX8000s. In most settings, training in each experiment took less than 10 minutes.
Algorithm 2 Adapted OT-RAR
Input: fΘ, uniform distribution UB, number of col. points k, number of points for the empirical distribution j < 2k, previous col. points Xprev.
  Xprop ← [x1, x2, . . . , xk]^T with xi ∼ UB        ▷ Sample proposals
  Xcomb ← concat(Xprev, Xprop)                        ▷ Concatenate old and new points
  Xtarget ← topk(Xcomb, ||fΘ(Xcomb)||_2^2, j)         ▷ j samples for target empirical distribution
  Xsource ← [x1, x2, . . . , xj]^T with xi ∼ UB       ▷ j samples for source empirical distribution
  M_OT ← LinOT(Xsource, Xtarget)                      ▷ Obtain linear operator mapping to the target distribution
  Xnew ← [x1, x2, . . . , xk]^T with xi ∼ UB          ▷ Sample uniformly
  Xmap ← M_OT(Xnew)                                   ▷ Map samples to the target distribution
Output: Xmap
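The LinOT step of Algorithm 2 can also be written out directly via the closed-form linear Monge map between Gaussian approximations of the two empirical distributions (Knott & Smith, 1984); the sketch below is an alternative to the PyOT call used in our experiments, with a small ridge term added for numerical stability.

```python
# Linear OT mapping between Gaussian approximations of source and target
# point clouds: T(x) = m_t + A (x - m_s).
import numpy as np
from scipy.linalg import sqrtm, inv

def linear_ot_map(X_source, X_target):
    m_s, m_t = X_source.mean(axis=0), X_target.mean(axis=0)
    d = X_source.shape[1]
    C_s = np.atleast_2d(np.cov(X_source, rowvar=False)) + 1e-8 * np.eye(d)
    C_t = np.atleast_2d(np.cov(X_target, rowvar=False)) + 1e-8 * np.eye(d)
    C_s_half = sqrtm(C_s).real
    A = inv(C_s_half) @ sqrtm(C_s_half @ C_t @ C_s_half).real @ inv(C_s_half)
    return lambda X: m_t + (X - m_s) @ A.T            # maps uniform samples to the target
```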
A.3.1 ADDITIONAL EXPERIMENTAL RESULTS
3D Setting. Figure A.6 showcases the projection of the density onto the z-axis for a random run of the OT-RAR method and the Metropolis-Hastings based pdPINN when using 2048 collocation points. The OT-RAR PINN shows disconnected density predictions that clearly violate mass conservation, whereas the Metropolis-Hastings based pdPINN is capable of mostly preserving it. The boxplot in Figure A.8 highlights the difference in the required number of collocation points between the different sampling approaches.
2D Setting. As mentioned in Section 5, we repeated the Conservation of Mass experiment in a slightly altered setting, where the data is projected onto the xy-plane, reducing it to a 2D+Time problem. The general setup is similar to the 3D setting, although a smaller network and different training parameters are used, as listed in the sections below.
A.3.2 DATA GENERATION
Here we provide a more detailed description of the generated data, namely the velocity field used and the method for obtaining the simulated “radar measurements”.
Velocity field. The velocity field in the xy-plane was generated from a scalar potential field Φ : R2 → R and the z-component of a vector potential a : R2 → R. Through the Helmholtz decomposition1 we can construct the velocity field vxy : R2 → R2:
$$\mathbf{v}_{xy}\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = -\nabla\Phi + \begin{bmatrix} \partial a/\partial y \\ -\partial a/\partial x \end{bmatrix}. \qquad (18)$$
For both experiments the following fields were used:
$$\Phi\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = -\frac{1}{2}(x-2)\cdot(y-2), \qquad (19)$$
$$a\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = -\frac{1}{5}\exp\!\left(-\left(\frac{2}{3}x\right)^2 - \left(\frac{2}{3}y\right)^2\right). \qquad (20)$$
The derivatives were obtained using the symbolic differentiation library SymPy (Meurer et al., 2017). To add a nonsteady component, the resulting velocity field is modulated in amplitude as a function of time t ∈ [0, 3]:
$$\mathbf{v}_{xyt}\!\left(t, \begin{bmatrix} x \\ y \end{bmatrix}\right) = \mathbf{v}_{xy}\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\right)\left(\frac{3}{2}\left|\sin\!\left(\frac{2}{3}\pi t\right)\right| + 0.05\right). \qquad (21)$$
The z (altitude) component of the velocity depends only on time and is given by:
$$v_z(t) = 1.6 \cdot \sin\!\left(\frac{4}{3}\pi t\right). \qquad (22)$$
Simulation. For the initial distribution of the fluid, the particle positions were drawn from Gaussian mixtures. For t ∈ [0, 3], these particles were simulated using the velocity field constructed above. Overall, the paths of roughly 240000 parcels were simulated using a basic backward Euler scheme.
1 This is the 2D formulation of the Helmholtz decomposition, where the vector potential has non-zero components only along the z-axis, i.e. $\mathbf{a}_{3d} = [0, 0, a]^T$. The full decomposition is commonly written as $\mathbf{v}_{3d} = -\nabla\Phi_{3d} + \nabla\times\mathbf{a}_{3d}$.
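The sketch below illustrates how parcels can be advected through the velocity field of Eqs. 18–22; for simplicity it uses an explicit (forward) Euler update rather than the backward Euler scheme used for the actual data, and the step size is an arbitrary choice.

```python
# Velocity field of Eqs. 18-22 and a simple explicit Euler advection of parcels.
import numpy as np

def velocity(t, pos):
    x, y = pos[:, 0], pos[:, 1]
    # -grad(Phi) with Phi = -1/2 (x-2)(y-2)
    vx = 0.5 * (y - 2.0)
    vy = 0.5 * (x - 2.0)
    # rotational part from a = -1/5 exp(-(2x/3)^2 - (2y/3)^2): (da/dy, -da/dx)
    a = -0.2 * np.exp(-(2.0 * x / 3.0) ** 2 - (2.0 * y / 3.0) ** 2)
    vx += a * (-8.0 * y / 9.0)
    vy -= a * (-8.0 * x / 9.0)
    amp = 1.5 * np.abs(np.sin(2.0 / 3.0 * np.pi * t)) + 0.05   # Eq. 21
    vz = 1.6 * np.sin(4.0 / 3.0 * np.pi * t)                   # Eq. 22
    return np.stack([amp * vx, amp * vy, np.full_like(x, vz)], axis=1)

def simulate(pos0, t_end=3.0, dt=0.01):
    pos, t = pos0.copy(), 0.0
    while t < t_end:
        pos = pos + dt * velocity(t, pos)   # explicit Euler step
        t += dt
    return pos
```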
Measurements. The measurements at the sensors were obtained by counting the number of particles within a given radius over multiple timesteps. The density corresponds to the mass divided by the sensor area, and the velocity is an average over all the particle velocities. For the training data, additional zero-mean isotropic Gaussian noise is added to all measurements. In the 3D setting, measurements of density and velocity are obtained by 132 sensors on the xy-plane within the region [−3, 3]² at 11 equidistant timesteps. In the 2D setting, the same set of sensors is used.
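A single sensor reading as described above could, for instance, be computed as follows; the sensor radius, parcel mass, and noise level are illustrative placeholders rather than the values used for the dataset.

```python
# Density from the particle count inside the sensor disk, velocity as the
# average parcel velocity, plus optional zero-mean Gaussian noise.
import numpy as np

def sensor_measurement(sensor_xy, particles_xy, particle_vel,
                       radius=0.2, parcel_mass=1.0, noise_std=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    inside = np.linalg.norm(particles_xy - sensor_xy, axis=1) < radius
    area = np.pi * radius ** 2
    density = inside.sum() * parcel_mass / area
    velocity = particle_vel[inside].mean(axis=0) if inside.any() else np.zeros(2)
    density += rng.normal(0.0, noise_std)
    velocity += rng.normal(0.0, noise_std, size=velocity.shape)
    return density, velocity
```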
A.3.3 ARCHITECTURE AND TRAINING
In both experiments, the networks for density ρΘ1 and velocity vΘ2 prediction (parameterized by Θ1 and Θ2, respectively) consist of fully-connected layers with sinusoidal activation functions, as proposed by Sitzmann et al. (2020). The number of layers and units for each setting is shown in Table A.1. The sine-frequency hyperparameter required in the SIREN architecture was tuned by hand according to the validation loss of the baseline model (i.e. without a PDE loss), leading to a sine frequency of 12 for the 2D setting and 5 for the 3D setting. We note that the proposed default value of 30 in Sitzmann et al. (2020) heavily overfits our relatively low-frequency data, and we thus recommend adjusting this hyperparameter for usage in PINNs.
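To illustrate where this hyperparameter enters, a generic SIREN-style layer (following Sitzmann et al., 2020) is shown below; this is not our exact network code, and ω0 = 5 corresponds to the 3D setting described above.

```python
# Minimal SIREN layer: sine activation with frequency omega_0 and the
# initialization scheme of Sitzmann et al. (2020).
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=5.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            bound = (1.0 / in_features) if is_first else \
                (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))
```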
For training the network, the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 8 × 10⁻⁴ (2D setting) or 10⁻⁴ (3D setting) was used. The learning rate was multiplied by a factor of 0.99 each epoch. All models were trained for 300 (3D setting) or 500 (2D setting) epochs. The 2D setting was trained using full-batch gradient descent, whereas for the 3D setting we used a mini-batch size of 6931. In all experiments we trained and evaluated on 10 different random seeds.
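The optimizer and learning-rate schedule just described amount to the following pattern, shown here with a placeholder network and dummy data standing in for the actual data and PDE losses (the learning rate is the 3D-setting value).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))   # placeholder net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

inputs, targets = torch.randn(128, 4), torch.randn(128, 1)             # dummy batch
for epoch in range(300):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)   # stands in for data + PDE loss
    loss.backward()
    optimizer.step()
    scheduler.step()                                         # lr *= 0.99 each epoch
```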
A.4 EXPERIMENTS: HEAT EQUATION
The dataset for the heat equation experiment was generated by numerically solving the heat equation with the finite difference method, specifically the Forward Time, Centered Space (FTCS) approximation (Recktenwald, 2004). We used Dirichlet boundary conditions in the form of zero temperature on a square boundary far away from the relevant domain. To make the setting slightly more difficult, these boundary conditions are not provided to the PINNs. Overall, the dataset is composed of 1000 training points, 1971120 test points and 492780 validation points. We made sure the training points contained enough information about the initial condition, i.e. we selected a sufficient number of points around the initial source of non-zero temperature. In contrast, validation and test points are taken uniformly in time and space. During the warm-up phase of the pdPINN training, collocation points were sampled uniformly; afterwards 90% of the samples were drawn from the particle density distribution, which is proportional to the modeled temperature. Collocation points were re-sampled every 500 epochs. Differently from the previous experiments, the employed architecture is a fully-connected two-layer neural network with 32 hidden units and tanh activations. The implementation is in PyTorch (Paszke et al., 2019), using the ADAM optimizer (Kingma & Ba, 2014) combined with an exponential learning rate scheduler which multiplies the learning rate by a factor of 0.9999 at each epoch, starting with a rate of 10⁻⁴ and decreasing it until reaching a minimum value of 10⁻⁵. Training was terminated through early stopping as soon as the validation R² did not improve for more than 3000 epochs.
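For reference, a compact FTCS update for generating such heat-equation data is sketched below; the grid size, diffusivity, step sizes, and initial hot region are illustrative and not the exact values used for the dataset.

```python
# Forward Time, Centered Space (FTCS) solver for the 2D heat equation with
# zero-temperature Dirichlet boundaries.
import numpy as np

def ftcs_heat(T0, alpha=1.0, dx=0.01, dt=2e-5, n_steps=10000):
    T = T0.copy()
    frames = [T.copy()]
    r = alpha * dt / dx ** 2                 # stability requires r <= 0.25 in 2D
    for _ in range(n_steps):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
        T = T + r * lap
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # Dirichlet: zero boundary
        frames.append(T.copy())
    return np.stack(frames)

T0 = np.zeros((100, 100))
T0[40:60, 40:60] = 1.0                       # non-zero initial temperature region
solution = ftcs_heat(T0)
```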
Additional results. Figure A.9 illustrates the test R² of the predicted T averaged over 20 different seeds. Error bars correspond to 95% confidence intervals for the mean estimate, based on 1000 bootstrap samples, while colors indicate the different PDE weights w2 explored. As in previous settings, we show that with few samples (16) the regularization enforced by the PDE loss is not strong enough, leading to comparable results in both approaches (as expected). Hence PINNs and pdPINNs show similar results in this regime. However, as the number of samples increases (32-64-128-256), the PDE loss enforced by the proposed pdPINNs quickly and steadily outperforms uniform sampling. Lastly, we also verified that in the limit of many samples (512-1024) the two sampling strategies converge, as in such a low-dimensional domain the uniform samples fully and densely cover the considered area. This, again, is in line with the observed results of the other experiments.
A.5 EXPERIMENTS: FOKKER-PLANCK EQUATIONS IN TENSORFLOW
Within the Fokker-Planck experiment we showcase the different training behaviors of uniform sampling, RAR, and multiple MCMC samplers. Due to the low dimensionality of the problem, we additionally consider an Inverse-Transform (IT) sampler (Steele, 1987) for efficiently sampling from the density. The IT sampler relies on the empirical cdf estimated from uniform samples drawn over the whole domain. This method does not require building up a Markov chain and is thus very fast, but it only works well in low dimensions.
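Our reading of this IT sampler amounts to resampling uniform proposals via their empirical cdf, roughly as follows; the function names and the clipping of negative density values are our own choices.

```python
# Inverse-transform-style sampling from the predicted density: weight uniform
# proposals by their density, build the empirical cdf, and resample.
import numpy as np

def inverse_transform_sample(density_fn, sample_uniform, n_proposals, n_samples, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    proposals = sample_uniform(n_proposals)             # uniform points over the domain
    weights = np.clip(density_fn(proposals), 0.0, None) # non-negative (unnormalized) density
    cdf = np.cumsum(weights) / weights.sum()            # empirical cdf over proposals
    idx = np.searchsorted(cdf, rng.uniform(size=n_samples))
    return proposals[idx]
```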
More specifically, we compare the following methods for selecting collocation points, with a highly efficient implementation of the MCMC methods provided by TensorFlow Probability:
I.) Uniform sampling
II.) Residual Adaptive Refinement (Lu et al., 2021)
III.) pdPINN with Inverse-Transform (IT) sampling (Steele, 1987)
IV.) pdPINN with Metropolis-Hastings (MH) MC with parallel tempering (Earl & Deem, 2005)
V.) pdPINN with Hamiltonian MC (HMC) with parallel tempering (Earl & Deem, 2005) and dual averaging step-size adaptation (Hoffman et al., 2014, section 3.2)
A.5.1 SETTING AND ANALYTICAL SOLUTION
We consider the following setting over the time interval [t0, tn] = [−1, 1] with drift function µ, noise σ and initial particle positions p(x|t = t0) given by
$$\mu(X_t, t) = \mu(t) = \sin(10t) \qquad (23)$$
$$\sigma(X_t, t) = \sigma = 0.06 \qquad (24)$$
$$p(x \mid t = t_0) = \mathcal{N}\!\left(0,\; 0.02^2 \cdot I_d\right) \qquad (25)$$
The PDE has an analytical solution (cf. Särkkä & Solin (2019)) which is given by
$$p(x \mid t) = \mathcal{N}\!\left(\mu_s(t), \sigma_s^2(t)\right) \qquad (26)$$
$$p(t) = \mathcal{U}(t_0, t_n) \qquad (27)$$
$$\mu_s(t) = -\frac{\cos(10t)}{10} + \frac{\cos(10)}{10} \qquad (28)$$
$$\sigma_s^2(t) = 0.0036\,t + 0.004. \qquad (29)$$
For evaluating the deviation of our prediction from the analytical solution, we estimate the KL divergence between the analytical solution and the network approximation, KL(p(x, t) || p̂Θ(x, t)), by sampling 10000 points from the true p(x, t).
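Concretely, this Monte Carlo estimate can be computed as in the following sketch, where `log_p_hat(t, x)` is assumed to return the log of the (normalized) network density and the analytical moments follow Eqs. 26–29.

```python
# Monte Carlo estimate of KL(p || p_hat): sample (t, x) from the analytical
# solution and average the log-ratio.
import numpy as np

def kl_estimate(log_p_hat, n=10000, t0=-1.0, tn=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(t0, tn, size=n)
    mu_s = -np.cos(10.0 * t) / 10.0 + np.cos(10.0) / 10.0     # Eq. 28
    var_s = 0.0036 * t + 0.004                                 # Eq. 29
    x = rng.normal(mu_s, np.sqrt(var_s))                       # samples from p(x | t)
    # log p(t, x) = log N(x; mu_s, var_s) + log U(t; t0, tn)
    log_p = (-0.5 * np.log(2.0 * np.pi * var_s)
             - (x - mu_s) ** 2 / (2.0 * var_s)
             - np.log(tn - t0))
    return np.mean(log_p - log_p_hat(t, x))
```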
A.5.2 SETUP
We use a SIREN network and additionally sample 5000 collocation points at the initial time-step, which is the default behavior of DeepXDE. An overview of the architecture and training details is given in Table A.2. Experiments were performed with an NVIDIA GeForce RTX 2080 Ti and an Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz processor.
A.5.3 WALL TIME
The wall times for the different methods are provided in Figure A.10. Although Metropolis-Hastings and Hamiltonian Monte Carlo require more time per step compared to uniform sampling, the inverse-transform sampler achieves a speed similar to uniform sampling.
| 1. What is the focus of the paper regarding sampling methods for particle positions?
2. What are the claimed advantages of the proposed approach compared to traditional methods?
3. Do you have any concerns or difficulties in understanding the author's motivation and solution?
4. How would you assess the clarity and quality of the paper's content?
5. Are there any novel aspects or ideas presented in the paper?
6. Can you reproduce the results of the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose to sample directly from the distribution over the particle positions, eliminating boundaries while adaptively focusing on the most relevant regions. This appears to yield higher sample efficiency and improved performance of PINNs.
Strengths And Weaknesses
Weaknesses: I do not understand the authors' motivation or what problem they are trying to solve. Does mesh-free fluid dynamics really work?
Clarity, Quality, Novelty And Reproducibility
The paper is not presented clearly. I have not found any novelty.